research_field_id,research_field_label,paper_id,paper_title_x,statement_id,subject_id,predicate_label,object_id,object_label,paper_abstract,object_in_abstract,ner
R77,Animal Sciences,R44429,Pharmacokinetics of levetiracetam after oral and intravenous administration of a single dose to clinically normal cats,S135630,R44504,AED evaluated,L82871,LEV,"OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean ± SD peak concentration was 25.54 ± 7.97 μg/mL. The mean y-intercept for i.v. administration was 37.52 ± 6.79 μg/mL. Half-life (harmonic mean ± pseudo-SD) was 2.95 ± 0.95 hours and 2.86 ± 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 ± 0.09 L/kg, and mean clearance was 2.0 ± 0.60 mL/kg/min. Mean oral bioavailability was 102 ± 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 μg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.",TRUE,acronym
R114008,Applied Physics,R137447,Spectroscopic Investigation of a Microwave-Generated Atmospheric Pressure Plasma Torch,S543903,R137449,Excitation_type,L383016,GHz,"The investigated new microwave plasma torch is based on an axially symmetric resonator. Microwaves of a frequency of 2.45 GHz are resonantly fed into this cavity resulting in a sufficiently high electric field to ignite plasma without any additional igniters as well as to maintain stable plasma operation. Optical emission spectroscopy was carried out to characterize a humid air plasma. OH‐bands were used to determine the gas rotational temperature Trot while the electron temperature was estimated by a Boltzmann plot of oxygen lines. Maximum temperatures of Trot of about 3600 K and electron temperatures of 5800 K could be measured. The electron density ne was estimated to ne ≈ 3 · 10^20 m^-3 by using Saha's equation. Parametric studies in dependence of the gas flow and the supplied microwave power revealed that the maximum temperatures are independent of these parameters. However, the volume of the plasma increases with increasing microwave power and with a decrease of the gas flow. Considerations using collision frequencies, energy transfer times and power coupling provide an explanation of the observed phenomena: The optimal microwave heating is reached for electron‐neutral collision frequencies νen being near to the angular frequency of the wave ω (© 2012 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)",TRUE,acronym
R114008,Applied Physics,R137450,Modeling of microwave-induced plasma in argon at atmospheric pressure,S543921,R137452,Excitation_type,L383030,GHz,"A two-dimensional model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. The model describes in a self-consistent manner the gas flow and heat transfer, the in-coupling of the microwave energy into the plasma, and the reaction kinetics relevant to high-pressure argon plasma including the contribution of molecular ion species. The model provides the gas and electron temperature distributions, the electron, ion, and excited state number densities, and the power deposited into the plasma for given gas flow rate and temperature at the inlet, and input power of the incoming TEM microwave. For flow rate and absorbed microwave power typical for analytical applications (200-400 ml/min and 20 W), the plasma is far from thermodynamic equilibrium. The gas temperature reaches values above 2000 K in the plasma region, while the electron temperature is about 1 eV. The electron density reaches a maximum value of about 4 × 10^21 m^-3. The balance of the charged particles is essentially controlled by the kinetics of the molecular ions. For temperatures above 1200 K, quasineutrality of the plasma is provided by the atomic ions, and below 1200 K the molecular ion density exceeds the atomic ion density and a contraction of the discharge is observed. Comparison with experimental data is presented which demonstrates good quantitative and qualitative agreement.",TRUE,acronym
R114008,Applied Physics,R137453,Integrated Microwave Atmospheric Plasma Source (IMAPlaS): thermal and spectroscopic properties and antimicrobial effect onB. atrophaeusspores,S543939,R137455,Excitation_type,L383044,GHz,"The Integrated Microwave Atmospheric Plasma Source (IMAPlaS) operating with a microwave resonator at 2.45 GHz driven by a solid-state transistor oscillator generates a core plasma of high temperature (T > 1000 K), therefore producing reactive species such as NO very effectively. The effluent of the plasma source is much colder, which enables direct treatment of thermolabile materials or even living tissue. In this study the source was operated with argon, helium and nitrogen with gas flow rates between 0.3 and 1.0 slm. Depending on working gas and distance, axial gas temperatures between 30 and 250 °C were determined in front of the nozzle. Reactive species were identified by emission spectroscopy in the spectral range from vacuum ultraviolet to near infrared. The irradiance in the ultraviolet range was also measured. Using B. atrophaeus spores to test antimicrobial efficiency, we determined log10-reduction rates of up to a factor of 4.",TRUE,acronym
R114008,Applied Physics,R137419,The influence of the geometry and electrical characteristics on the formation of the atmospheric pressure plasma jet,S543709,R137421,Excitation_type,L382860,kHz,"An extensive electrical study was performed on a coaxial geometry atmospheric pressure plasma jet source in helium, driven by 30 kHz sine voltage. Two modes of operation were observed, a highly reproducible low-power mode that features the emission of one plasma bullet per voltage period and an erratic high-power mode in which micro-discharges appear around the grounded electrode. The minimum of power transfer efficiency corresponds to the transition between the two modes. Effective capacitance was identified as a varying property influenced by the discharge and the dissipated power. The charge carried by plasma bullets was found to be a small fraction of charge produced in the source irrespective of input power and configuration of the grounded electrode. The biggest part of the produced charge stays localized in the plasma source and below the grounded electrode, in the range 1.2–3.3 nC for ground length of 3–8 mm.",TRUE,acronym
R133,Artificial Intelligence,R6395,CASIA@V2: a MLN-based question answering system over linked data,S7542,R6396,implementation,R6397,CASIA,"We present a question answering system (CASIA@V2) over Linked Data (DBpedia), which translates natural language questions into structured queries automatically. Existing systems usually adopt a pipeline framework, which contains four major steps: 1) Decomposing the question and detecting candidate phrases; 2) mapping the detected phrases into semantic items of Linked Data; 3) grouping the mapped semantic items into semantic triples; and 4) generating the rightful SPARQL query. We present a jointly learning framework using Markov Logic Network (MLN) for phrase detection, phrases mapping to semantic items and semantic items grouping. We formulate the knowledge for resolving the ambiguities in three steps of QALD as first-order logic clauses in a MLN. We evaluate our approach on QALD-4 test dataset and achieve an F-measure score of 0.36, an average precision of 0.32 and an average recall of 0.40 over 50 questions.",TRUE,acronym
R133,Artificial Intelligence,R69630,Deep knowledge-aware network for news recommendation,S330782,R69631,Machine Learning Method,R69545,CNN,"Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users' diverse interests, we also design an attention module in DKN to dynamically aggregate a user's history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.",TRUE,acronym
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S555996,R139423,Class hierarchy extraction/learning,R123828,DTD,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,acronym
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S555998,R139423,Concepts extraction/learning,R123828,DTD,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,acronym
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S556001,R139423,Input format,R123828,DTD,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,acronym
R133,Artificial Intelligence,R139907,Automatic transforming XML documents into OWL Ontology,S558507,R139908,Input format,R123828,DTD,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,acronym
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S556011,R139423,Properties hierarchy extraction/learning,R123828,DTD,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,acronym
R133,Artificial Intelligence,R74198,Duluth at SemEval-2021 Task 11: Applying deBERTa to contributing sentence selection and dependency parsing for entity extraction,S340743,R74200,Team Name,R74201,DULUTH,"This paper describes the Duluth system that participated in SemEval-2021 Task 11, NLP Contribution Graph. It details the extraction of contribution sentences and scientific entities and their relations from scholarly articles in the domain of Natural Language Processing. Our solution uses deBERTa for multi-class sentence classification to extract the contributing sentences and their type, and dependency parsing to outline each sentence and extract subject-predicate-object triples. Our system ranked fifth of seven for Phase 1: end-to-end pipeline, sixth of eight for Phase 2 Part 1: phrases and triples, and fifth of eight for Phase 2 Part 2: triples extraction.",TRUE,acronym
R133,Artificial Intelligence,R75305,Extracting ontological knowledge from Java source code using Hidden Markov Models,S546310,R75307,Extraction methods,R68906,HMM,"Abstract Ontologies have become a key element since many decades in information systems such as in epidemiological surveillance domain. Building domain ontologies requires the access to domain knowledge owned by domain experts or contained in knowledge sources. However, domain experts are not always available for interviews. Therefore, there is a lot of value in using ontology learning which consists in automatic or semi-automatic extraction of ontological knowledge from structured or unstructured knowledge sources such as texts, databases, etc. Many techniques have been used but they all are limited in concepts, properties and terminology extraction leaving behind axioms and rules. Source code which naturally embed domain knowledge is rarely used. In this paper, we propose an approach based on Hidden Markov Models (HMMs) for concepts, properties, axioms and rules learning from Java source code. This approach is experimented with the source code of EPICAM, an epidemiological platform developed in Java and used in Cameroon for tuberculosis surveillance. Domain experts involved in the evaluation estimated that knowledge extracted was relevant to the domain. In addition, we performed an automatic evaluation of the relevance of the terms extracted to the medical domain by aligning them with ontologies hosted on Bioportal platform through the Ontology Recommender tool. The results were interesting since the terms extracted were covered at 82.9% by many biomedical ontologies such as NCIT, SNOWMEDCT and ONTOPARON.",TRUE,acronym
R133,Artificial Intelligence,R6733,A Statistical Approach for Automatic Text Summarization by Extraction,S8936,R6734,implementation,R6735,KSRS,"Automatic Document Summarization is a highly interdisciplinary research area related with computer science as well as cognitive psychology. This Summarization is to compress an original document into a summarized version by extracting almost all of the essential concepts with text mining techniques. This research focuses on developing a statistical automatic text summarization approach, K-mixture probabilistic model, to enhancing the quality of summaries. KSRS employs the K-mixture probabilistic model to establish term weights in a statistical sense, and further identifies the term relationships to derive the semantic relationship significance (SRS) of nouns. Sentences are ranked and extracted based on their semantic relationship significance values. The objective of this research is thus to propose a statistical approach to text summarization. We propose a K-mixture semantic relationship significance (KSRS) approach to enhancing the quality of document summary results. The K-mixture probabilistic model is used to determine the term weights. Term relationships are then investigated to develop the semantic relationship of nouns that manifests sentence semantics. Sentences with significant semantic relationship, nouns are extracted to form the summary accordingly.",TRUE,acronym
R133,Artificial Intelligence,R140624,SemEval-2012 Task 5: Chinese Semantic Dependency Parsing,S570377,R140626,Evaluation metrics,R142058,LAS,"The paper presents the SemEval-2012 Shared Task 5: Chinese Semantic Dependency Parsing. The goal of this task is to identify the dependency structure of Chinese sentences from the semantic view. We firstly introduce the motivation of providing Chinese semantic dependency parsing task, and then describe the task in detail including data preparation, data format, task evaluation, and so on. Over ten thousand sentences were labeled for participants to train and evaluate their systems. At last, we briefly describe the submitted systems and analyze these results.",TRUE,acronym
R133,Artificial Intelligence,R6629,The University of Michigan at DUC 2004,S8477,R6630,implementation,R6632,MEAD,"We present the results of Michigan’s participation in DUC 2004. Our system, MEAD, ranked as one of the top systems in four of the five tasks. We introduce our new feature, LexPageRank, a new measure of sentence centrality inspired by the prestige concept in social networks. LexPageRank gave promising results in multi-document summarization. Our approach for Task 5, biographical summarization, was simplistic, yet successful. We used regular expression matching to boost up the scores of the sentences that are likely to contain biographical information patterns.",TRUE,acronym
R133,Artificial Intelligence,R6657,MSBGA: A Multi-Document Summarization System Based on Genetic Algorithm,S8597,R6658,implementation,R6660,MSBGA,"The multi-document summarizer using genetic algorithm-based sentence extraction (MSBGA) regards summarization process as an optimization problem where the optimal summary is chosen among a set of summaries formed by the conjunction of the original articles sentences. To solve the NP hard optimization problem, MSBGA adopts genetic algorithm, which can choose the optimal summary on global aspect. The evaluation function employs four features according to the criteria of a good summary: satisfied length, high coverage, high informativeness and low redundancy. To improve the accuracy of term frequency, MSBGA employs a novel method TFS, which takes word sense into account while calculating term frequency. The experiments on DUC04 data show that our strategy is effective and the ROUGE-1 score is only 0.55% lower than the best participant in DUC04",TRUE,acronym
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555914,R138461,Output format,R139373,OWL,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,acronym
R133,Artificial Intelligence,R139415,A GRAPH-BASED TOOL FOR THE TRANSLATION OF XML DATA TO OWL-DL ONTOLOGIES: ,S555964,R139417,Output format,R139373,OWL,"Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.",TRUE,acronym
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S556006,R139423,Output format,R139373,OWL,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,acronym
R133,Artificial Intelligence,R139451,Mapping XML to OWL Ontologies,S556188,R139453,Output format,R139373,OWL,"By now, XML has reached a wide acceptance as data exchange format in E-Business. An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.",TRUE,acronym
R133,Artificial Intelligence,R140371,An Approach For Transforming of Relational Databases to OWL Ontology,S560305,R140373,Output format,R139373,OWL,"Rapid growth of documents, web pages, and other types of text content is a huge challenge for the modern content management systems. One of the problems in the areas of information storage and retrieval is the lacking of semantic data. Ontologies can present knowledge in sharable and repeatedly usable manner and provide an effective way to reduce the data volume overhead by encoding the structure of a particular domain. Metadata in relational databases can be used to extract ontology from database in a special domain. According to solve the problem of sharing and reusing of data, approaches based on transforming relational database to ontology are proposed. In this paper we propose a method for automatic ontology construction based on relational database. Mining and obtaining further components from relational database leads to obtain knowledge with high semantic power and more expressiveness. Triggers are one of the database components which could be transformed to the ontology model and increase the amount of power and expressiveness of knowledge by presenting part of the knowledge dynamically",TRUE,acronym
R133,Artificial Intelligence,R140383,Automatic Constructing OWL Ontology from Relational Database Schema: ,S560345,R140385,Output format,R139373,OWL,"In this paper we present a new tool, called DB_DOOWL, for creating domain ontology from relational database schema (RDBS). In contrast with existing transformation approaches, we propose a generic solution based on automatic instantiation of a specified meta-ontology. This later is an owl ontology which describes any database structure. A prototype of our proposed tool is implemented based on Jena in Java in order to demonstrate its feasibility.",TRUE,acronym
R133,Artificial Intelligence,R140398,Learning ontology from relational database,S560385,R140400,Output format,R139373,OWL,"Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.",TRUE,acronym
R133,Artificial Intelligence,R6350,Description of the POMELO System for the Task 2 of QALD-2014,S7310,R6351,implementation,R6352,POMELO,"In this paper, we present the POMELO system developed for participating in the task 2 of the QALD-4 challenge. For translating natural language questions in SPARQL queries we exploit Natural Language Processing methods, semantic resources and RDF triples description. We designed a four-step method which pre-processes the question, performs an abstraction of the question, then builds a representation of the SPARQL query and finally generates the query. The system was ranked second out of three participating systems. It achieves good performance with 0.85 F-measure on the set of 25 test questions.",TRUE,acronym
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555915,R138461,Output format,R139414,RDF,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,acronym
R133,Artificial Intelligence,R69637,Improving sequential recommendation with knowledge-enhanced memory networks,S330836,R69638,Machine Learning Method,R69591,RNN,"With the revival of neural networks, many studies try to adapt powerful sequential neural models, i.e., Recurrent Neural Networks (RNN), to sequential recommendation. RNN-based networks encode historical interaction records into a hidden state vector. Although the state vector is able to encode sequential dependency, it still has limited representation power in capturing complicated user preference. It is difficult to capture fine-grained user preference from the interaction sequence. Furthermore, the latent vector representation is usually hard to understand and explain. To address these issues, in this paper, we propose a novel knowledge enhanced sequential recommender. Our model integrates the RNN-based networks with Key-Value Memory Network (KV-MN). We further incorporate knowledge base (KB) information to enhance the semantic representation of KV-MN. RNN-based models are good at capturing sequential user preference, while knowledge-enhanced KV-MNs are good at capturing attribute-level user preference. By using a hybrid of RNNs and KV-MNs, it is expected to be endowed with both benefits from these two components. The sequential preference representation together with the attribute-level preference representation are combined as the final representation of user preference. With the incorporation of KB information, our model is also highly interpretable. To our knowledge, it is the first time that sequential recommender is integrated with external memories by leveraging large-scale KB information.",TRUE,acronym
R133,Artificial Intelligence,R6316,A HMM-based approach to question answering against linked data,S7124,R6317,implementation,R6318,RTV,"In this paper, we present a QA system enabling NL questions against Linked Data, designed and adopted by the Tor Vergata University AI group in the QALD-3 evaluation. The system integrates lexical semantic modeling and statistical inference within a complex architecture that decomposes the NL interpretation task into a cascade of three different stages: (1) The selection of key ontological information from the question (i.e. predicate, arguments and properties), (2) the location of such salient information in the ontology through the joint disambiguation of the different candidates and (3) the compilation of the final SPARQL query. This architecture characterizes a novel approach for the task and exploits a graphical model (i.e. an Hidden Markov Model) to select the proper ontological triples according to the graph nature of RDF. In particular, for each query an HMM model is produced whose Viterbi solution is the comprehensive joint disambiguation across the sentence elements. The combination of these approaches achieved interesting results in the QALD competition. The RTV is in fact within the group of participants performing slightly below the best system, but with smaller requirements and on significantly poorer input information.",TRUE,acronym
R133,Artificial Intelligence,R6567,Automated text summarization and the SUMMARIST system,S8220,R6568,implementation,R6570,SUMMARIST,"This paper consists of three parts: a preliminary typology of summaries in general; a description of the current and planned modules and performance of the SUMMARIST automated multilingual text summarization system being built at ISI, and a discussion of three methods to evaluate summaries.",TRUE,acronym
R133,Artificial Intelligence,R6575,Generating Natural Language Summaries from Multiple On-Line Sources,S8249,R6576,implementation,R6577,SUMMONS,"We present a methodology for summarization of news about current events in the form of briefings that include appropriate background (historical) information. The system that we developed, SUMMONS, uses the output of systems developed for the DARPA Message Understanding Conferences to generate summaries of multiple documents on the same or related events, presenting similarities and differences, contradictions, and generalizations among sources of information. We describe the various components of the system, showing how information from multiple articles is combined, organized into a paragraph, and finally, realized as English sentences. A feature of our work is the extraction of descriptions of entities such as people and places for reuse to enhance a briefing.",TRUE,acronym
R133,Artificial Intelligence,R6705,TIARA: A Visual Exploratory Text Analytic System,S8814,R6706,implementation,R6708,TIARA,"In this paper, we present a novel exploratory visual analytic system called TIARA (Text Insight via Automated Responsive Analytics), which combines text analytics and interactive visualization to help users explore and analyze large collections of text. Given a collection of documents, TIARA first uses topic analysis techniques to summarize the documents into a set of topics, each of which is represented by a set of keywords. In addition to extracting topics, TIARA derives time-sensitive keywords to depict the content evolution of each topic over time. To help users understand the topic-based summarization results, TIARA employs several interactive text visualization techniques to explain the summarization results and seamlessly link such results to the original text. We have applied TIARA to several real-world applications, including email summarization and patient record analysis. To measure the effectiveness of TIARA, we have conducted several experiments. Our experimental results and initial user feedback suggest that TIARA is effective in aiding users in their exploratory text analytic tasks.",TRUE,acronym
R133,Artificial Intelligence,R6743,UWN: A Large Multilingual Lexical Knowledge Base,S8981,R6744,implementation,R6746,UWN,"We present UWN, a large multilingual lexical knowledge base that describes the meanings and relationships of words in over 200 languages. This paper explains how link prediction, information integration and taxonomy induction methods have been used to build UWN based on WordNet and extend it with millions of named entities from Wikipedia. We additionally introduce extensions to cover lexical relationships, frame-semantic knowledge, and language data. An online interface provides human access to the data, while a software API enables applications to look up over 16 million words and names.",TRUE,acronym
R133,Artificial Intelligence,R139415,A GRAPH-BASED TOOL FOR THE TRANSLATION OF XML DATA TO OWL-DL ONTOLOGIES: ,S555961,R139417,Input format,R139408,XSD,"Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.",TRUE,acronym
R133,Artificial Intelligence,R139899,Building ontologies from XML data sources,S558490,R139900,Approaches,R139913,X2OWL,"In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.",TRUE,acronym
R133,Artificial Intelligence,R140383,Automatic Constructing OWL Ontology from Relational Database Schema: ,S560334,R140385,Learning tool,R140386,DB_DOOWL,"In this paper we present a new tool, called DB_DOOWL, for creating domain ontology from relational database schema (RDBS). In contrast with existing transformation approaches, we propose a generic solution based on automatic instantiation of a specified meta-ontology. This latter is an OWL ontology which describes any database structure. A prototype of our proposed tool is implemented based on Jena in Java in order to demonstrate its feasibility.",TRUE,acronym
R133,Artificial Intelligence,R6665,FEMsum at DUC 2007,S8625,R6667,implementation,R6668,FEMsum,"This paper describes and analyzes how the FEMsum system deals with DUC 2007 tasks of providing summary-length answers to complex questions, both background and just-the-news summaries. We participated in producing background summaries for the main task with the FEMsum approach that obtained better results in our last year participation. The FEMsum semantic based approach was adapted to deal with the update pilot task with the aim of producing just-the-news summaries.",TRUE,acronym
R133,Artificial Intelligence,R6673,GOFAIsum: a symbolic summarizer for DUC,S8665,R6674,implementation,R6676,GOFAIsum,"This article presents GOFAISUM, a topicanswering and summarizing system developed for the main task of DUC 2007 by the Université de Montréal and the Université de Genève. We chose to use an all-symbolic approach, the only source of linguistic knowledge being FIPS, a multilingual syntactic parser. We further attempted to innovate by using XML and XSLT to both represent FIPS’s analysis trees and to manipulate them to create summaries. We relied on tf·idf -like scores to ensure relevance to the topic and on syntactic pruning to achieve conciseness and grammaticality. NIST evaluation metrics show that our system performs well with respect to summary responsiveness and linguistic quality, but could be improved to reduce redundancy.",TRUE,acronym
R133,Artificial Intelligence,R6271,Answering natural language questions with Intui3,S6874,R6272,implementation,R6273,Intui3,"Intui3 is one of the participating systems at the fourth evaluation campaign on multilingual question answering over linked data, QALD4. The system accepts as input a question formulated in natural language (in English), and uses syntactic and semantic information to construct its interpretation with respect to a given database of RDF triples (in this case DBpedia 3.9). The interpretation is mapped to the corresponding SPARQL query, which is then run against a SPARQL endpoint to retrieve the answers to the initial question. Intui3 competed in the challenge called Task 1: Multilingual question answering over linked data, which offered 200 training questions and 50 test questions in 7 different languages. It obtained an F-measure of 0.24 by providing a correct answer to 10 of the test questions and a partial answer to 4 of them.",TRUE,acronym
R133,Artificial Intelligence,R6685,Personalized PageRank Based Multi-document Summarization,S8717,R6686,implementation,R6688,PPRSum,"This paper presents a novel multi-document summarization approach based on personalized pagerank (PPRSum). In this algorithm, we uniformly integrate various kinds of information in the corpus. At first, we train a salience model of sentence global features based on Naive Bayes Model. Secondly, we generate a relevance model for each corpus utilizing the query of it. Then, we compute the personalized prior probability for each sentence in the corpus utilizing the salience model and the relevance model both. With the help of personalized prior probability, a Personalized PageRank ranking process is performed depending on the relationships among all sentences in the corpus. Additionally, the redundancy penalty is imposed on each sentence. The summary is produced by choosing the sentences with both high query-focused information richness and high information novelty. Experiments on DUC2007 are performed and the ROUGE evaluation results show that PPRSum ranks between the 1st and the 2nd systems on DUC2007 main task.",TRUE,acronym
R133,Artificial Intelligence,R6303,QAKiS: an open domain QA system based on relational patterns.,S7058,R6304,implementation,R6305,QAKiS,"We present QAKiS, a system for open domain Question Answering over linked data. It addresses the problem of question interpretation as a relation-based match, where fragments of the question are matched to binary relations of the triple store, using relational textual patterns automatically collected. For the demo, the relational patterns are automatically extracted from Wikipedia, while DBpedia is the RDF data set to be queried using a natural language interface.",TRUE,acronym
R133,Artificial Intelligence,R6319,QAnswer-enhanced entity matching for question answering over linked data,S7140,R6320,implementation,R6321,QAnswer,"QAnswer is a question answering system that uses DBpedia as a knowledge base and converts natural language questions into a SPARQL query. In order to improve the match between entities and relations and natural language text, we make use of Wikipedia to extract lexicalizations of the DBpedia entities and then match them with the question. These entities are validated on the ontology, while missing ones can be inferred. The proposed system was tested in the QALD-5 challenge and it obtained a F1 score of 0.30, which placed QAnswer in the second position in the challenge, despite the fact that the system used only a small subset of the properties in DBpedia, due to the long extraction process.",TRUE,acronym
R133,Artificial Intelligence,R69294,The SOFC-Exp Corpus and Neural Approaches to Information Extraction in the Materials Science Domain,S328937,R69295,dataset,L239663,SOFC-Exp,"This paper presents a new challenging information extraction task in the domain of materials science. We develop an annotation scheme for marking information on experiments related to solid oxide fuel cells in scientific publications, such as involved materials and measurement conditions. With this paper, we publish our annotation guidelines, as well as our SOFC-Exp corpus consisting of 45 open-access scholarly articles annotated by domain experts. A corpus and an inter-annotator agreement study demonstrate the complexity of the suggested named entity recognition and slot filling tasks as well as high annotation quality. We also present strong neural-network based models for a variety of tasks that can be addressed on the basis of our new data set. On all tasks, using BERT embeddings leads to large performance gains, but with increasing task complexity, adding a recurrent neural network on top seems beneficial. Our models will serve as competitive baselines in future work, and analysis of their performance highlights difficult cases when modeling the data and suggests promising research directions.",TRUE,acronym
R133,Artificial Intelligence,R74194,YNU-HPCC at SemEval-2021 Task 11: Using a BERT Model to Extract Contributions from NLP Scholarly Articles,S340727,R74196,Team Name,R74197,YNU-HPCC,"This paper describes the system we built as the YNU-HPCC team in the SemEval-2021 Task 11: NLPContributionGraph. This task involves first identifying sentences in the given natural language processing (NLP) scholarly articles that reflect research contributions through binary classification; then identifying the core scientific terms and their relation phrases from these contribution sentences by sequence labeling; and finally, these scientific terms and relation phrases are categorized, identified, and organized into subject-predicate-object triples to form a knowledge graph with the help of multiclass classification and multi-label classification. We developed a system for this task using a pre-trained language representation model called BERT that stands for Bidirectional Encoder Representations from Transformers, and achieved good results. The average F1-score for Evaluation Phase 2, Part 1 was 0.4562 and ranked 7th, and the average F1-score for Evaluation Phase 2, Part 2 was 0.6541, and also ranked 7th.",TRUE,acronym
R133,Artificial Intelligence,R70188,Template-based Question Answering using Recursive Neural Networks,S333432,R70189,has research problem,R68944,DBPedia,"Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.",TRUE,acronym
R175,"Atomic, Molecular and Optical Physics",R108936,Absolute vacuum ultraviolet flux in inductively coupled plasmas and chemical modifications of 193 nm photoresist,S568690,R141777,Plasma_discharge,L399222,ICP,"Vacuum ultraviolet (VUV) photons in plasma processing systems are known to alter surface chemistry and may damage gate dielectrics and photoresist. We characterize absolute VUV fluxes to surfaces exposed in an inductively coupled argon plasma, 1–50 mTorr, 25–400 W, using a calibrated VUV spectrometer. We also demonstrate an alternative method to estimate VUV fluence in an inductively coupled plasma (ICP) reactor using a chemical dosimeter-type monitor. We illustrate the technique with argon ICP and xenon lamp exposure experiments, comparing direct VUV measurements with measured chemical changes in 193 nm photoresist-covered Si wafers following VUV exposure.",TRUE,acronym
R175,"Atomic, Molecular and Optical Physics",R108946,Quantification of the VUV radiation in low pressure hydrogen and nitrogen plasmas,S568565,R141767,Plasma_discharge,L399117,ICP,"Hydrogen and nitrogen containing discharges emit intense radiation in a broad wavelength region in the VUV. The measured radiant power of individual molecular transitions and atomic lines between 117 nm and 280 nm are compared to those obtained in the visible spectral range and moreover to the RF power supplied to the ICP discharge. In hydrogen plasmas driven at 540 W of RF power up to 110 W are radiated in the VUV, whereas less than 2 W is emitted in the VIS. In nitrogen plasmas the power level of about 25 W is emitted both in the VUV and in the VIS. In hydrogen–nitrogen mixtures, the NH radiation increases the VUV amount. The analysis of molecular and atomic hydrogen emission supported by a collisional radiative model allowed determining plasma parameters and particle densities and thus particle fluxes. A comparison of the fluxes showed that the photon fluxes determined from the measured emission are similar to the ion fluxes, whereas the atomic hydrogen fluxes are by far dominant. Photon fluxes up to 5 × 1020 m−2 s−1 are obtained, demonstrating that the VUV radiation should not be neglected in surface modifications processes, whereas the radiant power converted to VUV photons is to be considered in power balances. Varying the admixture of nitrogen to hydrogen offers a possibility to tune photon fluxes in the respective wavelength intervals.",TRUE,acronym
R175,"Atomic, Molecular and Optical Physics",R141452,HBr Plasma Treatment Versus VUV Light Treatment to Improve 193 nm Photoresist Pattern Linewidth Roughness,S568669,R141775,Feed_gases,L399205,HBr,"We have studied the impact of HBr plasma treatment and the role of the VUV light emitted by this plasma on the chemical modifications and resulting roughness of both blanket and patterned photoresists. The experimental results show that both treatments lead to similar resist bulk chemical modifications that result in a decrease of the resist glass transition temperature (Tg). This drop in Tg allows polymer chain rearrangement that favors surface roughness smoothening. The smoothening effect is mainly attributed to main chain scission induced by plasma VUV light. For increased VUV light exposure time, the crosslinking mechanism dominates over main chain scission and limits surface roughness smoothening. In the case of the HBr plasma treatment, the synergy between Bromine radicals and VUV light leads to the formation of dense graphitized layers on top and sidewalls surfaces of the resist pattern. The presence of a dense layer on the HBr cured resist sidewalls prevents from resist pattern reflowing but on the counter side leads to increased surface roughness and linewidth roughness compared to VUV light treatment.",TRUE,acronym
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342951,R74654,Material,R74659,T-cells,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,acronym
R104,Bioinformatics,R168508,ACME: Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell Membranes,S668281,R168510,creates,R166925,ACME,"The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. 
By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).",TRUE,acronym
R104,Bioinformatics,R168655,ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour,S668846,R168656,creates,R167018,ASPASIA,"A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model’s sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. 
ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.",TRUE,acronym
R104,Bioinformatics,R168655,ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour,S668850,R168658,deposits,R167019,ASPASIA,"A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model’s sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. 
ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.",TRUE,acronym
R104,Bioinformatics,R168539,BEAST 2: A Software Platform for Bayesian Evolutionary Analysis,S668397,R168540,deposits,R166943,BEAST,"We present a new open source, extensible and flexible software platform for Bayesian evolutionary analysis called BEAST 2. This software platform is a re-design of the popular BEAST 1 platform to correct structural deficiencies that became evident as the BEAST 1 software evolved. Key among those deficiencies was the lack of post-deployment extensibility. BEAST 2 now has a fully developed package management system that allows third party developers to write additional functionality that can be directly installed to the BEAST 2 analysis platform via a package manager without requiring a new software release of the platform. This package architecture is showcased with a number of recently published new models encompassing birth-death-sampling tree priors, phylodynamics and model averaging for substitution models and site partitioning. A second major improvement is the ability to read/write the entire state of the MCMC chain to/from disk allowing it to be easily shared between multiple instances of the BEAST software. This facilitates checkpointing and better support for multi-processor and high-end computing extensions. Finally, the functionality in new packages can be easily added to the user interface (BEAUti 2) by a simple XML template-based mechanism because BEAST 2 has been re-designed to provide greater integration between the analysis engine and the user interface so that, for example BEAST and BEAUti use exactly the same XML file format.",TRUE,acronym
R104,Bioinformatics,R169592,"Preliminary Observations of Population Genetics and Relatedness of the Broadnose Sevengill Shark, Notorynchus cepedianus, in Two Northeast Pacific Estuaries",S673157,R169601,uses,R167639,BOTTLENECK,"The broadnose sevengill shark, Notorynchus cepedianus, a common coastal species in the eastern North Pacific, was sampled during routine capture and tagging operations conducted from 2005–2012. One hundred and thirty three biopsy samples were taken during these research operations in Willapa Bay, Washington and in San Francisco Bay, California. Genotypic data from seven polymorphic microsatellites (derived from the related sixgill shark, Hexanchus griseus) were used to describe N. cepedianus genetic diversity, population structure and relatedness. Diversity within N. cepedianus was found to be low to moderate with an average observed heterozygosity of 0.41, expected heterozygosity of 0.53, and an average of 5.1 alleles per microsatellite locus. There was no evidence of a recent population bottleneck based on genetic data. Analyses of genetic differences between the two sampled estuaries suggest two distinct populations with some genetic mixing of sharks sampled during 2005–2006. Relatedness within sampled populations was high, with percent relatedness among sharks caught in the same area indicating 42.30% first-order relative relationships (full or half siblings). Estuary-specific familial relationships suggest that management of N. cepedianus on the U.S. West Coast should incorporate stock-specific management goals to conserve this ecologically important predator.",TRUE,acronym
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S535863,R135491,Used models,L377978,CNN,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,acronym
R104,Bioinformatics,R138756,Learning Spatial–Spectral–Temporal EEG Features With Recurrent 3D Convolutional Neural Networks for Cross-Task Mental Workload Assessment,S551439,R138761,Used models,R138746,CNN,"Mental workload assessment is essential for maintaining human health and preventing accidents. Most research on this issue is limited to a single task. However, cross-task assessment is indispensable for extending a pre-trained model to new workload conditions. Because brain dynamics are complex across different tasks, it is difficult to propose efficient human-designed features based on prior knowledge. Therefore, this paper proposes a concatenated structure of deep recurrent and 3D convolutional neural networks (R3DCNNs) to learn EEG features across different tasks without prior knowledge. First, this paper adds frequency and time dimensions to EEG topographic maps based on a Morlet wavelet transformation. Then, R3DCNN is proposed to simultaneously learn EEG features from the spatial, spectral, and temporal dimensions. The proposed model is validated based on the EEG signals collected from 20 subjects. This paper employs a binary classification of low and high mental workload across spatial n-back and arithmetic tasks. The results show that the R3DCNN achieves an average accuracy of 88.9%, which is a significant increase compared with that of the state-of-the-art methods. In addition, the visualization of the convolutional layers demonstrates that the deep neural network can extract detailed features. These results indicate that R3DCNN is capable of identifying the mental workload levels for cross-task conditions.",TRUE,acronym
R104,Bioinformatics,R138802,Assessing the severity of positive valence symptoms in initial psychiatric evaluation records: Should we use convolutional neural networks?,S551598,R138806,Used models,R138746,CNN,"Background and objective Efficiently capturing the severity of positive valence symptoms could aid in risk stratification for adverse outcomes among patients with psychiatric disorders and identify optimal treatment strategies for patient subgroups. Motivated by the success of convolutional neural networks (CNNs) in classification tasks, we studied the application of various CNN architectures and their performance in predicting the severity of positive valence symptoms in patients with psychiatric disorders based on initial psychiatric evaluation records. Methods Psychiatric evaluation records contain unstructured text and semi-structured data such as question–answer pairs. For a given record, we tokenise and normalise the semi-structured content. Pre-processed tokenised words are represented as one-hot encoded word vectors. We then apply different configurations of convolutional and max pooling layers to automatically learn important features from various word representations. We conducted a series of experiments to explore the effect of different CNN architectures on the classification of psychiatric records. Results Our best CNN model achieved a mean absolute error (MAE) of 0.539 and a normalized MAE of 0.785 on the test dataset, which is comparable to the other well-known text classification algorithms studied in this work. Our results also suggest that the normalisation step has a great impact on the performance of the developed models. Conclusions We demonstrate that normalisation of the semi-structured contents can improve the MAE among all CNN configurations. Without advanced feature engineering, CNN-based approaches can provide a comparable solution for classifying positive valence symptom severity in initial psychiatric evaluation records. 
Although word embedding is well known for its ability to capture relatively low-dimensional similarity between words, our experimental results show that pre-trained embeddings do not improve the classification performance. This phenomenon may be due to the inability of word embeddings to capture problem-specific contextual semantic information, implying that the quality of the employed embedding is critical for obtaining an accurate CNN model.",TRUE,acronym
R104,Bioinformatics,R138807,DeepBipolar: Identifying genomic mutations for bipolar disorder via deep learning,S551618,R138810,Used models,R138746,CNN,"Bipolar disorder, also known as manic depression, is a brain disorder that affects the brain structure of a patient. It results in extreme mood swings, severe states of depression, and overexcitement simultaneously. It is estimated that roughly 3% of the population of the United States (about 5.3 million adults) suffers from bipolar disorder. Recent research efforts like the Twin studies have demonstrated a high heritability factor for the disorder, making genomics a viable alternative for detecting and treating bipolar disorder, in addition to the conventional lengthy and costly postsymptom clinical diagnosis. Motivated by this study, leveraging several emerging deep learning algorithms, we design an end‐to‐end deep learning architecture (called DeepBipolar) to predict bipolar disorder based on limited genomic data. DeepBipolar adopts the Deep Convolutional Neural Network (DCNN) architecture that automatically extracts features from genotype information to predict the bipolar phenotype. We participated in the Critical Assessment of Genome Interpretation (CAGI) bipolar disorder challenge and DeepBipolar was considered the most successful by the independent assessor. In this work, we thoroughly evaluate the performance of DeepBipolar and analyze the type of signals we believe could have affected the classifier in distinguishing the case samples from the control set.",TRUE,acronym
R104,Bioinformatics,R138870,DepAudioNet: An Efficient Deep Model for Audio based Depression Classification,S551794,R138872,Used models,R138746,CNN,"This paper presents a novel and effective audio based method on depression classification. It focuses on two important issues, i.e., data representation and sample imbalance, which are not well addressed in literature. For the former one, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter one, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach.",TRUE,acronym
R104,Bioinformatics,R138927,DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings,S552038,R138929,Used models,R138746,CNN,"We propose DeepBreath, a deep learning model which automatically recognises people's psychological stress level (mental overload) from their breathing patterns. Using a low cost thermal camera, we track a person's breathing patterns as temperature changes around his/her nostril. The paper's technical contribution is threefold. First of all, instead of creating handcrafted features to capture aspects of the breathing patterns, we transform the uni-dimensional breathing signals into two dimensional respiration variability spectrogram (RVS) sequences. The spectrograms easily capture the complexity of the breathing dynamics. Second, a spatial pattern analysis based on a deep Convolutional Neural Network (CNN) is directly applied to the spectrogram sequences without the need of hand-crafting features. Finally, a data augmentation technique, inspired from solutions for over-fitting problems in deep learning, is applied to allow the CNN to learn with a small-scale dataset from short-term measurements (e.g., up to a few hours). The model is trained and tested with data collected from people exposed to two types of cognitive tasks (Stroop Colour Word Test, Mental Computation test) with sessions of different difficulty levels. Using normalised self-report as ground truth, the CNN reaches 84.59% accuracy in discriminating between two levels of stress and 56.52% in discriminating between three levels. In addition, the CNN outperformed powerful shallow learning methods based on a single layer neural network. Finally, the dataset of labelled thermal images will be open to the community.",TRUE,acronym
R104,Bioinformatics,R138931,DCNN and DNN based multi-modal depression recognition,S552057,R138933,Used models,R138746,CNN,"In this paper, we propose an audio visual multimodal depression recognition framework composed of deep convolutional neural network (DCNN) and deep neural network (DNN) models. For each modality, corresponding feature descriptors are input into a DCNN to learn high-level global features with compact dynamic information, which are then fed into a DNN to predict the PHQ-8 score. For multi-modal depression recognition, the predicted PHQ-8 scores from each modality are integrated in a DNN for the final prediction. In addition, we propose the Histogram of Displacement Range as a novel global visual descriptor to quantify the range and speed of the facial landmarks' displacements. Experiments have been carried out on the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) dataset for the Depression Sub-challenge of the Audio-Visual Emotion Challenge (AVEC 2016), results show that the proposed multi-modal depression recognition framework obtains very promising results on both the development set and test set, which outperforms the state-of-the-art results.",TRUE,acronym
R104,Bioinformatics,R138992,User-level psychological stress detection from social media using deep neural network,S552275,R138994,Used models,R138746,CNN,"It is of significant importance to detect and manage stress before it turns into severe problems. However, existing stress detection methods usually rely on psychological scales or physiological devices, making the detection complicated and costly. In this paper, we explore to automatically detect individuals' psychological stress via social media. Employing real online micro-blog data, we first investigate the correlations between users' stress and their tweeting content, social engagement and behavior patterns. Then we define two types of stress-related attributes: 1) low-level content attributes from a single tweet, including text, images and social interactions; 2) user-scope statistical attributes through their weekly micro-blog postings, leveraging information of tweeting time, tweeting types and linguistic styles. To combine content attributes with statistical attributes, we further design a convolutional neural network (CNN) with cross autoencoders to generate user-scope content attributes from low-level content attributes. Finally, we propose a deep neural network (DNN) model to incorporate the two types of user-scope attributes to detect users' psychological stress. We test the trained model on four different datasets from major micro-blog platforms including Sina Weibo, Tencent Weibo and Twitter. Experimental results show that the proposed model is effective and efficient on detecting psychological stress from micro-blog data. We believe our model would be useful in developing stress detection tools for mental health agencies and individuals.",TRUE,acronym
R104,Bioinformatics,R169612,Intrinsic Functional Connectivity in Salience and Default Mode Networks and Aberrant Social Processes in Youth at Ultra-High Risk for Psychosis,S673235,R169615,uses,R167646,CONN,"Social processes are key to navigating the world, and investigating their underlying mechanisms and cognitive architecture can aid in understanding disease states such as schizophrenia, where social processes are highly impacted. Evidence suggests that social processes are impaired in individuals at ultra high-risk for the development of psychosis (UHR). Understanding these phenomena in UHR youth may clarify disease etiology and social processes in a period that is characterized by significantly fewer confounds than schizophrenia. Furthermore, understanding social processing deficits in this population will help explain these processes in healthy individuals. The current study examined resting state connectivity of the salience (SN) and default mode networks (DMN) in association with facial emotion recognition (FER), an integral aspect of social processes, as well as broader social functioning (SF) in UHR individuals and healthy controls. Consistent with the existing literature, UHR youth were impaired in FER and SF when compared with controls. In the UHR group, we found increased connectivity between the SN and the medial prefrontal cortex, an area of the DMN relative to controls. In UHR youth, the DMN exhibited both positive and negative correlations with the somatosensory cortex/cerebellum and precuneus, respectively, which was linked with better FER performance. For SF, results showed that sensory processing links with the SN might be important in allowing for better SF for both groups, but especially in controls where sensory processing is likely to be unimpaired. These findings clarify how social processing deficits may manifest in psychosis, and underscore the importance of SN and DMN connectivity for social processing more generally.",TRUE,acronym
R104,Bioinformatics,R168577,dcGOR: An R Package for Analysing Ontologies and Protein Domain Annotations,S668542,R168581,uses,R166972,CRAN,"I introduce an open-source R package ‘dcGOR’ to provide the bioinformatics community with the ease to analyse ontologies and protein domain annotations, particularly those in the dcGO database. The dcGO is a comprehensive resource for protein domain annotations using a panel of ontologies including Gene Ontology. Although increasing in popularity, this database needs statistical and graphical support to meet its full potential. Moreover, there are no bioinformatics tools specifically designed for domain ontology analysis. As an add-on package built in the R software environment, dcGOR offers a basic infrastructure with great flexibility and functionality. It implements new data structure to represent domains, ontologies, annotations, and all analytical outputs as well. For each ontology, it provides various mining facilities, including: (i) domain-based enrichment analysis and visualisation; (ii) construction of a domain (semantic similarity) network according to ontology annotations; and (iii) significance analysis for estimating a contact (statistical significance) network. To reduce runtime, most analyses support high-performance parallel computing. Taking as inputs a list of protein domains of interest, the package is able to easily carry out in-depth analyses in terms of functional, phenotypic and diseased relevance, and network-level understanding. More importantly, dcGOR is designed to allow users to import and analyse their own ontologies and annotations on domains (taken from SCOP, Pfam and InterPro) and RNAs (from Rfam) as well. The package is freely available at CRAN for easy installation, and also at GitHub for version control. The dedicated website with reproducible demos can be found at http://supfam.org/dcGOR.",TRUE,acronym
R104,Bioinformatics,R138687,Diagnosis of attention deficit hyperactivity disorder using deep belief network based on greedy approach,S551121,R138689,Used models,R127044,DBN,"Attention deficit hyperactivity disorder creates conditions for the child as s/he cannot sit calmly and still, control his/her behavior and focus his/her attention on a particular issue. Five out of every hundred children are affected by the disease. Boys are three times more than girls at risk for this complication. The disorder often begins before age seven, and parents may not realize their child's problem until they get older. Children with hyperactivity and attention deficit are at high risk of conduct disorder, antisocial personality, and drug abuse. Most children suffering from the disease will develop a feeling of depression, anxiety and lack of self-confidence. Given the importance of diagnosing the disease, Deep Belief Networks (DBNs) were used as a deep learning model to predict the disease. In this system, in addition to fMRI image features, sophisticated features such as age and IQ as well as functional characteristics, etc. were used. The proposed method was evaluated by two standard data sets of ADHD-200 Global Competitions, including NeuroImage and NYU data sets, and compared with state-of-the-art algorithms. The results showed the superiority of the proposed method over other systems. The prediction accuracy improved by +12.04 and +27.81 on the NeuroImage and NYU datasets, respectively, compared to the best proposed method in the ADHD-200 Global competition.",TRUE,acronym
R104,Bioinformatics,R168503,DOGS: Reaction-Driven de novo Design of Bioactive Compounds,S668264,R168505,creates,R166922,DOGS,"We present a computational method for the reaction-based de novo design of drug-like molecules. The software DOGS (Design of Genuine Structures) features a ligand-based strategy for automated ‘in silico’ assembly of potentially novel bioactive compounds. The quality of the designed compounds is assessed by a graph kernel method measuring their similarity to known bioactive reference ligands in terms of structural and pharmacophoric features. We implemented a deterministic compound construction procedure that explicitly considers compound synthesizability, based on a compilation of 25'144 readily available synthetic building blocks and 58 established reaction principles. This enables the software to suggest a synthesis route for each designed compound. Two prospective case studies are presented together with details on the algorithm and its implementation. De novo designed ligand candidates for the human histamine H4 receptor and γ-secretase were synthesized as suggested by the software. The computational approach proved to be suitable for scaffold-hopping from known ligands to novel chemotypes, and for generating bioactive molecules with drug-like properties.",TRUE,acronym
R104,Bioinformatics,R168591,ENCORE: Software for Quantitative Ensemble Comparison,S668584,R168592,creates,R166979,ENCORE,"There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering broader application and further development. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, each of which can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. 
ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can work with large trajectory files.",TRUE,acronym
R104,Bioinformatics,R168591,ENCORE: Software for Quantitative Ensemble Comparison,S668588,R168594,deposits,R166981,ENCORE,"There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering broader application and further development. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, each of which can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. 
ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can work with large trajectory files.",TRUE,acronym
R104,Bioinformatics,R169347,"Integrated Assessment of Behavioral and Environmental Risk Factors for Lyme Disease Infection on Block Island, Rhode Island",S671988,R169348,uses,R167473,ENVI,"Peridomestic exposure to Borrelia burgdorferi-infected Ixodes scapularis nymphs is considered the dominant means of infection with black-legged tick-borne pathogens in the eastern United States. Population level studies have detected a positive association between the density of infected nymphs and Lyme disease incidence. At a finer spatial scale within endemic communities, studies have focused on individual level risk behaviors, without accounting for differences in peridomestic nymphal density. This study simultaneously assessed the influence of peridomestic tick exposure risk and human behavior risk factors for Lyme disease infection on Block Island, Rhode Island. Tick exposure risk on Block Island properties was estimated using remotely sensed landscape metrics that strongly correlated with tick density at the individual property level. Behavioral risk factors and Lyme disease serology were assessed using a longitudinal serosurvey study. Significant factors associated with Lyme disease positive serology included one or more self-reported previous Lyme disease episodes, wearing protective clothing during outdoor activities, the average number of hours spent daily in tick habitat, the subject’s age and the density of shrub edges on the subject’s property. The best fit multivariate model included previous Lyme diagnoses and age. The strength of this association with previous Lyme disease suggests that the same sector of the population tends to be repeatedly infected. The second best multivariate model included a combination of environmental and behavioral factors, namely hours spent in vegetation, subject’s age, shrub edge density (increase risk) and wearing protective clothing (decrease risk). 
Our findings highlight the importance of concurrent evaluation of both environmental and behavioral factors to design interventions to reduce the risk of tick-borne infections.",TRUE,acronym
R104,Bioinformatics,R168527,GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations,S668349,R168529,creates,R166936,GEMINI,"Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics.",TRUE,acronym
R104,Bioinformatics,R168527,GEMINI: Integrative Exploration of Genetic Variation and Genome Annotations,S668351,R168530,uses,R166937,GEMINI,"Modern DNA sequencing technologies enable geneticists to rapidly identify genetic variation among many human genomes. However, isolating the minority of variants underlying disease remains an important, yet formidable challenge for medical genetics. We have developed GEMINI (GEnome MINIng), a flexible software package for exploring all forms of human genetic variation. Unlike existing tools, GEMINI integrates genetic variation with a diverse and adaptable set of genome annotations (e.g., dbSNP, ENCODE, UCSC, ClinVar, KEGG) into a unified database to facilitate interpretation and data exploration. Whereas other methods provide an inflexible set of variant filters or prioritization methods, GEMINI allows researchers to compose complex queries based on sample genotypes, inheritance patterns, and both pre-installed and custom genome annotations. GEMINI also provides methods for ad hoc queries and data exploration, a simple programming interface for custom analyses that leverage the underlying database, and both command line and graphical tools for common analyses. We demonstrate GEMINI's utility for exploring variation in personal genomes and family based genetic studies, and illustrate its ability to scale to studies involving thousands of human samples. GEMINI is designed for reproducibility and flexibility and our goal is to provide researchers with a standard framework for medical genomics.",TRUE,acronym
R104,Bioinformatics,R168532,GINI: From ISH Images to Gene Interaction Networks,S668363,R168533,creates,R166939,GINI,"Accurate inference of molecular and functional interactions among genes, especially in multicellular organisms such as Drosophila, often requires statistical analysis of correlations not only between the magnitudes of gene expressions, but also between their temporal-spatial patterns. The ISH (in-situ-hybridization)-based gene expression micro-imaging technology offers an effective approach to perform large-scale spatial-temporal profiling of whole-body mRNA abundance. However, analytical tools for discovering gene interactions from such data remain an open challenge due to various reasons, including difficulties in extracting canonical representations of gene activities from images, and in inference of statistically meaningful networks from such representations. In this paper, we present GINI, a machine learning system for inferring gene interaction networks from Drosophila embryonic ISH images. GINI builds on a computer-vision-inspired vector-space representation of the spatial pattern of gene expression in ISH images, enabled by our recently developed system; and a new multi-instance-kernel algorithm that learns a sparse Markov network model, in which, every gene (i.e., node) in the network is represented by a vector-valued spatial pattern rather than a scalar-valued gene intensity as in conventional approaches such as a Gaussian graphical model. By capturing the notion of spatial similarity of gene expression, and at the same time properly taking into account the presence of multiple images per gene via multi-instance kernels, GINI is well-positioned to infer statistically sound, and biologically meaningful gene interaction networks from image data. Using both synthetic data and a small manually curated data set, we demonstrate the effectiveness of our approach in network building. Furthermore, we report results on a large publicly available collection of Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project, where GINI makes novel and interesting predictions of gene interactions. Software for GINI is available at http://sailing.cs.cmu.edu/Drosophila_ISH_images/",TRUE,acronym
R104,Bioinformatics,R168532,GINI: From ISH Images to Gene Interaction Networks,S668365,R168534,deposits,R166940,GINI,"Accurate inference of molecular and functional interactions among genes, especially in multicellular organisms such as Drosophila, often requires statistical analysis of correlations not only between the magnitudes of gene expressions, but also between their temporal-spatial patterns. The ISH (in-situ-hybridization)-based gene expression micro-imaging technology offers an effective approach to perform large-scale spatial-temporal profiling of whole-body mRNA abundance. However, analytical tools for discovering gene interactions from such data remain an open challenge due to various reasons, including difficulties in extracting canonical representations of gene activities from images, and in inference of statistically meaningful networks from such representations. In this paper, we present GINI, a machine learning system for inferring gene interaction networks from Drosophila embryonic ISH images. GINI builds on a computer-vision-inspired vector-space representation of the spatial pattern of gene expression in ISH images, enabled by our recently developed system; and a new multi-instance-kernel algorithm that learns a sparse Markov network model, in which, every gene (i.e., node) in the network is represented by a vector-valued spatial pattern rather than a scalar-valued gene intensity as in conventional approaches such as a Gaussian graphical model. By capturing the notion of spatial similarity of gene expression, and at the same time properly taking into account the presence of multiple images per gene via multi-instance kernels, GINI is well-positioned to infer statistically sound, and biologically meaningful gene interaction networks from image data. Using both synthetic data and a small manually curated data set, we demonstrate the effectiveness of our approach in network building. Furthermore, we report results on a large publicly available collection of Drosophila embryonic ISH images from the Berkeley Drosophila Genome Project, where GINI makes novel and interesting predictions of gene interactions. Software for GINI is available at http://sailing.cs.cmu.edu/Drosophila_ISH_images/",TRUE,acronym
R104,Bioinformatics,R168568,IDEPI: Rapid Prediction of HIV-1 Antibody Epitopes and Other Phenotypic Features from Sequence Data Using a Flexible Machine Learning Platform,S668506,R168569,creates,R166963,IDEPI,"Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals — a cure and a vaccine – remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.",TRUE,acronym
R104,Bioinformatics,R168568,IDEPI: Rapid Prediction of HIV-1 Antibody Epitopes and Other Phenotypic Features from Sequence Data Using a Flexible Machine Learning Platform,S668514,R168573,deposits,R166966,IDEPI,"Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals — a cure and a vaccine – remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.",TRUE,acronym
R104,Bioinformatics,R150537,LINNAEUS: A species name identification system for biomedical literature,S603591,R150539,model,R150541,LINNAEUS,"Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.",TRUE,acronym
R104,Bioinformatics,R138859,Multi task sequence learning for depression scale prediction from video,S551753,R138861,Used models,R68911,LSTM,"Depression is a typical mood disorder, which affects people in mental and even physical problems. People who suffer depression always behave abnormal in visual behavior and the voice. In this paper, an audio visual based multimodal depression scale prediction system is proposed. Firstly, features are extracted from video and audio are fused in feature level to represent the audio visual behavior. Secondly, long short memory recurrent neural network (LSTM-RNN) is utilized to encode the dynamic temporal information of the abnormal audio visual behavior. Thirdly, emotion information is utilized by multi-task learning to boost the performance further. The proposed approach is evaluated on the Audio-Visual Emotion Challenge (AVEC2014) dataset. Experiments results show the dimensional emotion recognition helps to depression scale prediction.",TRUE,acronym
R104,Bioinformatics,R138865,Detection of mood disorder using speech emotion profiles and LSTM,S551773,R138867,Used models,R114443,LSTM,"In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed as unipolar depression (UD) on initial presentation. It is crucial to establish an accurate distinction between BD and UD to make a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, in this study, we experimented on eliciting subjects' emotions by watching six eliciting emotional video clips. After watching each video clips, their speech responses were collected when they were interviewing with a clinician. In mood disorder detection, speech emotions play an import role to detect manic or depressive symptoms. Therefore, speech emotion profiles (EP) are obtained by using the support vector machine (SVM) which are built via speech features adapted from selected databases using a denoising autoencoder-based method. Finally, a Long Short-Term Memory (LSTM) recurrent neural network is employed to characterize the temporal information of the EPs with respect to six emotional videos. Comparative experiments clearly show the promising advantage and efficacy of the LSTM-based approach for mood disorder detection.",TRUE,acronym
R104,Bioinformatics,R138870,DepAudioNet: An Efficient Deep Model for Audio based Depression Classification,S551795,R138872,Used models,R68911,LSTM,"This paper presents a novel and effective audio based method on depression classification. It focuses on two important issues, i.e., data representation and sample imbalance, which are not well addressed in literature. For the former one, in contrast to traditional shallow hand-crafted features, we propose a deep model, namely DepAudioNet, to encode the depression related characteristics in the vocal channel, combining Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to deliver a more comprehensive audio representation. For the latter one, we introduce a random sampling strategy in the model training phase to balance the positive and negative samples, which largely alleviates the bias caused by uneven sample distribution. Evaluations are carried out on the DAIC-WOZ dataset for the Depression Classification Sub-challenge (DCC) at the 2016 Audio-Visual Emotion Challenge (AVEC), and the experimental results achieved clearly demonstrate the effectiveness of the proposed approach.",TRUE,acronym
R104,Bioinformatics,R138876,Mood disorder identification using deep bottleneck features of elicited speech,S551815,R138878,Used models,R68911,LSTM,"In the diagnosis of mental health disorder, a large portion of the Bipolar Disorder (BD) patients is likely to be misdiagnosed as Unipolar Depression (UD) on initial presentation. As speech is the most natural way to express emotion, this work focuses on tracking emotion profile of elicited speech for short-term mood disorder identification. In this work, the Deep Scattering Spectrum (DSS) and Low Level Descriptors (LLDs) of the elicited speech signals are extracted as the speech features. The hierarchical spectral clustering (HSC) algorithm is employed to adapt the emotion database to the mood disorder database to alleviate the data bias problem. The denoising autoencoder is then used to extract the bottleneck features of DSS and LLDs for better representation. Based on the bottleneck features, a long short term memory (LSTM) is applied to generate the time-varying emotion profile sequence. Finally, given the emotion profile sequence, the HMM-based identification and verification model is used to determine mood disorder. This work collected the elicited emotional speech data from 15 BDs, 15 UDs and 15 healthy controls for system training and evaluation. Five-fold cross validation was employed for evaluation. Experimental results show that the system using the bottleneck feature achieved an identification accuracy of 73.33%, improving by 8.89%, compared to that without bottleneck features. Furthermore, the system with verification mechanism, improving by 4.44%, outperformed that without verification.",TRUE,acronym
R104,Bioinformatics,R138879,Exploring microscopic fluctuation of facial expression for mood disorder classification,S551836,R138881,Used models,R68911,LSTM,"In clinical diagnosis of mood disorder, depression is one of the most common psychiatric disorders. There are two major types of mood disorders: major depressive disorder (MDD) and bipolar disorder (BPD). A large portion of BPD are misdiagnosed as MDD in the diagnostic of mood disorders. Short-term detection which could be used in early detection and intervention is thus desirable. This study investigates microscopic facial expression changes for the subjects with MDD, BPD and control group (CG), when elicited by emotional video clips. This study uses eight basic orientations of motion vector (MV) to characterize the subtle changes in microscopic facial expression. Then, wavelet decomposition is applied to extract entropy and energy of different frequency bands. Next, an autoencoder neural network is adopted to extract the bottleneck features for dimensionality reduction. Finally, the long short term memory (LSTM) is employed for modeling the long-term variation among different mood disorders types. For evaluation of the proposed method, the elicited data from 36 subjects (12 for each of MDD, BPD and CG) were considered in the K-fold (K=12) cross validation experiments, and the performance for distinguishing among MDD, BPD and CG achieved 67.7% accuracy.",TRUE,acronym
R104,Bioinformatics,R138984,Cell-Coupled Long Short-Term Memory With $L$ -Skip Fusion Mechanism for Mood Disorder Detection Through Elicited Audiovisual Features,S552251,R138988,Used models,R68911,LSTM,"In early stages, patients with bipolar disorder are often diagnosed as having unipolar depression in mood disorder diagnosis. Because the long-term monitoring is limited by the delayed detection of mood disorder, an accurate and one-time diagnosis is desirable to avoid delay in appropriate treatment due to misdiagnosis. In this paper, an elicitation-based approach is proposed for realizing a one-time diagnosis by using responses elicited from patients by having them watch six emotion-eliciting videos. After watching each video clip, the conversations, including patient facial expressions and speech responses, between the participant and the clinician conducting the interview were recorded. Next, the hierarchical spectral clustering algorithm was employed to adapt the facial expression and speech response features by using the extended Cohn–Kanade and eNTERFACE databases. A denoizing autoencoder was further applied to extract the bottleneck features of the adapted data. Then, the facial and speech bottleneck features were input into support vector machines to obtain speech emotion profiles (EPs) and the modulation spectrum (MS) of the facial action unit sequence for each elicited response. Finally, a cell-coupled long short-term memory (LSTM) network with an $L$ -skip fusion mechanism was proposed to model the temporal information of all elicited responses and to loosely fuse the EPs and the MS for conducting mood disorder detection. The experimental results revealed that the cell-coupled LSTM with the $L$ -skip fusion mechanism has promising advantages and efficacy for mood disorder detection.",TRUE,acronym
R104,Bioinformatics,R139024,X-A-BiLSTM: a Deep Learning Approach for Depression Detection in Imbalanced Data,S552407,R139026,Used models,R68911,LSTM,"An increasing number of people suffering from mental health conditions resort to online resources (specialized websites, social media, etc.) to share their feelings. Early depression detection using social media data through deep learning models can help to change life trajectories and save lives. But the accuracy of these models was not satisfying due to the real-world imbalanced data distributions. To tackle this problem, we propose a deep learning model (X-A-BiLSTM) for depression detection in imbalanced social media data. The X-A-BiLSTM model consists of two essential components: the first one is XGBoost, which is used to reduce data imbalance; and the second one is an Attention-BiLSTM neural network, which enhances classification capacity. The Reddit Self-reported Depression Diagnosis (RSDD) dataset was chosen, which included approximately 9,000 users who claimed to have been diagnosed with depression (“diagnosed” users) and approximately 107,000 matched control users. Results demonstrate that our approach significantly outperforms the previous state-of-the-art models on the RSDD dataset.",TRUE,acronym
R104,Bioinformatics,R168659,MAGERI: Computational pipeline for molecular-barcoded targeted resequencing,S668868,R168660,creates,R167020,MAGERI,"Unique molecular identifiers (UMIs) show outstanding performance in targeted high-throughput resequencing, being the most promising approach for the accurate identification of rare variants in complex DNA samples. This approach has application in multiple areas, including cancer diagnostics, thus demanding dedicated software and algorithms. Here we introduce MAGERI, a computational pipeline that efficiently handles all caveats of UMI-based analysis to obtain high-fidelity mutation profiles and call ultra-rare variants. Using an extensive set of benchmark datasets including gold-standard biological samples with known variant frequencies, cell-free DNA from tumor patient blood samples and publicly available UMI-encoded datasets we demonstrate that our method is both robust and efficient in calling rare variants. The versatility of our software is supported by accurate results obtained for both tumor DNA and viral RNA samples in datasets prepared using three different UMI-based protocols.",TRUE,acronym
R104,Bioinformatics,R168659,MAGERI: Computational pipeline for molecular-barcoded targeted resequencing,S668870,R168661,deposits,R167021,MAGERI,"Unique molecular identifiers (UMIs) show outstanding performance in targeted high-throughput resequencing, being the most promising approach for the accurate identification of rare variants in complex DNA samples. This approach has application in multiple areas, including cancer diagnostics, thus demanding dedicated software and algorithms. Here we introduce MAGERI, a computational pipeline that efficiently handles all caveats of UMI-based analysis to obtain high-fidelity mutation profiles and call ultra-rare variants. Using an extensive set of benchmark datasets including gold-standard biological samples with known variant frequencies, cell-free DNA from tumor patient blood samples and publicly available UMI-encoded datasets we demonstrate that our method is both robust and efficient in calling rare variants. The versatility of our software is supported by accurate results obtained for both tumor DNA and viral RNA samples in datasets prepared using three different UMI-based protocols.",TRUE,acronym
R104,Bioinformatics,R168691,MAGPIE: Simplifying access and execution of computational models in the life sciences,S669008,R168692,creates,R167043,MAGPIE,"Over the past decades, quantitative methods linking theory and observation became increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models has been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while the exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straight forward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both, publishing and executing computational models without restrictions on the programming language, thereby combining a maximum on flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.",TRUE,acronym
R104,Bioinformatics,R168691,MAGPIE: Simplifying access and execution of computational models in the life sciences,S669014,R168695,deposits,R167046,MAGPIE,"Over the past decades, quantitative methods linking theory and observation became increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models has been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while the exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straight forward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both, publishing and executing computational models without restrictions on the programming language, thereby combining a maximum on flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.",TRUE,acronym
R104,Bioinformatics,R168691,MAGPIE: Simplifying access and execution of computational models in the life sciences,S669010,R168693,uses,R167044,MAGPIE,"Over the past decades, quantitative methods linking theory and observation became increasingly important in many areas of life science. Subsequently, a large number of mathematical and computational models has been developed. The BioModels database alone lists more than 140,000 Systems Biology Markup Language (SBML) models. However, while the exchange within specific model classes has been supported by standardisation and database efforts, the generic application and especially the re-use of models is still limited by practical issues such as easy and straight forward model execution. MAGPIE, a Modeling and Analysis Generic Platform with Integrated Evaluation, closes this gap by providing a software platform for both, publishing and executing computational models without restrictions on the programming language, thereby combining a maximum on flexibility for programmers with easy handling for non-technical users. MAGPIE goes beyond classical SBML platforms by including all models, independent of the underlying programming language, ranging from simple script models to complex data integration and computations. We demonstrate the versatility of MAGPIE using four prototypic example cases. We also outline the potential of MAGPIE to improve transparency and reproducibility of computational models in life sciences. A demo server is available at magpie.imb.medizin.tu-dresden.de.",TRUE,acronym
R104,Bioinformatics,R168543,CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data,S668414,R168545,uses,R166947,MATLAB,"Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.",TRUE,acronym
R104,Bioinformatics,R148043,MedPost: a part-of-speech tagger for bioMedical text,S593692,R148045,Other resources,R148046,MEDLINE,"SUMMARY We present a part-of-speech tagger that achieves over 97% accuracy on MEDLINE citations. AVAILABILITY Software, documentation and a corpus of 5700 manually tagged sentences are available at ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz",TRUE,acronym
R104,Bioinformatics,R148050,Tagging gene and protein names in biomedical text,S593722,R148052,Other resources,R148046,MEDLINE,"MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.",TRUE,acronym
R104,Bioinformatics,R168629,MEDYAN: Mechanochemical Simulations of Contraction and Polarity Alignment in Actomyosin Networks,S668751,R168632,deposits,R167005,MEDYAN,"Active matter systems, and in particular the cell cytoskeleton, exhibit complex mechanochemical dynamics that are still not well understood. While prior computational models of cytoskeletal dynamics have lead to many conceptual insights, an important niche still needs to be filled with a high-resolution structural modeling framework, which includes a minimally-complete set of cytoskeletal chemistries, stochastically treats reaction and diffusion processes in three spatial dimensions, accurately and efficiently describes mechanical deformations of the filamentous network under stresses generated by molecular motors, and deeply couples mechanics and chemistry at high spatial resolution. To address this need, we propose a novel reactive coarse-grained force field, as well as a publicly available software package, named the Mechanochemical Dynamics of Active Networks (MEDYAN), for simulating active network evolution and dynamics (available at www.medyan.org). This model can be used to study the non-linear, far from equilibrium processes in active matter systems, in particular, comprised of interacting semi-flexible polymers embedded in a solution with complex reaction-diffusion processes. In this work, we applied MEDYAN to investigate a contractile actomyosin network consisting of actin filaments, alpha-actinin cross-linking proteins, and non-muscle myosin IIA mini-filaments. We found that these systems undergo a switch-like transition in simulations from a random network to ordered, bundled structures when cross-linker concentration is increased above a threshold value, inducing contraction driven by myosin II mini-filaments. Our simulations also show how myosin II mini-filaments, in tandem with cross-linkers, can produce a range of actin filament polarity distributions and alignment, which is crucially dependent on the rate of actin filament turnover and the actin filament’s resulting super-diffusive behavior in the actomyosin-cross-linker system. We discuss the biological implications of these findings for the arc formation in lamellipodium-to-lamellum architectural remodeling. Lastly, our simulations produce force-dependent accumulation of myosin II, which is thought to be responsible for their mechanosensation ability, also spontaneously generating myosin II concentration gradients in the solution phase of the simulation volume.",TRUE,acronym
R104,Bioinformatics,R169651,Transient Cerebral Ischemia Promotes Brain Mitochondrial Dysfunction and Exacerbates Cognitive Impairments in Young 5xFAD Mice,S673434,R169653,uses,R167671,NIS,"Alzheimer's disease (AD) is a heterogeneous and multifactorial neurological disorder, and the risk factors of AD still remain elusive. Recent studies have highlighted the role of vascular factors in promoting the progression of AD and have suggested that ischemic events increase the incidence of AD. However, the detailed mechanisms linking ischemic insult to the progression of AD are still largely undetermined. In this study, we have established a transient cerebral ischemia model on young 5xFAD mice and their non-transgenic (nonTg) littermates by the transient occlusion of bilateral common carotid arteries. We have found that transient cerebral ischemia significantly exacerbates brain mitochondrial dysfunction including mitochondrial respiration deficits, oxidative stress as well as suppressed levels of mitochondrial fusion proteins including optic atrophy 1 (OPA1) and mitofusin 2 (MFN2) in young 5xFAD mice resulting in aggravated spatial learning and memory. Intriguingly, transient cerebral ischemia did not induce elevation in cortical or mitochondrial Amyloid beta (Aβ)1-40 or 1–42 levels in 5xFAD mice. In addition, the glucose- and oxygen-deprivation-induced apoptotic neuronal death in Aβ-treated neurons was significantly mitigated by mitochondria-targeted antioxidant mitotempo which suppresses mitochondrial superoxide levels. Therefore, the simplest interpretation of our results is that young 5xFAD mice with pre-existing AD-like mitochondrial dysfunction are more susceptible to the effects of transient cerebral ischemia; and ischemic events may exacerbate dementia and worsen the outcome of AD patients by exacerbating mitochondrial dysfunction.",TRUE,acronym
R104,Bioinformatics,R168697,PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems,S669041,R168704,uses,R167050,OSX,"Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5–10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. 
PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.",TRUE,acronym
R104,Bioinformatics,R168633,PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies,S668766,R168635,deposits,R167007,PEPIS,"The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the ‘missing heritability,’ which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. 
These advances will help overcome the bottleneck frequently encountered in genome-wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publicly available at http://bioinfo.noble.org/PolyGenic_QTL/.",TRUE,acronym
R104,Bioinformatics,R169119,Copy Number Variation in Subjects with Major Depressive Disorder Who Attempted Suicide,S670928,R169120,uses,R167119,PLINK,"Background Suicide is one of the top ten leading causes of death in North America and represents a major public health burden, particularly for people with Major Depressive disorder (MD). Many studies have suggested that suicidal behavior runs in families; however, the genomic loci that drive this effect remain to be identified. Methodology/Principal Findings Using subjects collected as part of STAR*D, we genotyped 189 subjects with MD with a history of a suicide attempt and 1073 subjects with Major Depressive disorder who had never attempted suicide. Copy Number Variants (CNVs) were called in Birdsuite and analyzed in PLINK. We found a set of CNVs present in the suicide attempter group that were not present in the non-attempter group, including in SNTG2 and MACROD2 – two brain-expressed genes previously linked to psychopathology; however, these results failed to reach genome-wide significance. Conclusions These data suggest potential CNVs to be investigated further in relation to suicide attempts in MD using large sample sizes.",TRUE,acronym
R104,Bioinformatics,R170999,Pain Processing after Social Exclusion and Its Relation to Rejection Sensitivity in Borderline Personality Disorder,S680981,R171000,uses,R168260,PROCESS,"Objective There is a general agreement that physical pain serves as an alarm signal for the prevention of and reaction to physical harm. It has recently been hypothesized that “social pain,” as induced by social rejection or abandonment, may rely on comparable, phylogenetically old brain structures. As plausible as this theory may sound, scientific evidence for this idea is sparse. This study therefore attempts to link both types of pain directly. We studied patients with borderline personality disorder (BPD) because BPD is characterized by opposing alterations in physical and social pain; hyposensitivity to physical pain is associated with hypersensitivity to social pain, as indicated by an enhanced rejection sensitivity. Method Twenty unmedicated female BPD patients and 20 healthy participants (HC, matched for age and education) played a virtual ball-tossing game (cyberball), with the conditions for exclusion, inclusion, and a control condition with predefined game rules. Each cyberball block was followed by a temperature stimulus (with a subjective pain intensity of 60% in half the cases). The cerebral responses were measured by functional magnetic resonance imaging. The Adult Rejection Sensitivity Questionnaire was used to assess rejection sensitivity. Results Higher temperature heat stimuli had to be applied to BPD patients relative to HCs to reach a comparable subjective experience of painfulness in both groups, which suggested a general hyposensitivity to pain in BPD patients. Social exclusion led to a subjectively reported hypersensitivity to physical pain in both groups that was accompanied by an enhanced activation in the anterior insula and the thalamus. In BPD, physical pain processing after exclusion was additionally linked to enhanced posterior insula activation. 
After inclusion, BPD patients showed reduced amygdala activation during pain in comparison with HC. In BPD patients, higher rejection sensitivity was associated with lower activation differences during pain processing following social exclusion and inclusion in the insula and in the amygdala. Discussion Despite the similar behavioral effects in both groups, BPD patients differed from HC in their neural processing of physical pain depending on the preceding social situation. Rejection sensitivity further modulated the impact of social exclusion on neural pain processing in BPD, but not in healthy controls.",TRUE,acronym
R104,Bioinformatics,R171681,The neuroelectric dynamics of the emotional anticipation of other people’s pain,S685301,R171682,uses,R168438,PROCESS,"When we observe a dynamic emotional facial expression, we usually automatically anticipate how that expression will develop. Our objective was to study a neurocognitive biomarker of this anticipatory process for facial pain expressions, operationalized as a mismatch effect. For this purpose, we studied the behavioral and neuroelectric (Event-Related Potential, ERP) correlates, of a match or mismatch, between the intensity of an expression of pain anticipated by the participant, and the intensity of a static test expression of pain displayed with the use of a representational momentum paradigm. Here, the paradigm consisted in displaying a dynamic facial pain expression which suddenly disappeared, and participants had to memorize the final intensity of the dynamic expression. We compared ERPs in response to congruent (intensity the same as the one memorized) and incongruent (intensity different from the one memorized) static expression intensities displayed after the dynamic expression. This paradigm allowed us to determine the amplitude and direction of this intensity anticipation by measuring the observer’s memory bias. Results behaviorally showed that the anticipation was backward (negative memory bias) for high intensity expressions of pain (participants expected a return to a neutral state) and more forward (memory bias less negative, or even positive) for less intense expressions (participants expected increased intensity). Detecting mismatch (incongruent intensity) led to faster responses than detecting match (congruent intensity). The neuroelectric correlates of this mismatch effect in response to the testing of expression intensity ranged from P100 to LPP (Late Positive Potential). 
Path analysis and source localization suggested that the medial frontal gyrus was instrumental in mediating the mismatch effect through top-down influence on both the occipital and temporal regions. Moreover, having the facility to detect incongruent expressions, by anticipating emotional state, could be useful for prosocial behavior and the detection of trustworthiness.",TRUE,acronym
R104,Bioinformatics,R168487,PSICIC: Noise and Asymmetry in Bacterial Division Revealed by Computational Image Analysis at Sub-Pixel Resolution,S668209,R168489,creates,R166914,PSICIC,"Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction. These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.",TRUE,acronym
R104,Bioinformatics,R168487,PSICIC: Noise and Asymmetry in Bacterial Division Revealed by Computational Image Analysis at Sub-Pixel Resolution,S668211,R168490,deposits,R166915,PSICIC,"Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction. These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.",TRUE,acronym
R104,Bioinformatics,R168487,PSICIC: Noise and Asymmetry in Bacterial Division Revealed by Computational Image Analysis at Sub-Pixel Resolution,S668213,R168491,uses,R166914,PSICIC,"Live-cell imaging by light microscopy has demonstrated that all cells are spatially and temporally organized. Quantitative, computational image analysis is an important part of cellular imaging, providing both enriched information about individual cell properties and the ability to analyze large datasets. However, such studies are often limited by the small size and variable shape of objects of interest. Here, we address two outstanding problems in bacterial cell division by developing a generally applicable, standardized, and modular software suite termed Projected System of Internal Coordinates from Interpolated Contours (PSICIC) that solves common problems in image quantitation. PSICIC implements interpolated-contour analysis for accurate and precise determination of cell borders and automatically generates internal coordinate systems that are superimposable regardless of cell geometry. We have used PSICIC to establish that the cell-fate determinant, SpoIIE, is asymmetrically localized during Bacillus subtilis sporulation, thereby demonstrating the ability of PSICIC to discern protein localization features at sub-pixel scales. We also used PSICIC to examine the accuracy of cell division in Escherichia coli and found a new role for the Min system in regulating division-site placement throughout the cell length, but only prior to the initiation of cell constriction. These results extend our understanding of the regulation of both asymmetry and accuracy in bacterial division while demonstrating the general applicability of PSICIC as a computational approach for quantitative, high-throughput analysis of cellular images.",TRUE,acronym
R104,Bioinformatics,R170112,Estimating genetic kin relationships in prehistoric populations,S675626,R170116,deposits,R167954,READ,"Archaeogenomic research has proven to be a valuable tool to trace migrations of historic and prehistoric individuals and groups, whereas relationships within a group or burial site have not been investigated to a large extent. Knowing the genetic kinship of historic and prehistoric individuals would give important insights into social structures of ancient and historic cultures. Most archaeogenetic research concerning kinship has been restricted to uniparental markers, while studies using genome-wide information were mainly focused on comparisons between populations. Applications which infer the degree of relationship based on modern-day DNA information typically require diploid genotype data. Low concentration of endogenous DNA, fragmentation and other post-mortem damage to ancient DNA (aDNA) makes the application of such tools unfeasible for most archaeological samples. To infer family relationships for degraded samples, we developed the software READ (Relationship Estimation from Ancient DNA). We show that our heuristic approach can successfully infer up to second degree relationships with as little as 0.1x shotgun coverage per genome for pairs of individuals. We uncover previously unknown relationships among prehistoric individuals by applying READ to published aDNA data from several human remains excavated from different cultural contexts. In particular, we find a group of five closely related males from the same Corded Ware culture site in modern-day Germany, suggesting patrilocality, which highlights the possibility to uncover social structures of ancient populations by applying READ to genome-wide aDNA data. READ is publicly available from https://bitbucket.org/tguenther/read.",TRUE,acronym
R104,Bioinformatics,R170251,Examining individual and geographic factors associated with social isolation and loneliness using Canadian Longitudinal Study on Aging (CLSA) data,S676363,R170252,uses,R167595,SAS,"Background A large body of research shows that social isolation and loneliness have detrimental health consequences. Identifying individuals at risk of social isolation or loneliness is, therefore, important. The objective of this study was to examine personal (e.g., sex, income) and geographic (rural/urban and sociodemographic) factors and their association with social isolation and loneliness in a national sample of Canadians aged 45 to 85 years. Methods The study involved cross-sectional analyses of baseline data from the Canadian Longitudinal Study on Aging that were linked to 2016 census data at the Forward Sortation Area (FSA) level. Multilevel logistic regression analyses were conducted to examine the association between personal factors and geographic factors and social isolation and loneliness for the total sample, and women and men, respectively. Results The prevalence of social isolation and loneliness was 5.1% and 10.2%, respectively, but varied substantially across personal characteristics. Personal characteristics (age, sex, education, income, functional impairment, chronic diseases) were significantly related to both social isolation and loneliness, although some differences emerged in the direction of the relationships for the two measures. Associations also differed somewhat for women versus men. Associations between some geographic factors emerged for social isolation, but not loneliness. Living in an urban core was related to increased odds of social isolation, an effect that was no longer significant when FSA-level factors were controlled for. FSAs with a higher percentage of 65+ year old residents with low income were consistently associated with higher odds of social isolation. 
Conclusion The findings indicate that socially isolated individuals are, to some extent, clustered into areas with a high proportion of low-income older adults, suggesting that support and resources could be targeted at these areas. For loneliness, the focus may be less on where people live, but rather on personal characteristics that place individuals at risk.",TRUE,acronym
R104,Bioinformatics,R171126,Infectious Disease and Grouping Patterns in Mule Deer,S681856,R171127,uses,R167669,SAS,"Infectious disease dynamics are determined, to a great extent, by the social structure of the host. We evaluated sociality, or the tendency to form groups, in Rocky Mountain mule deer (Odocoileus hemionus hemionus) from a chronic wasting disease (CWD) endemic area in Saskatchewan, Canada, to better understand factors that may affect disease transmission. Using group size data collected on 365 radio-collared mule deer (2008–2013), we built a generalized linear mixed model (GLMM) to evaluate whether factors such as CWD status, season, habitat and time of day, predicted group occurrence. Then, we built another GLMM to determine factors associated with group size. Finally, we used 3 measures of group size (typical, mean and median group sizes) to quantify levels of sociality. We found that mule deer showing clinical signs of CWD were less likely to be reported in groups than clinically healthy deer after accounting for time of day, habitat, and month of observation. Mule deer groups were much more likely to occur in February and March than in July. Mixed-sex groups in early gestation were larger than any other group type in any season. Groups were largest and most likely to occur at dawn and dusk, and in open habitats, such as cropland. We discuss the implication of these results with respect to sociobiology and CWD transmission dynamics.",TRUE,acronym
R104,Bioinformatics,R168653,SCOTTI: Efficient Reconstruction of Transmission within Outbreaks with the Structured Coalescent,S668831,R168654,creates,R167017,SCOTTI,"Exploiting pathogen genomes to reconstruct transmission represents a powerful tool in the fight against infectious disease. However, their interpretation rests on a number of simplifying assumptions that regularly ignore important complexities of real data, in particular within-host evolution and non-sampled patients. Here we propose a new approach to transmission inference called SCOTTI (Structured COalescent Transmission Tree Inference). This method is based on a statistical framework that models each host as a distinct population, and transmissions between hosts as migration events. Our computationally efficient implementation of this model enables the inference of host-to-host transmission while accommodating within-host evolution and non-sampled hosts. SCOTTI is distributed as an open source package for the phylogenetic software BEAST2. We show that SCOTTI can generally infer transmission events even in the presence of considerable within-host variation, can account for the uncertainty associated with the possible presence of non-sampled hosts, and can efficiently use data from multiple samples of the same host, although there is some reduction in accuracy when samples are collected very close to the infection time. We illustrate the features of our approach by investigating transmission from genetic and epidemiological data in a Foot and Mouth Disease Virus (FMDV) veterinary outbreak in England and a Klebsiella pneumoniae outbreak in a Nepali neonatal unit. Transmission histories inferred with SCOTTI will be important in devising effective measures to prevent and halt transmission.",TRUE,acronym
R104,Bioinformatics,R170139,Comparisons between different elements of reported burden and common mental disorder in caregivers of ethnically diverse people with dementia in Trinidad,S675783,R170140,uses,R167596,SPSS,"Objective Culture plays a significant role in determining family responsibilities and possibly influences the caregiver burden associated with providing care for a relative with dementia. This study was carried out to determine the elements of caregiver burden in Trinidadians regarding which interventions will provide the most benefit. Methods Seventy-five caregivers of patients diagnosed with dementia participated in this investigation. Demographic data were recorded for each caregiver and patient. Caregiver burden was assessed using the Zarit Burden Interview (ZBI), and the General Health Questionnaire (GHQ) was used as a measure of psychiatric morbidity. Statistical analyses were performed using Stata and SPSS software. Associations between individual ZBI items and GHQ-28 scores in caregivers were analyzed in logistic regression models; the above-median GHQ-28 scores were used a binary dependent variable, and individual ZBI item scores were entered as 5-point ordinal independent variables. Results The caregiver sample was composed of 61 females and 14 males. Caregiver burden was significantly associated with the participant being male; there was heterogeneity by ethnic group, and a higher burden on female caregivers was detected at borderline levels of significance. Upon examining the associations between different ZBI items and the above-median GHQ-28 scores in caregivers, the strongest associations were found with domains reflecting the caregiver’s health having suffered, the caregiver not having sufficient time for him/herself, the caregiver’s social life suffering, and the caregiver admitting to feeling stressed due to caregiving and meeting other responsibilities. 
Conclusions In this sample, with a majority of female caregivers, the factors of the person with dementia being male and belonging to a minority ethnic group were associated with a greater degree of caregiver burden. The information obtained through the association of individual ZBI items and above-median GHQ-28 scores is a helpful guide for profiling Trinidadian caregiver burden.",TRUE,acronym
R104,Bioinformatics,R170241,"Malaria knowledge and its associated factors among pregnant women attending antenatal clinic of Adis Zemen Hospital, North-western Ethiopia, 2018",S676302,R170243,uses,R167541,SPSS,"Introduction In Ethiopia, the burden of malaria during pregnancy remains a public health problem. Having good malaria knowledge leads to practicing the prevention of malaria and seeking health care. Research regarding pregnant women’s knowledge on malaria in Ethiopia is limited. So the aim of this study was to assess malaria knowledge and its associated factors among pregnant women, 2018. Methods An institutional-based cross-sectional study was conducted in Adis Zemen Hospital. Data were collected using a pre-tested, interviewer-administered structured questionnaire among 236 mothers. Women’s knowledge on malaria was measured using six malaria-related questions (cause of malaria, mode of transmission, signs and symptoms, complication and prevention of malaria). The collected data were entered using Epidata version 3.1 and exported to SPSS version 20 for analysis. Bivariate and multivariate logistic regressions were computed to identify predictor variables at 95% confidence interval. Variables having P value of <0.05 were considered as predictor variables of malaria knowledge. Result A total of 235 pregnant women participated, which makes the response rate 99.6%. One hundred seventy-two pregnant women (73.2%) had good knowledge on malaria. Women who were from urban areas (AOR; 2.4: CI; 1.8, 5.7), had better family monthly income (AOR; 3.4: CI; 2.7, 3.8), or had attended education (AOR; 1.8: CI; 1.4, 3.5) were more knowledgeable. Conclusion and recommendation The majority of participants had good knowledge on malaria. Educational status, household monthly income and residence were predictors of malaria knowledge. Increasing women’s knowledge, especially for those who are from rural areas, have no education, and have low monthly income, is still needed.",TRUE,acronym
R104,Bioinformatics,R170522,Quality of Life of Medical Students in China: A Study Using the WHOQOL-BREF,S678049,R170523,uses,R167352,SPSS,"Objective The aim of this study was to assess the quality of life (QOL) of medical students during their medical education and explore the influencing factors of the QOL of students. Methods A cross-sectional study was conducted in June 2011. The study population was composed of 1686 medical students in years 1 to 5 at China Medical University. The Chinese version of WHOQOL-BREF instrument was used to assess the QOL of medical students. The reliability and validity of the questionnaire were assessed by Cronbach’s α coefficient and factor analysis respectively. The relationships between QOL and the factors including gender, academic year level, and specialty were examined using t-test or one-way ANOVA followed by Student-Newman–Keuls test. Statistic analysis was performed by SPSS 13.0. Results The overall Cronbach’s α coefficient of the WHOQOL-BREF questionnaire was 0.731. The confirmatory factor analysis provided an acceptable fit to a four-factor model in the medical student sample. The scores of different academic years were significantly different in the psychological health and social relations domains (p<0.05). Third year students had the lowest scores in psychological health and social relations domains. The scores of different specialties had significant differences in psychological health and social relations domains (p<0.05). Students from clinical medicine had the highest scores. Gender, interest in the area of study, confidence in career development, hometown location, and physical exercise were significantly associated with the quality of life of students in some domains (p<0.05). Conclusions The WHOQOL-BREF was reliable and valid in the assessment of the QOL of Chinese medical students. In order to cope with the influencing factors of the QOL, medical schools should carry out curriculum innovation and give the necessary support for medical students, especially for 3rd year students.",TRUE,acronym
R104,Bioinformatics,R170766,"Experiences of Social Harm and Changes in Sexual Practices among Volunteers Who Had Completed a Phase I/II HIV Vaccine Trial Employing HIV-1 DNA Priming and HIV-1 MVA Boosting in Dar es Salaam, Tanzania",S679522,R170767,uses,R167563,SPSS,"Background Volunteers in phase I/II HIV vaccine trials are assumed to be at low risk of acquiring HIV infection and are expected to have normal lives in the community. However, during participation in the trials, volunteers may encounter social harm and changes in their sexual behaviours. The current study aimed to study persistence of social harm and changes in sexual practices over time among phase I/II HIV vaccine immunogenicity (HIVIS03) trial volunteers in Dar es Salaam, Tanzania. Methods and Results A descriptive prospective cohort study was conducted among 33 out of 60 volunteers of HIVIS03 trial in Dar es Salaam, Tanzania, who had received three HIV-1 DNA injections boosted with two HIV-1 MVA doses. A structured interview was administered to collect data. Analysis was carried out using SPSS and McNemars’ chi-square (χ2) was used to test the association within-subjects. Participants reported experiencing negative comments from their colleagues about the trial; but such comments were less severe during the second follow up visits (χ2 = 8.72; P<0.001). Most of the comments were associated with discrimination (χ2 = 26.72; P<0.001), stigma (χ2 = 6.06; P<0.05), and mistrust towards the HIV vaccine trial (χ2 = 4.9; P<0.05). Having a regular sexual partner other than spouse or cohabitant declined over the two follow-up periods (χ2 = 4.45; P<0.05). Conclusion Participants in the phase I/II HIV vaccine trial were likely to face negative comments from relatives and colleagues after the end of the trial, but those comments decreased over time. In this study, the inherent sexual practice of having extra sexual partners other than spouse declined over time. Therefore, prolonged counselling and support appears important to minimize risky sexual behaviour among volunteers after participation in HIV Vaccine trials.",TRUE,acronym
R104,Bioinformatics,R171156,"Relationship between Resilience, Psychological Distress and Physical Activity in Cancer Patients: A Cross-Sectional Observation Study",S682066,R171157,uses,R167926,SPSS,"Objective Psychological distress remains a major challenge in cancer care. The complexity of psychological symptoms in cancer patients requires multifaceted symptom management tailored to individual patient characteristics and active patient involvement. We assessed the relationship between resilience, psychological distress and physical activity in cancer patients to elucidate potential moderators of the identified relationships. Method A cross-sectional observational study to assess the prevalence of symptoms and supportive care needs of oncology patients undergoing chemotherapy, radiotherapy or chemo-radiation therapy in a tertiary oncology service. Resilience was assessed using the 10-item Connor-Davidson Resilience Scale (CD-RISC 10), social support was evaluated using the 12-item Multidimensional Scale of Perceived Social Support (MSPSS) and both psychological distress and activity level were measured using corresponding subscales of the Rotterdam Symptom Checklist (RSCL). Socio-demographic and medical data were extracted from patient medical records. Correlation analyses were performed and structural equation modeling was employed to assess the associations between resilience, psychological distress and activity level as well as selected socio-demographic variables. Results Data from 343 patients were included in the analysis. Our revised model demonstrated an acceptable fit to the data (χ2(163) = 313.76, p = .000, comparative fit index (CFI) = .942, Tucker-Lewis index (TLI) = .923, root mean square error of approximation (RMSEA) = .053, 90% CI [.044, .062]). Resilience was negatively associated with psychological distress (β = -.59), and positively associated with activity level (β = .20). The relationship between resilience and psychological distress was moderated by age (β = -0.33) but not social support (β = .10, p = .12). Conclusion Cancer patients with higher resilience, particularly older patients, experience lower psychological distress. Patients with higher resilience are physically more active. Evaluating levels of resilience in cancer patients then tailoring targeted interventions to facilitate resilience may help improve the effectiveness of psychological symptom management interventions.",TRUE,acronym
R104,Bioinformatics,R171217,"Prevalence and Predictors of Depression among Pregnant Women in Debretabor Town, Northwest Ethiopia",S682449,R171218,uses,R167686,SPSS,"Background Depression during pregnancy is a major health problem because it is prevalent and chronic, and its impact on birth outcome and child health is serious. Several psychosocial and obstetric factors have been identified as predictors. Evidence on the prevalence and predictors of antenatal depression is very limited in Ethiopia. This study aims to determine prevalence and associated factors with antenatal depression. Methods Community based cross-sectional study was conducted among 527 pregnant women recruited in a cluster sampling method. Data were collected by face-to-face interviews on socio-demographic, obstetric, and psychosocial characteristics. Depression symptoms were assessed using the Edinburgh Postnatal Depression Scale (EPDS). The List of Threatening Experiences questionnaire (LTE-Q) and the Oslo Social Support Scale (OSS-3) were used to assess stressful events and social support, respectively. Data were entered into Epi-info and analyzed using SPSS-20. Descriptive and logistic regression analyses were carried out. Results The prevalence of antenatal depression was found to be 11.8%. Having debt (OR = 2.79, 95% CI = 1.33, 5.85), unplanned pregnancy (OR = 2.39, 95% CI = (1.20, 4.76), history of stillbirth (OR = 3.97, 95% CI = (1.67,9.41), history of abortion (OR = 2.57, 95% CI = 1.005, 6.61), being in the third trimester of pregnancy (OR = 1.70, 95% CI = 1.07,2.72), presence of a complication in the current pregnancy (OR = 3.29, 95% CI = 1.66,6.53), and previous history of depression (OR = 3.48, 95% CI = 1.71,7.06) were factors significantly associated with antenatal depression. Conclusion The prevalence of antenatal depression was high, especially in the third trimester. Poverty, unmet reproductive health needs, and obstetric complications are the main determinants of antenatal depression. For early detection and appropriate intervention, screening for depression during the routine antenatal care should be promoted.",TRUE,acronym
R104,Bioinformatics,R171318,"Women’s autonomy and men's involvement in child care and feeding as predictors of infant and young child anthropometric indices in coffee farming households of Jimma Zone, South West of Ethiopia",S683082,R171319,uses,R167519,SPSS,"Background Most of child mortality and under nutrition in developing world were attributed to suboptimal childcare and feeding, which needs detailed investigation beyond the proximal factors. This study was conducted with the aim of assessing associations of women’s autonomy and men’s involvement with child anthropometric indices in cash crop livelihood areas of South West Ethiopia. Methods Multi-stage stratified sampling was used to select 749 farming households living in three coffee producing sub-districts of Jimma zone, Ethiopia. Domains of women’s Autonomy were measured by a tool adapted from demographic health survey. A model for determination of paternal involvement in childcare was employed. Caring practices were assessed through the WHO Infant and young child feeding practice core indicators. Length and weight measurements were taken in duplicate using standard techniques. Data were analyzed using SPSS for windows version 21. A multivariable linear regression was used to predict weight for height Z-scores and length for age Z-scores after adjusting for various factors. Results The mean (sd) scores of weight for age (WAZ), height for age (HAZ), weight for height (WHZ) and BMI for age (BAZ) was -0.52(1.26), -0.73(1.43), -0.13(1.34) and -0.1(1.39) respectively. The results of multi variable linear regression analyses showed that WHZ scores of children of mothers who had autonomy of conducting big purchase were higher by 0.42 compared to children's whose mothers had not. In addition, a child whose father was involved in childcare and feeding had higher HAZ score by 0.1. Regarding age, as for every month increase in age of child, a 0.04 point decrease in HAZ score and a 0.01 point decrease in WHZ were noted. Similarly, a child living in food insecure households had lower HAZ score by 0.29 compared to child of food secured households. As family size increased by a person a WHZ score of a child is decreased by 0.08. WHZ and HAZ scores of male child was found lower by 0.25 and 0.38 respectively compared to a female child of same age. Conclusion Women’s autonomy and men’s involvement appeared in tandem with better child anthropometric outcomes. Nutrition interventions in such setting should integrate enhancing women’s autonomy over resource and men’s involvement in childcare and feeding, in addition to food security measures.",TRUE,acronym
R104,Bioinformatics,R171666,Personality and social support as determinants of entrepreneurial intention. Gender differences in Italy,S685201,R171667,uses,R168411,SPSS,"The interest in the promotion of entrepreneurship is significantly increasing, particularly in those countries, such as Italy, that suffered during the recent great economic recession and subsequently needed to revitalize their economy. Entrepreneurial intention (EI) is a crucial stage in the entrepreneurial process and represents the basis for consequential entrepreneurial actions. Several research projects have sought to understand the antecedents of EI. This study, using a situational approach, has investigated the personal and contextual determinants of EI, exploring gender differences. In particular, the mediational role of general self-efficacy between internal locus of control (LoC), self-regulation, and support from family and friends, on the one hand, and EI, on the other hand, has been investigated. The study involved a sample of 658 Italian participants, of which 319 were male and 339 were female. Data were collected with a self-report on-line questionnaire and analysed with SPSS 23 and Mplus 7 to test a multi-group structural equation model. The results showed that self-efficacy totally mediated the relationship between internal LoC, self-regulation and EI. Moreover, it partially mediated the relationship between support from family and friends and EI. All the relations were significant for both men and women; however, our findings highlighted a stronger relationship between self-efficacy and EI for men, and between support from family and friends and both self-efficacy and EI for women. Findings highlighted the role of contextual characteristics in addition to personal ones in influencing EI and confirmed the key mediational function of self-efficacy. As for gender, results suggested that differences between men and women in relation to the entrepreneur role still exist. Practical implications for trainers and educators are discussed.",TRUE,acronym
R104,Bioinformatics,R171378,"Knowledge, attitude and perceived stigma towards tuberculosis among pastoralists; Do they differ from sedentary communities? A comparative cross-sectional study",S683461,R171379,uses,R168040,STATA,"Background Ethiopia is ninth among the world high tuberculosis (TB) burden countries, pastoralists being the most affected population. However, there is no published report whether the behavior related to TB are different between pastoralist and the sedentary communities. Therefore, the main aim of this study is to assess the pastoralist community knowledge, attitude and perceived stigma towards tuberculosis and their health care seeking behavior in comparison to the neighboring sedentary communities and this may help to plan TB control interventions specifically for the pastoralist communities. Method A community-based cross-sectional survey was carried out from September 2014 to January 2015, among 337 individuals from pastoralist and 247 from the sedentary community of Kereyu district. Data were collected using structured questionnaires. Three focus group discussions were used to collect qualitative data, one with men and the other with women in the pastoralist and one with men in the sedentary groups. Data were analyzed using Statistical Software for Social Science, SPSS V 22 and STATA. Results A Lower proportion of pastoralists mentioned bacilli (bacteria) as the cause of PTB compared to the sedentary group (63.9% vs. 81.0%, p<0.01), respectively. However, witchcraft was reported as the causes of TB by a higher proportion of pastoralists than the sedentary group (53.6% vs. 23.5%, p<0.01), respectively. Similarly, a lower proportion of pastoralists indicated PTB is preventable compared to the sedentary group (95.8% vs. 99.6%, p<0.01), respectively. Moreover, majority of the pastoralists mentioned that most people would reject a TB patient in their community compared to the sedentary group (39.9% vs. 8.9%, p<0.001), respectively, and the pastoralists expressed that they would be ashamed/embarrassed if they had TB (68% vs. 36.4%, p<0.001), respectively. Conclusion The finding indicates that there is a lower awareness about TB, a negative attitude towards TB patients and a higher perceived stigma among pastoralists compared to their neighbor sedentary population. Strategic health communications pertinent to the pastoralists way of life should be planned and implemented to improve the awareness gap about tuberculosis.",TRUE,acronym
R104,Bioinformatics,R169038,Complete Mitochondrial DNA Analysis of Eastern Eurasian Haplogroups Rarely Found in Populations of Northern Asia and Eastern Europe,S670578,R169041,uses,R167261,STATISTICA,"With the aim of uncovering all of the most basal variation in the northern Asian mitochondrial DNA (mtDNA) haplogroups, we have analyzed mtDNA control region and coding region sequence variation in 98 Altaian Kazakhs from southern Siberia and 149 Barghuts from Inner Mongolia, China. Both populations exhibit the prevalence of eastern Eurasian lineages accounting for 91.9% in Barghuts and 60.2% in Altaian Kazakhs. The strong affinity of Altaian Kazakhs and populations of northern and central Asia has been revealed, reflecting both influences of central Asian inhabitants and essential genetic interaction with the Altai region indigenous populations. Statistical analyses data demonstrate a close positioning of all Mongolic-speaking populations (Mongolians, Buryats, Khamnigans, Kalmyks as well as Barghuts studied here) and Turkic-speaking Sojots, thus suggesting their origin from a common maternal ancestral gene pool. In order to achieve a thorough coverage of DNA lineages revealed in the northern Asian matrilineal gene pool, we have completely sequenced the mtDNA of 55 samples representing haplogroups R11b, B4, B5, F2, M9, M10, M11, M13, N9a and R9c1, which were pinpointed from a massive collection (over 5000 individuals) of northern and eastern Asian, as well as European control region mtDNA sequences. Applying the newly updated mtDNA tree to the previously reported northern Asian and eastern Asian mtDNA data sets has resolved the status of the poorly classified mtDNA types and allowed us to obtain the coalescence age estimates of the nodes of interest using different calibrated rates. Our findings confirm our previous conclusion that northern Asian maternal gene pool consists of predominantly post-LGM components of eastern Asian ancestry, though some genetic lineages may have a pre-LGM/LGM origin.",TRUE,acronym
R104,Bioinformatics,R169126,"Combined Mitochondrial and Nuclear Markers Revealed a Deep Vicariant History for Leopoldamys neilli, a Cave-Dwelling Rodent of Thailand",S670999,R169145,uses,R167338,STRUCTURE,"Background Historical biogeography and evolutionary processes of cave taxa have been widely studied in temperate regions. However, Southeast Asian cave ecosystems remain largely unexplored despite their high scientific interest. Here we studied the phylogeography of Leopoldamys neilli, a cave-dwelling murine rodent living in limestone karsts of Thailand, and compared the molecular signature of mitochondrial and nuclear markers. Methodology/Principal Findings We used a large sampling (n = 225) from 28 localities in Thailand and a combination of mitochondrial and nuclear markers with various evolutionary rates (two intronic regions and 12 microsatellites). The evolutionary history of L. neilli and the relative role of vicariance and dispersal were investigated using ancestral range reconstruction analysis and Approximate Bayesian computation (ABC). Both mitochondrial and nuclear markers support a large-scale population structure of four main groups (west, centre, north and northeast) and a strong finer structure within each of these groups. A deep genealogical divergence among geographically close lineages is observed and denotes a high population fragmentation. Our findings suggest that the current phylogeographic pattern of this species results from the fragmentation of a widespread ancestral population and that vicariance has played a significant role in the evolutionary history of L. neilli. These deep vicariant events that occurred during Plio-Pleistocene are related to the formation of the Central Plain of Thailand. Consequently, the western, central, northern and northeastern groups of populations were historically isolated and should be considered as four distinct Evolutionarily Significant Units (ESUs). Conclusions/Significance Our study confirms the benefit of using several independent genetic markers to obtain a comprehensive and reliable picture of L. neilli evolutionary history at different levels of resolution. The complex genetic structure of Leopoldamys neilli is supported by congruent mitochondrial and nuclear markers and has been influenced by the geological history of Thailand during Plio-Pleistocene.",TRUE,acronym
R104,Bioinformatics,R168717,LAILAPS-QSM: A RESTful API and JAVA library for semantic query suggestions,S669096,R168718,creates,R167059,LAILAPS-QSM,"In order to access and filter content of life-science databases, full text search is a widely applied query interface. But its high flexibility and intuitiveness is paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries by their query profile like research field, geographic location, co-authorship, affiliation etc. require user’s registration and its public accessibility that contradict privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query. The context is referred from the text records that are stored in the databases that are going to be queried or extracted for a general purpose query suggestion from PubMed abstracts and UniProt data. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluation of the query suggestion quality was done for plant science use cases. Locally present experts enable a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function which has been performed using ontology term similarities. LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for human expert made query suggestions is 0.90. The software is either available as tool set to build and train dedicated query suggestion services or as already trained general purpose RESTful web service. The service uses open interfaces to be seamlessly embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable response for web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in Bitbucket GIT repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion",TRUE,acronym
R104,Bioinformatics,R168717,LAILAPS-QSM: A RESTful API and JAVA library for semantic query suggestions,S669098,R168719,deposits,R167060,LAILAPS-QSM,"In order to access and filter content of life-science databases, full text search is a widely applied query interface. But its high flexibility and intuitiveness is paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries by their query profile like research field, geographic location, co-authorship, affiliation etc. require user’s registration and its public accessibility that contradict privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstructs possible linguistic contexts of a given keyword query. The context is referred from the text records that are stored in the databases that are going to be queried or extracted for a general purpose query suggestion from PubMed abstracts and UniProt data. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluation of the query suggestion quality was done for plant science use cases. Locally present experts enable a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function which has been performed using ontology term similarities. LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for human expert made query suggestions is 0.90. The software is either available as tool set to build and train dedicated query suggestion services or as already trained general purpose RESTful web service. The service uses open interfaces to be seamlessly embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable response for web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in Bitbucket GIT repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion",TRUE,acronym
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S537789,R135550,dataset,L378941,C-NMC-2019,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,acronym
R104,Bioinformatics,R168625,CNVkit: Genome-Wide Copy Number Detection and Visualization from Targeted DNA Sequencing,S668732,R168626,creates,R167001,CNVkit,"Germline copy number variants (CNVs) and somatic copy number alterations (SCNAs) are of significant importance in syndromic conditions and cancer. Massively parallel sequencing is increasingly used to infer copy number information from variations in the read depth in sequencing data. However, this approach has limitations in the case of targeted re-sequencing, which leaves gaps in coverage between the regions chosen for enrichment and introduces biases related to the efficiency of target capture and library preparation. We present a method for copy number detection, implemented in the software package CNVkit, that uses both the targeted reads and the nonspecifically captured off-target reads to infer copy number evenly across the genome. This combination achieves both exon-level resolution in targeted regions and sufficient resolution in the larger intronic and intergenic regions to identify copy number changes. In particular, we successfully inferred copy number at equivalent to 100-kilobase resolution genome-wide from a platform targeting as few as 293 genes. After normalizing read counts to a pooled reference, we evaluated and corrected for three sources of bias that explain most of the extraneous variability in the sequencing read depth: GC content, target footprint size and spacing, and repetitive sequences. We compared the performance of CNVkit to copy number changes identified by array comparative genomic hybridization. We packaged the components of CNVkit so that it is straightforward to use and provides visualizations, detailed reporting of significant features, and export options for integration into existing analysis pipelines. CNVkit is freely available from https://github.com/etal/cnvkit.",TRUE,acronym
R104,Bioinformatics,R168625,CNVkit: Genome-Wide Copy Number Detection and Visualization from Targeted DNA Sequencing,S668736,R168628,deposits,R167003,CNVkit,"Germline copy number variants (CNVs) and somatic copy number alterations (SCNAs) are of significant importance in syndromic conditions and cancer. Massively parallel sequencing is increasingly used to infer copy number information from variations in the read depth in sequencing data. However, this approach has limitations in the case of targeted re-sequencing, which leaves gaps in coverage between the regions chosen for enrichment and introduces biases related to the efficiency of target capture and library preparation. We present a method for copy number detection, implemented in the software package CNVkit, that uses both the targeted reads and the nonspecifically captured off-target reads to infer copy number evenly across the genome. This combination achieves both exon-level resolution in targeted regions and sufficient resolution in the larger intronic and intergenic regions to identify copy number changes. In particular, we successfully inferred copy number at equivalent to 100-kilobase resolution genome-wide from a platform targeting as few as 293 genes. After normalizing read counts to a pooled reference, we evaluated and corrected for three sources of bias that explain most of the extraneous variability in the sequencing read depth: GC content, target footprint size and spacing, and repetitive sequences. We compared the performance of CNVkit to copy number changes identified by array comparative genomic hybridization. We packaged the components of CNVkit so that it is straightforward to use and provides visualizations, detailed reporting of significant features, and export options for integration into existing analysis pipelines. CNVkit is freely available from https://github.com/etal/cnvkit.",TRUE,acronym
R104,Bioinformatics,R168729,COBRAme: A computational framework for genome-scale models of metabolism and gene expression,S669160,R168730,creates,R167066,COBRAme,"Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.",TRUE,acronym
R104,Bioinformatics,R168729,COBRAme: A computational framework for genome-scale models of metabolism and gene expression,S669168,R168734,deposits,R167068,COBRAme,"Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.",TRUE,acronym
R104,Bioinformatics,R168729,COBRAme: A computational framework for genome-scale models of metabolism and gene expression,S669162,R168731,uses,R167066,COBRAme,"Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.",TRUE,acronym
R104,Bioinformatics,R168729,COBRAme: A computational framework for genome-scale models of metabolism and gene expression,S669166,R168733,uses,R167067,COBRApy,"Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.",TRUE,acronym
R104,Bioinformatics,R138661,Clinical data Neuroimage data,S550997,R138663,Data,R77132,fMRI,"Effective discrimination of attention deficit hyperactivity disorder (ADHD) using imaging and functional biomarkers would have fundamental influence on public health. In usual, the discrimination is based on the standards of American Psychiatric Association. In this paper, we modified one of the deep learning method on structure and parameters according to the properties of ADHD data, to discriminate ADHD on the unique public dataset of ADHD-200. We predicted the subjects as control, combined, inattentive or hyperactive through their frequency features. The results achieved improvement greatly compared to the performance released by the competition. Besides, the imbalance in datasets of deep learning model influenced the results of classification. As far as we know, it is the first time that the deep learning method has been used for the discrimination of ADHD with fMRI data.",TRUE,acronym
R104,Bioinformatics,R138687,Diagnosis of attention deficit hyperactivity disorder using deep belief network based on greedy approach,S551122,R138689,Data,R77132,fMRI,"Attention deficit hyperactivity disorder creates conditions for the child as s/he cannot sit calm and still, control his/her behavior and focus his/her attention on a particular issue. Five out of every hundred children are affected by the disease. Boys are three times more than girls at risk for this complication. The disorder often begins before age seven, and parents may not realize their children problem until they get older. Children with hyperactivity and attention deficit are at high risk of conduct disorder, antisocial personality, and drug abuse. Most children suffering from the disease will develop a feeling of depression, anxiety and lack of self-confidence. Given the importance of diagnosis the disease, Deep Belief Networks (DBNs) were used as a deep learning model to predict the disease. In this system, in addition to FMRI images features, sophisticated features such as age and IQ as well as functional characteristics, etc. were used. The proposed method was evaluated by two standard data sets of ADHD-200 Global Competitions, including NeuroImage and NYU data sets, and compared with state-of-the-art algorithms. The results showed the superiority of the proposed method rather than other systems. The prediction accuracy has improved respectively as +12.04 and +27.81 over NeuroImage and NYU datasets compared to the best proposed method in the ADHD-200 Global competition.",TRUE,acronym
R104,Bioinformatics,R138690,Deep learning based automatic diagnoses of attention deficit hyperactive disorder,S551140,R138692,Data,R77132,fMRI,"In this paper, we aim to develop a deep learning based automatic Attention Deficit Hyperactive Disorder (ADHD) diagnosis algorithm using resting state functional magnetic resonance imaging (rs-fMRI) scans. However, relative to millions of parameters in deep neural networks (DNN), the number of fMRI samples is still limited to learn discriminative features from the raw data. In light of this, we first encode our prior knowledge on 3D features voxel-wisely, including Regional Homogeneity (ReHo), fractional Amplitude of Low Frequency Fluctuations (fALFF) and Voxel-Mirrored Homotopic Connectivity (VMHC), and take these 3D images as the input to the DNN. Inspired by the way that radiologists examine brain images, we further investigate a novel 3D convolutional neural network (CNN) architecture to learn 3D local patterns which may boost the diagnosis accuracy. Investigation on the hold-out testing data of the ADHD-200 Global competition demonstrates that the proposed 3D CNN approach yields superior performances when compared to the reported classifiers in the literature, even with less training samples.",TRUE,acronym
R104,Bioinformatics,R138698,Application of Autoencoder in Depression Diagnosis,S551180,R138700,Data,R77132,fMRI,"Major depressive disorder (MDD) is a mental disorder characterized by at least two weeks of low mood which is present across most situations. Diagnosis of MDD using rest-state functional magnetic resonance imaging (fMRI) data faces many challenges due to the high dimensionality, small samples, noisy and individual variability. No method can automatically extract discriminative features from the origin time series in fMRI images for MDD diagnosis. In this study, we proposed a new method for feature extraction and a workflow which can make an automatic feature extraction and classification without a prior knowledge. An autoencoder was used to learn pre-training parameters of a dimensionality reduction process using 3-D convolution network. Through comparison with the other three feature extraction methods, our method achieved the best classification performance. This method can be used not only in MDD diagnosis, but also other similar disorders.",TRUE,acronym
R104,Bioinformatics,R138702,3D CNN Based Automatic Diagnosis of Attention Deficit Hyperactivity Disorder Using Functional and Structural MRI,S551226,R138705,Data,R77132,fMRI,"Attention deficit hyperactivity disorder (ADHD) is one of the most common mental-health disorders. As a neurodevelopment disorder, neuroimaging technologies, such as magnetic resonance imaging (MRI), coupled with machine learning algorithms, are being increasingly explored as biomarkers in ADHD. Among various machine learning methods, deep learning has demonstrated excellent performance on many imaging tasks. With the availability of publically-available, large neuroimaging data sets for training purposes, deep learning-based automatic diagnosis of psychiatric disorders can become feasible. In this paper, we develop a deep learning-based ADHD classification method via 3-D convolutional neural networks (CNNs) applied to MRI scans. Since deep neural networks may utilize millions of parameters, even the large number of MRI samples in pooled data sets is still relatively limited if one is to learn discriminative features from the raw data. Instead, here we propose to first extract meaningful 3-D low-level features from functional MRI (fMRI) and structural MRI (sMRI) data. Furthermore, inspired by radiologists’ typical approach for examining brain images, we design a 3-D CNN model to investigate the local spatial patterns of MRI features. Finally, we discover that brain functional and structural information are complementary, and design a multi-modality CNN architecture to combine fMRI and sMRI features. Evaluations on the hold-out testing data of the ADHD-200 global competition shows that the proposed multi-modality 3-D CNN approach achieves the state-of-the-art accuracy of 69.15% and outperforms reported classifiers in the literature, even with fewer training samples. We suggest that multi-modality classification will be a promising direction to find potential neuroimaging biomarkers of neurodevelopment disorders.",TRUE,acronym
R104,Bioinformatics,R138710,A general prediction model for the detection of ADHD and Autism using structural and functional MRI,S551261,R138713,Data,R77132,fMRI,"This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject’s fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features are input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known, over all hold-out accuracies on these datasets when only using imaging data—exceeding previously-published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.",TRUE,acronym
R104,Bioinformatics,R138719,Deep Neural Generative Model of Functional MRI Images for Psychiatric Disorder Diagnosis,S551304,R138724,Data,R77132,fMRI,"Accurate diagnosis of psychiatric disorders plays a critical role in improving the quality of life for patients and potentially supports the development of new treatments. Many studies have been conducted on machine learning techniques that seek brain imaging data for specific biomarkers of disorders. These studies have encountered the following dilemma: A direct classification overfits to a small number of high-dimensional samples but unsupervised feature-extraction has the risk of extracting a signal of no interest. In addition, such studies often provided only diagnoses for patients without presenting the reasons for these diagnoses. This study proposed a deep neural generative model of resting-state functional magnetic resonance imaging (fMRI) data. The proposed model is conditioned by the assumption of the subject's state and estimates the posterior probability of the subject's state given the imaging data, using Bayes’ rule. This study applied the proposed model to diagnose schizophrenia and bipolar disorders. Diagnostic accuracy was improved by a large margin over competitive approaches, namely classifications of functional connectivity, discriminative/generative models of regionwise signals, and those with unsupervised feature-extractors. The proposed model visualizes brain regions largely related to the disorders, thus motivating further biological investigation.",TRUE,acronym
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536119,R135550,Used models,L378120,Xception,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,acronym
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345263,R75376,Has evaluation,R75391,Y14_SARS2,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,acronym
R104,Bioinformatics,R168713,4Cin: A computational pipeline for 3D genome modeling and virtual Hi-C analyses from 4C data,S669080,R168714,creates,R167056,4Cin,"The use of 3C-based methods has revealed the importance of the 3D organization of the chromatin for key aspects of genome biology. However, the different caveats of the variants of 3C techniques have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topological Associating Domains (TAD) boundaries. Thus, 4Cin offers a much cheaper, accessible and versatile alternative to other available techniques while providing a comprehensive 3D topological profiling. By studying TAD modifications in genomic structural variants associated to disease phenotypes and performing cross-species evolutionary comparisons of 3D chromatin structures in a quantitative manner, we demonstrate the broad potential and novel range of applications of our method.",TRUE,acronym
R104,Bioinformatics,R168713,4Cin: A computational pipeline for 3D genome modeling and virtual Hi-C analyses from 4C data,S669084,R168716,deposits,R167058,4Cin,"The use of 3C-based methods has revealed the importance of the 3D organization of the chromatin for key aspects of genome biology. However, the different caveats of the variants of 3C techniques have limited their scope and the range of scientific fields that could benefit from these approaches. To address these limitations, we present 4Cin, a method to generate 3D models and derive virtual Hi-C (vHi-C) heat maps of genomic loci based on 4C-seq or any kind of 4C-seq-like data, such as those derived from NG Capture-C. 3D genome organization is determined by integrative consideration of the spatial distances derived from as few as four 4C-seq experiments. The 3D models obtained from 4C-seq data, together with their associated vHi-C maps, allow the inference of all chromosomal contacts within a given genomic region, facilitating the identification of Topological Associating Domains (TAD) boundaries. Thus, 4Cin offers a much cheaper, accessible and versatile alternative to other available techniques while providing a comprehensive 3D topological profiling. By studying TAD modifications in genomic structural variants associated to disease phenotypes and performing cross-species evolutionary comparisons of 3D chromatin structures in a quantitative manner, we demonstrate the broad potential and novel range of applications of our method.",TRUE,acronym
R104,Bioinformatics,R168591,ENCORE: Software for Quantitative Ensemble Comparison,S668586,R168593,uses,R166980,MDAnalysis,"There is increasing evidence that protein dynamics and conformational changes can play an important role in modulating biological function. As a result, experimental and computational methods are being developed, often synergistically, to study the dynamical heterogeneity of a protein or other macromolecules in solution. Thus, methods such as molecular dynamics simulations or ensemble refinement approaches have provided conformational ensembles that can be used to understand protein function and biophysics. These developments have in turn created a need for algorithms and software that can be used to compare structural ensembles in the same way as the root-mean-square-deviation is often used to compare static structures. Although a few such approaches have been proposed, these can be difficult to implement efficiently, hindering a broader applications and further developments. Here, we present an easily accessible software toolkit, called ENCORE, which can be used to compare conformational ensembles generated either from simulations alone or synergistically with experiments. ENCORE implements three previously described methods for ensemble comparison, that each can be used to quantify the similarity between conformational ensembles by estimating the overlap between the probability distributions that underlie them. We demonstrate the kinds of insights that can be obtained by providing examples of three typical use-cases: comparing ensembles generated with different molecular force fields, assessing convergence in molecular simulations, and calculating differences and similarities in structural ensembles refined with various sources of experimental data. We also demonstrate efficient computational scaling for typical analyses, and robustness against both the size and sampling of the ensembles. ENCORE is freely available and extendable, integrates with the established MDAnalysis software package, reads ensemble data in many common formats, and can work with large trajectory files.",TRUE,acronym
R104,Bioinformatics,R169567,Using Community-Based Participatory Research Principles to Develop More Understandable Recruitment and Informed Consent Documents in Genomic Research,S673033,R169568,uses,R167614,ATLAS.ti,"Background Heart Healthy Lenoir is a transdisciplinary project aimed at creating long-term, sustainable approaches to reduce cardiovascular disease risk disparities in Lenoir County, North Carolina using a design spanning genomic analysis and clinical intervention. We hypothesized that residents of Lenoir County would be unfamiliar and mistrustful of genomic research, and therefore reluctant to participate; additionally, these feelings would be higher in African-Americans. Methodology To test our hypothesis, we conducted qualitative research using community-based participatory research principles to ensure our genomic research strategies addressed the needs, priorities, and concerns of the community. African-American (n = 19) and White (n = 16) adults in Lenoir County participated in four focus groups exploring perceptions about genomics and cardiovascular disease. Demographic surveys were administered and a semi-structured interview guide was used to facilitate discussions. The discussions were digitally recorded, transcribed verbatim, and analyzed in ATLAS.ti. Results and Significance From our analysis, key themes emerged: transparent communication, privacy, participation incentives and barriers, knowledge, and the impact of knowing. African-Americans were more concerned about privacy and community impact compared to Whites, however, African-Americans were still eager to participate in our genomic research project. The results from our formative study were used to improve the informed consent and recruitment processes by: 1) reducing misconceptions of genomic studies; and 2) helping to foster participant understanding and trust with the researchers. Our study demonstrates how community-based participatory research principles can be used to gain deeper insight into the community and increase participation in genomic research studies. Due in part to these efforts 80.3% of eligible African-American participants and 86.9% of eligible White participants enrolled in the Heart Healthy Lenoir Genomics study making our overall enrollment 57.8% African-American. Future research will investigate return of genomic results in the Lenoir community.",TRUE,acronym
R104,Bioinformatics,R138959,Automated Depression Diagnosis Based on Deep Networks to Encode Facial Appearance and Dynamics,S552163,R138962,Outcome assessment,R138768,BDI-II,"As a severe psychiatric disorder disease, depression is a state of low mood and aversion to activity, which prevents a person from functioning normally in both work and daily lives. The study on automated mental health assessment has been given increasing attentions in recent years. In this paper, we study the problem of automatic diagnosis of depression. A new approach to predict the Beck Depression Inventory II (BDI-II) values from video data is proposed based on the deep networks. The proposed framework is designed in a two stream manner, aiming at capturing both the facial appearance and dynamics. Further, we employ joint tuning layers that can implicitly integrate the appearance and dynamic information. Experiments are conducted on two depression databases, AVEC2013 and AVEC2014. The experimental results show that our proposed approach significantly improve the depression prediction performance, compared to other visual-based approaches.",TRUE,acronym
R104,Bioinformatics,R138969,Artificial Intelligent System for Automatic Depression Level Analysis Through Visual and Vocal Expressions,S552202,R138972,Outcome assessment,R138768,BDI-II,"A human being’s cognitive system can be simulated by artificial intelligent systems. Machines and robots equipped with cognitive capability can automatically recognize a humans mental state through their gestures and facial expressions. In this paper, an artificial intelligent system is proposed to monitor depression. It can predict the scales of Beck depression inventory II (BDI-II) from vocal and visual expressions. First, different visual features are extracted from facial expression images. Deep learning method is utilized to extract key visual features from the facial expression frames. Second, spectral low-level descriptors and mel-frequency cepstral coefficients features are extracted from short audio segments to capture the vocal expressions. Third, feature dynamic history histogram (FDHH) is proposed to capture the temporal movement on the feature space. Finally, these FDHH and audio features are fused using regression techniques for the prediction of the BDI-II scales. The proposed method has been tested on the public Audio/Visual Emotion Challenges 2014 dataset as it is tuned to be more focused on the study of depression. The results outperform all the other existing methods on the same dataset.",TRUE,acronym
R104,Bioinformatics,R168521,Chaste: An Open Source C++ Library for Computational Physiology and Biology,S668331,R168523,uses,R166932,C++,"Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.",TRUE,acronym
R104,Bioinformatics,R168633,PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies,S668770,R168637,uses,R166952,C++,"The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the ‘missing heritability,’ which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. 
These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.",TRUE,acronym
R104,Bioinformatics,R168683,Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq,S668978,R168686,uses,R167040,C++,"We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experiment measure for transcript expression. Availability: Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.",TRUE,acronym
R104,Bioinformatics,R168697,PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems,S669035,R168701,uses,R166952,C++,"Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10 5 -10 6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. 
PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.",TRUE,acronym
R104,Bioinformatics,R168746,NFTsim: Theory and Simulation of Multiscale Neural Field Dynamics,S669241,R168749,uses,R166952,C++,"A user ready, portable, documented software package, NFTsim, is presented to facilitate numerical simulations of a wide range of brain systems using continuum neural field modeling. NFTsim enables users to simulate key aspects of brain activity at multiple scales. At the microscopic scale, it incorporates characteristics of local interactions between cells, neurotransmitter effects, synaptodendritic delays and feedbacks. At the mesoscopic scale, it incorporates information about medium to large scale axonal ranges of fibers, which are essential to model dissipative wave transmission and to produce synchronous oscillations and associated cross-correlation patterns as observed in local field potential recordings of active tissue. At the scale of the whole brain, NFTsim allows for the inclusion of long range pathways, such as thalamocortical projections, when generating macroscopic activity fields. The multiscale nature of the neural activity produced by NFTsim has the potential to enable the modeling of resulting quantities measurable via various neuroimaging techniques. In this work, we give a comprehensive description of the design and implementation of the software. Due to its modularity and flexibility, NFTsim enables the systematic study of an unlimited number of neural systems with multiple neural populations under a unified framework and allows for direct comparison with analytic and experimental predictions. The code is written in C++ and bundled with Matlab routines for a rapid quantitative analysis and visualization of the outputs. The output of NFTsim is stored in plain text file enabling users to select from a broad range of tools for offline analysis. This software enables a wide and convenient use of powerful physiologically-based neural field approaches to brain modeling. 
NFTsim is distributed under the Apache 2.0 license.",TRUE,acronym
R104,Bioinformatics,R168543,CGBayesNets: Conditional Gaussian Bayesian Network Learning and Inference with Mixed Discrete and Continuous Data,S668416,R168546,deposits,R166948,CGBayesNets,"Bayesian Networks (BN) have been a popular predictive modeling formalism in bioinformatics, but their application in modern genomics has been slowed by an inability to cleanly handle domains with mixed discrete and continuous variables. Existing free BN software packages either discretize continuous variables, which can lead to information loss, or do not include inference routines, which makes prediction with the BN impossible. We present CGBayesNets, a BN package focused around prediction of a clinical phenotype from mixed discrete and continuous variables, which fills these gaps. CGBayesNets implements Bayesian likelihood and inference algorithms for the conditional Gaussian Bayesian network (CGBNs) formalism, one appropriate for predicting an outcome of interest from, e.g., multimodal genomic data. We provide four different network learning algorithms, each making a different tradeoff between computational cost and network likelihood. CGBayesNets provides a full suite of functions for model exploration and verification, including cross validation, bootstrapping, and AUC manipulation. We highlight several results obtained previously with CGBayesNets, including predictive models of wood properties from tree genomics, leukemia subtype classification from mixed genomic data, and robust prediction of intensive care unit mortality outcomes from metabolomic profiles. We also provide detailed example analysis on public metabolomic and gene expression datasets. CGBayesNets is implemented in MATLAB and available as MATLAB source code, under an Open Source license and anonymous download at http://www.cgbayesnets.com.",TRUE,acronym
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536121,R135550,Used models,L378122,DenseNet-121,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,acronym
R104,Bioinformatics,R168564,eMatchSite: Sequence Order-Independent Structure Alignments of Ligand Binding Pockets in Protein Models,S668485,R168565,creates,R166961,eMatchSite,"Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques, however, to achieve a high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only various quality structure models are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4–9% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. 
Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.",TRUE,acronym
R104,Bioinformatics,R168564,eMatchSite: Sequence Order-Independent Structure Alignments of Ligand Binding Pockets in Protein Models,S668487,R168566,deposits,R166962,eMatchSite,"Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques, however, to achieve a high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only various quality structure models are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4–9% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. 
Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.",TRUE,acronym
R104,Bioinformatics,R168564,eMatchSite: Sequence Order-Independent Structure Alignments of Ligand Binding Pockets in Protein Models,S668489,R168567,uses,R166961,eMatchSite,"Detecting similarities between ligand binding sites in the absence of global homology between target proteins has been recognized as one of the critical components of modern drug discovery. Local binding site alignments can be constructed using sequence order-independent techniques, however, to achieve a high accuracy, many current algorithms for binding site comparison require high-quality experimental protein structures, preferably in the bound conformational state. This, in turn, complicates proteome scale applications, where only various quality structure models are available for the majority of gene products. To improve the state-of-the-art, we developed eMatchSite, a new method for constructing sequence order-independent alignments of ligand binding sites in protein models. Large-scale benchmarking calculations using adenine-binding pockets in crystal structures demonstrate that eMatchSite generates accurate alignments for almost three times more protein pairs than SOIPPA. More importantly, eMatchSite offers a high tolerance to structural distortions in ligand binding regions in protein models. For example, the percentage of correctly aligned pairs of adenine-binding sites in weakly homologous protein models is only 4–9% lower than those aligned using crystal structures. This represents a significant improvement over other algorithms, e.g. the performance of eMatchSite in recognizing similar binding sites is 6% and 13% higher than that of SiteEngine using high- and moderate-quality protein models, respectively. 
Constructing biologically correct alignments using predicted ligand binding sites in protein models opens up the possibility to investigate drug-protein interaction networks for complete proteomes with prospective systems-level applications in polypharmacology and rational drug repositioning. eMatchSite is freely available to the academic community as a web-server and a stand-alone software distribution at http://www.brylinski.org/ematchsite.",TRUE,acronym
R104,Bioinformatics,R168663,ESPRIT-Forest: Parallel clustering of massive amplicon sequence data in subquadratic time,S668887,R168664,creates,R167023,ESPRIT-Forest,"The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences. The algorithm achieves subquadratic time and space complexity and maintains a high clustering accuracy comparable to the standard method. The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence dataset. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/∼yijunsun/lab/ESPRIT-Forest.html.",TRUE,acronym
R104,Bioinformatics,R168663,ESPRIT-Forest: Parallel clustering of massive amplicon sequence data in subquadratic time,S668891,R168666,deposits,R167025,ESPRIT-Forest,"The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences. The algorithm achieves subquadratic time and space complexity and maintains a high clustering accuracy comparable to the standard method. The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence dataset. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/∼yijunsun/lab/ESPRIT-Forest.html.",TRUE,acronym
R104,Bioinformatics,R168667,FIMTrack: An open source tracking and locomotion analysis software for small animals,S668904,R168668,creates,R167026,FIMTrack,"Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system feasible to extract high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.",TRUE,acronym
R104,Bioinformatics,R168667,FIMTrack: An open source tracking and locomotion analysis software for small animals,S668912,R168672,deposits,R167030,FIMTrack,"Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system feasible to extract high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.",TRUE,acronym
R104,Bioinformatics,R168707,iDREM: Interactive visualization of dynamic regulatory networks,S669064,R168711,deposits,R167055,iDREM,"The Dynamic Regulatory Events Miner (DREM) software reconstructs dynamic regulatory networks by integrating static protein-DNA interaction data with time series gene expression data. In recent years, several additional types of high-throughput time series data have been profiled when studying biological processes including time series miRNA expression, proteomics, epigenomics and single cell RNA-Seq. Combining all available time series and static datasets in a unified model remains an important challenge and goal. To address this challenge we have developed a new version of DREM termed interactive DREM (iDREM). iDREM provides support for all data types mentioned above and combines them with existing interaction data to reconstruct networks that can lead to novel hypotheses on the function and timing of regulators. Users can interactively visualize and query the resulting model. We showcase the functionality of the new tool by applying it to microglia developmental data from multiple labs.",TRUE,acronym
R104,Bioinformatics,R168604,"MIiSR: Molecular Interactions in Super-Resolution Imaging Enables the Analysis of Protein Interactions, Dynamics and Formation of Multi-protein Structures",S668657,R168607,deposits,R166990,MIiSR,"Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.",TRUE,acronym
R104,Bioinformatics,R168746,NFTsim: Theory and Simulation of Multiscale Neural Field Dynamics,S669237,R168747,creates,R167075,NFTsim,"A user ready, portable, documented software package, NFTsim, is presented to facilitate numerical simulations of a wide range of brain systems using continuum neural field modeling. NFTsim enables users to simulate key aspects of brain activity at multiple scales. At the microscopic scale, it incorporates characteristics of local interactions between cells, neurotransmitter effects, synaptodendritic delays and feedbacks. At the mesoscopic scale, it incorporates information about medium to large scale axonal ranges of fibers, which are essential to model dissipative wave transmission and to produce synchronous oscillations and associated cross-correlation patterns as observed in local field potential recordings of active tissue. At the scale of the whole brain, NFTsim allows for the inclusion of long range pathways, such as thalamocortical projections, when generating macroscopic activity fields. The multiscale nature of the neural activity produced by NFTsim has the potential to enable the modeling of resulting quantities measurable via various neuroimaging techniques. In this work, we give a comprehensive description of the design and implementation of the software. Due to its modularity and flexibility, NFTsim enables the systematic study of an unlimited number of neural systems with multiple neural populations under a unified framework and allows for direct comparison with analytic and experimental predictions. The code is written in C++ and bundled with Matlab routines for a rapid quantitative analysis and visualization of the outputs. The output of NFTsim is stored in plain text file enabling users to select from a broad range of tools for offline analysis. This software enables a wide and convenient use of powerful physiologically-based neural field approaches to brain modeling. 
NFTsim is distributed under the Apache 2.0 license.",TRUE,acronym
R104,Bioinformatics,R168746,NFTsim: Theory and Simulation of Multiscale Neural Field Dynamics,S669239,R168748,deposits,R167076,NFTsim,"A user-ready, portable, documented software package, NFTsim, is presented to facilitate numerical simulations of a wide range of brain systems using continuum neural field modeling. NFTsim enables users to simulate key aspects of brain activity at multiple scales. At the microscopic scale, it incorporates characteristics of local interactions between cells, neurotransmitter effects, synaptodendritic delays and feedbacks. At the mesoscopic scale, it incorporates information about medium to large scale axonal ranges of fibers, which are essential to model dissipative wave transmission and to produce synchronous oscillations and associated cross-correlation patterns as observed in local field potential recordings of active tissue. At the scale of the whole brain, NFTsim allows for the inclusion of long range pathways, such as thalamocortical projections, when generating macroscopic activity fields. The multiscale nature of the neural activity produced by NFTsim has the potential to enable the modeling of resulting quantities measurable via various neuroimaging techniques. In this work, we give a comprehensive description of the design and implementation of the software. Due to its modularity and flexibility, NFTsim enables the systematic study of an unlimited number of neural systems with multiple neural populations under a unified framework and allows for direct comparison with analytic and experimental predictions. The code is written in C++ and bundled with Matlab routines for a rapid quantitative analysis and visualization of the outputs. The output of NFTsim is stored in plain text files, enabling users to select from a broad range of tools for offline analysis. This software enables a wide and convenient use of powerful physiologically-based neural field approaches to brain modeling. 
NFTsim is distributed under the Apache 2.0 license.",TRUE,acronym
R104,Bioinformatics,R171381,“The care is the best you can give at the time”: Health care professionals’ experiences in providing gender affirming care in South Africa,S683478,R171384,uses,R168350,NVivo,"Background While the provision of gender affirming care for transgender people in South Africa is considered legal, ethical, and medically sound, and is—theoretically—available in both the South African private and public health sectors, access remains severely limited and unequal within the country. As there are no national policies or guidelines, little is known about how individual health care professionals providing gender affirming care make clinical decisions about eligibility and treatment options. Method Based on an initial policy review and service mapping, this study employed semi-structured interviews with a snowball sample of twelve health care providers, representing most providers currently providing gender affirming care in South Africa. Data were analysed thematically using NVivo, and are reported following COREQ guidelines. Results Our findings suggest that, whilst a small minority of health care providers offer gender affirming care, this is almost exclusively on their own initiative and is usually unsupported by wider structures and institutions. The ad hoc, discretionary nature of services means that access to care is dependent on whether a transgender person is fortunate enough to access a sympathetic and knowledgeable health care provider. Conclusion Accordingly, national, state-sanctioned guidelines for gender affirming care are necessary to increase access, homogenise quality of care, and contribute to equitable provision of gender affirming care in the public and private health systems.",TRUE,acronym
R104,Bioinformatics,R168687,pSSAlib: The partial-propensity stochastic chemical network simulator,S668991,R168688,creates,R167041,pSSAlib,"Chemical reaction networks are ubiquitous in biology, and their dynamics is fundamentally stochastic. Here, we present the software library pSSAlib, which provides a complete and concise implementation of the most efficient partial-propensity methods for simulating exact stochastic chemical kinetics. pSSAlib can import models encoded in Systems Biology Markup Language, supports time delays in chemical reactions, and stochastic spatiotemporal reaction-diffusion systems. It also provides tools for statistical analysis of simulation results and supports multiple output formats. It has previously been used for studies of biochemical reaction pathways and to benchmark other stochastic simulation methods. Here, we describe pSSAlib in detail and apply it to a new model of the endocytic pathway in eukaryotic cells, leading to the discovery of a stochastic counterpart of the cut-out switch motif underlying early-to-late endosome conversion. pSSAlib is provided as a stand-alone command-line tool and as a developer API. We also provide a plug-in for the SBMLToolbox. The open-source code and pre-packaged installers are freely available from http://mosaic.mpi-cbg.de.",TRUE,acronym
R104,Bioinformatics,R168687,pSSAlib: The partial-propensity stochastic chemical network simulator,S668995,R168690,deposits,R167042,pSSAlib,"Chemical reaction networks are ubiquitous in biology, and their dynamics is fundamentally stochastic. Here, we present the software library pSSAlib, which provides a complete and concise implementation of the most efficient partial-propensity methods for simulating exact stochastic chemical kinetics. pSSAlib can import models encoded in Systems Biology Markup Language, supports time delays in chemical reactions, and stochastic spatiotemporal reaction-diffusion systems. It also provides tools for statistical analysis of simulation results and supports multiple output formats. It has previously been used for studies of biochemical reaction pathways and to benchmark other stochastic simulation methods. Here, we describe pSSAlib in detail and apply it to a new model of the endocytic pathway in eukaryotic cells, leading to the discovery of a stochastic counterpart of the cut-out switch motif underlying early-to-late endosome conversion. pSSAlib is provided as a stand-alone command-line tool and as a developer API. We also provide a plug-in for the SBMLToolbox. The open-source code and pre-packaged installers are freely available from http://mosaic.mpi-cbg.de.",TRUE,acronym
R104,Bioinformatics,R138702,3D CNN Based Automatic Diagnosis of Attention Deficit Hyperactivity Disorder Using Functional and Structural MRI,S551227,R138705,Data,R138683,sMRI,"Attention deficit hyperactivity disorder (ADHD) is one of the most common mental-health disorders. As a neurodevelopment disorder, neuroimaging technologies, such as magnetic resonance imaging (MRI), coupled with machine learning algorithms, are being increasingly explored as biomarkers in ADHD. Among various machine learning methods, deep learning has demonstrated excellent performance on many imaging tasks. With the availability of publicly available, large neuroimaging data sets for training purposes, deep learning-based automatic diagnosis of psychiatric disorders can become feasible. In this paper, we develop a deep learning-based ADHD classification method via 3-D convolutional neural networks (CNNs) applied to MRI scans. Since deep neural networks may utilize millions of parameters, even the large number of MRI samples in pooled data sets is still relatively limited if one is to learn discriminative features from the raw data. Instead, here we propose to first extract meaningful 3-D low-level features from functional MRI (fMRI) and structural MRI (sMRI) data. Furthermore, inspired by radiologists’ typical approach for examining brain images, we design a 3-D CNN model to investigate the local spatial patterns of MRI features. Finally, we discover that brain functional and structural information are complementary, and design a multi-modality CNN architecture to combine fMRI and sMRI features. Evaluations on the hold-out testing data of the ADHD-200 global competition show that the proposed multi-modality 3-D CNN approach achieves the state-of-the-art accuracy of 69.15% and outperforms reported classifiers in the literature, even with fewer training samples. 
We suggest that multi-modality classification will be a promising direction to find potential neuroimaging biomarkers of neurodevelopment disorders.",TRUE,acronym
R104,Bioinformatics,R168472,SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection,S668169,R168474,creates,R166904,SNPdetector,"Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. 
SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).",TRUE,acronym
R104,Bioinformatics,R168472,SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection,S668171,R168475,deposits,R166905,SNPdetector,"Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. 
SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).",TRUE,acronym
R104,Bioinformatics,R168472,SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection,S668177,R168478,uses,R166904,SNPdetector,"Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. 
SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).",TRUE,acronym
R104,Bioinformatics,R168549,VASP-E: Specificity Annotation with a Volumetric Analysis of Electrostatic Isopotentials,S668439,R168550,creates,R166950,VASP-E,"Algorithms for comparing protein structure are frequently used for function annotation. By searching for subtle similarities among very different proteins, these algorithms can identify remote homologs with similar biological functions. In contrast, few comparison algorithms focus on specificity annotation, where the identification of subtle differences among very similar proteins can assist in finding small structural variations that create differences in binding specificity. Few specificity annotation methods consider electrostatic fields, which play a critical role in molecular recognition. To fill this gap, this paper describes VASP-E (Volumetric Analysis of Surface Properties with Electrostatics), a novel volumetric comparison tool based on the electrostatic comparison of protein-ligand and protein-protein binding sites. VASP-E exploits the central observation that three dimensional solids can be used to fully represent and compare both electrostatic isopotentials and molecular surfaces. With this integrated representation, VASP-E is able to dissect the electrostatic environments of protein-ligand and protein-protein binding interfaces, identifying individual amino acids that have an electrostatic influence on binding specificity. VASP-E was used to examine a nonredundant subset of the serine and cysteine proteases as well as the barnase-barstar and Rap1a-raf complexes. Based on amino acids established by various experimental studies to have an electrostatic influence on binding specificity, VASP-E identified electrostatically influential amino acids with 100% precision and 83.3% recall. We also show that VASP-E can accurately classify closely related ligand binding cavities into groups with different binding preferences. 
These results suggest that VASP-E should prove a useful tool for the characterization of specific binding and the engineering of binding preferences in proteins.",TRUE,acronym
R104,Bioinformatics,R168549,VASP-E: Specificity Annotation with a Volumetric Analysis of Electrostatic Isopotentials,S668441,R168551,uses,R166951,VASP-E,"Algorithms for comparing protein structure are frequently used for function annotation. By searching for subtle similarities among very different proteins, these algorithms can identify remote homologs with similar biological functions. In contrast, few comparison algorithms focus on specificity annotation, where the identification of subtle differences among very similar proteins can assist in finding small structural variations that create differences in binding specificity. Few specificity annotation methods consider electrostatic fields, which play a critical role in molecular recognition. To fill this gap, this paper describes VASP-E (Volumetric Analysis of Surface Properties with Electrostatics), a novel volumetric comparison tool based on the electrostatic comparison of protein-ligand and protein-protein binding sites. VASP-E exploits the central observation that three dimensional solids can be used to fully represent and compare both electrostatic isopotentials and molecular surfaces. With this integrated representation, VASP-E is able to dissect the electrostatic environments of protein-ligand and protein-protein binding interfaces, identifying individual amino acids that have an electrostatic influence on binding specificity. VASP-E was used to examine a nonredundant subset of the serine and cysteine proteases as well as the barnase-barstar and Rap1a-raf complexes. Based on amino acids established by various experimental studies to have an electrostatic influence on binding specificity, VASP-E identified electrostatically influential amino acids with 100% precision and 83.3% recall. We also show that VASP-E can accurately classify closely related ligand binding cavities into groups with different binding preferences. 
These results suggest that VASP-E should prove a useful tool for the characterization of specific binding and the engineering of binding preferences in proteins.",TRUE,acronym
R104,Bioinformatics,R168595,VDJtools: Unifying Post-analysis of T Cell Receptor Repertoires,S668609,R168596,creates,R166982,VDJtools,"Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoires post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.",TRUE,acronym
R104,Bioinformatics,R168595,VDJtools: Unifying Post-analysis of T Cell Receptor Repertoires,S668611,R168597,deposits,R166983,VDJtools,"Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoires post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.",TRUE,acronym
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536120,R135550,Used models,L378121,VGG-16,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,acronym
R104,Bioinformatics,R168646,M-Track: A New Software for Automated Detection of Grooming Trajectories in Mice,S668810,R168647,creates,R167013,M-Track,"Grooming is a complex and robust innate behavior, commonly performed by most vertebrate species. In mice, grooming consists of a series of stereotyped patterned strokes, performed along the rostro-caudal axis of the body. The frequency and duration of each grooming episode is sensitive to changes in stress levels, social interactions and pharmacological manipulations, and is therefore used in behavioral studies to gain insights into the function of brain regions that control movement execution and anxiety. Traditional approaches to analyze grooming rely on manually scoring the time of onset and duration of each grooming episode, and are often performed on grooming episodes triggered by stress exposure, which may not be entirely representative of spontaneous grooming in freely-behaving mice. This type of analysis is time-consuming and provides limited information about finer aspects of grooming behaviors, which are important to understand movement stereotypy and bilateral coordination in mice. Currently available commercial and freeware video-tracking software allow automated tracking of the whole body of a mouse or of its head and tail, not of individual forepaws. Here we describe a simple experimental set-up and a novel open-source code, named M-Track, for simultaneously tracking the movement of individual forepaws during spontaneous grooming in multiple freely-behaving mice. This toolbox provides a simple platform to perform trajectory analysis of forepaw movement during distinct grooming episodes. By using M-Track we show that, in C57BL/6 wild type mice, the speed and bilateral coordination of the left and right forepaws remain unaltered during the execution of distinct grooming episodes. Stress exposure induces a profound increase in the length of the forepaw grooming trajectories. 
M-Track provides a valuable and user-friendly interface to streamline the analysis of spontaneous grooming in biomedical research studies.",TRUE,acronym
R104,Bioinformatics,R168646,M-Track: A New Software for Automated Detection of Grooming Trajectories in Mice,S668812,R168648,deposits,R167014,M-Track,"Grooming is a complex and robust innate behavior, commonly performed by most vertebrate species. In mice, grooming consists of a series of stereotyped patterned strokes, performed along the rostro-caudal axis of the body. The frequency and duration of each grooming episode is sensitive to changes in stress levels, social interactions and pharmacological manipulations, and is therefore used in behavioral studies to gain insights into the function of brain regions that control movement execution and anxiety. Traditional approaches to analyze grooming rely on manually scoring the time of onset and duration of each grooming episode, and are often performed on grooming episodes triggered by stress exposure, which may not be entirely representative of spontaneous grooming in freely-behaving mice. This type of analysis is time-consuming and provides limited information about finer aspects of grooming behaviors, which are important to understand movement stereotypy and bilateral coordination in mice. Currently available commercial and freeware video-tracking software allow automated tracking of the whole body of a mouse or of its head and tail, not of individual forepaws. Here we describe a simple experimental set-up and a novel open-source code, named M-Track, for simultaneously tracking the movement of individual forepaws during spontaneous grooming in multiple freely-behaving mice. This toolbox provides a simple platform to perform trajectory analysis of forepaw movement during distinct grooming episodes. By using M-Track we show that, in C57BL/6 wild type mice, the speed and bilateral coordination of the left and right forepaws remain unaltered during the execution of distinct grooming episodes. Stress exposure induces a profound increase in the length of the forepaw grooming trajectories. 
M-Track provides a valuable and user-friendly interface to streamline the analysis of spontaneous grooming in biomedical research studies.",TRUE,acronym
R122,Chemistry,R46231,Polymeric g-C3N4 coupled with NaNbO3 nanowires toward enhanced photocatalytic reduction of CO2 into renewable fuel,S141182,R46232,Nb-Based Material,L86827,C3N4/NaNbO3,"Visible-light-responsive g-C3N4/NaNbO3 nanowires photocatalysts were fabricated by introducing polymeric g-C3N4 on NaNbO3 nanowires. The microscopic mechanisms of interface interaction, charge transfer and separation, as well as the influence on the photocatalytic activity of g-C3N4/NaNbO3 composite were systematic investigated. The high-resolution transmission electron microscopy (HR-TEM) revealed that an intimate interface between C3N4 and NaNbO3 nanowires formed in the g-C3N4/NaNbO3 heterojunctions. The photocatalytic performance of photocatalysts was evaluated for CO2 reduction under visible-light illumination. Significantly, the activity of g-C3N4/NaNbO3 composite photocatalyst for photoreduction of CO2 was higher than that of either single-phase g-C3N4 or NaNbO3. Such a remarkable enhancement of photocatalytic activity was mainly ascribed to the improved separation and transfer of photogenerated electron–hole pairs at the intimate interface of g-C3N4/NaNbO3 heterojunctions, which originated from the...",TRUE,acronym
R122,Chemistry,R41120,Improving purity and process volume during direct electrolytic reduction of solid SiO 2 in molten CaCl 2 for the production of solar-grade silicon,S130361,R41121,electrolyte,L79196,CaCl2,"The direct electrolytic reduction of solid SiO2 is investigated in molten CaCl2 at 1123 K to produce solar-grade silicon. The target concentrations of impurities for the primary Si are calculated from the acceptable concentrations of impurities in solar-grade silicon (SOG-Si) and the segregation coefficients for the impurity elements. The concentrations of most metal impurities are significantly decreased below their target concentrations by using a quartz vessel and new types of SiO2-contacting electrodes. The electrolytic reduction rate is increased by improving an electron pathway from the lead material to the SiO2, which demonstrates that the characteristics of the electric contact are important factors affecting the reduction rate. Pellet- and basket-type electrodes are tested to improve the process volume for powdery and granular SiO2. Based on the purity of the Si product after melting, refining, and solidifying, the potential of the technology is discussed.",TRUE,acronym
R122,Chemistry,R41132,The use of silicon wafer barriers in the electrochemical reduction of solid silica to form silicon in molten salts,S130463,R41133,electrolyte,L79268,CaCl2,"Nowadays, silicon is the most critical element in solar cells and/or solar chips. Silicon having 98 to 99% Si, being metallurgical grade, requires further refinement/purification processes such as zone refining [1,2] and/or the Siemens process [3] to upgrade it for solar applications. A promising method, based on straightforward electrochemical reduction of oxides by the FFC Cambridge Process [4], was adopted to form silicon from porous SiO2 pellets in molten CaCl2 and CaCl2-NaCl salt mixture [5]. It was reported that silicon powder was contaminated by iron and nickel emanating from the stainless steel cathode, which consequently disqualified the product from solar applications. SiO2 pellets sintered at 1300 °C for 4 hours were placed in between pure silicon wafer plates to defeat the contamination problem. Encouraging results indicated a reliable alternative method of direct solar grade silicon production for expanding the solar energy field.",TRUE,acronym
R122,Chemistry,R46146,A hybrid of CdS/HCa2Nb3O10 ultrathin nanosheets for promoting photocatalytic hydrogen evolution,S140636,R46147,Niobate,L86420,HCa2Nb3O10,"A hybrid of CdS/HCa2Nb3O10 ultrathin nanosheets was synthesized successfully through a multistep approach. The structures, constitutions, morphologies and specific surface areas of the obtained CdS/HCa2Nb3O10 were characterized well by XRD, XPS, TEM/HRTEM and BET, respectively. The TEM and BET results demonstrated that the unique structural features of CdS/HCa2Nb3O10 restrained the aggregation of CdS nanoparticles as well as the restacking of nanosheets effectively. HRTEM showed that CdS nanocrystals of about 25-30 nm were firmly anchored on HCa2Nb3O10 nanosheets and a tough heterointerface between CdS and the nanosheets was formed. Efficient interfacial charge transfer from CdS to HCa2Nb3O10 nanosheets was also confirmed by EPR and photocurrent responses. The photocatalytic activity tests (λ > 400 nm) showed that the optimal hydrogen evolution activity of CdS/HCa2Nb3O10 was about 4 times that of the bare CdS, because of the efficient separation of photo-generated carriers.",TRUE,acronym
R169,Climate,R48367,Linking sea level rise and socioeconomic indicators underthe Shared Socioeconomic Pathways,S694614,R175316,has start of period,L467076,1986-2005,"In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC, with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2-1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR.",TRUE,acronym
R169,Climate,R48367,Linking sea level rise and socioeconomic indicators under the Shared Socioeconomic Pathways,S694613,R175316,has end of period,L467075,2081-2100,"In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC, with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2-1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR.",TRUE,acronym
R111778,Communication Neuroscience,R136499,Increased attention but more efficient disengagement: Neuroscientific evidence for defensive processing of threatening health information.,S540218,R136501,Has method,L380260,ERP,"OBJECTIVE Previous studies indicate that people respond defensively to threatening health information, especially when the information challenges self-relevant goals. The authors investigated whether reduced acceptance of self-relevant health risk information is already visible in early attention processes, that is, attention disengagement processes. DESIGN In a randomized, controlled trial with 29 smoking and nonsmoking students, a variant of Posner's cueing task was used in combination with the high-temporal resolution method of event-related brain potentials (ERPs). MAIN OUTCOME MEASURES Reaction times and P300 ERP. RESULTS Smokers showed lower P300 amplitudes in response to high- as opposed to low-threat invalid trials when moving their attention to a target in the opposite visual field, indicating more efficient attention disengagement processes. Furthermore, both smokers and nonsmokers showed increased P300 amplitudes in response to the presentation of high- as opposed to low-threat valid trials, indicating threat-induced attention-capturing processes. Reaction time measures did not support the ERP data, indicating that the ERP measure can be extremely informative to measure low-level attention biases in health communication. CONCLUSION The findings provide the first neuroscientific support for the hypothesis that threatening health information causes more efficient disengagement among those for whom the health threat is self-relevant.",TRUE,acronym
R111778,Communication Neuroscience,R111723,Neural Correlates of Risk Perception during Real-Life Risk Communication,S508264,R111725,Has approach,R77132,fMRI,"During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.",TRUE,acronym
R111778,Communication Neuroscience,R111723,Neural Correlates of Risk Perception during Real-Life Risk Communication,S540029,R111725,Has method,L380158,fMRI,"During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.",TRUE,acronym
R111778,Communication Neuroscience,R136437,Content Matters: Neuroimaging Investigation of Brain and Behavioral Impact of Televised Anti-Tobacco Public Service Announcements,S540026,R136439,Has method,L380155,fMRI,"Televised public service announcements are video ads that are a key component of public health campaigns against smoking. Understanding the neurophysiological correlates of anti-tobacco ads is an important step toward novel objective methods of their evaluation and design. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate the brain and behavioral effects of the interaction between content (“argument strength,” AS) and format (“message sensation value,” MSV) of anti-smoking ads in humans. Seventy-one nontreatment-seeking smokers viewed a sequence of 16 high or 16 low AS ads during an fMRI scan. Dependent variables were brain fMRI signal, the immediate recall of the ads, the immediate change in intentions to quit smoking, and the urine levels of a major nicotine metabolite cotinine at a 1 month follow-up. Whole-brain ANOVA revealed that AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex (dMPFC). Regression analysis showed that the activation in the dMPFC predicted the urine cotinine levels 1 month later. These results characterize the key brain regions engaged in the processing of persuasive communications and suggest that brain fMRI response to anti-smoking ads could predict subsequent smoking severity in nontreatment-seeking smokers. Our findings demonstrate the importance of the quality of content for objective ad outcomes and suggest that fMRI investigation may aid the prerelease evaluation of televised public health ads.",TRUE,acronym
R111778,Communication Neuroscience,R136477,Communicating with Sensation Seekers: An fMRI Study of Neural Responses to Antidrug Public Service Announcements,S540118,R136479,Has method,L380210,fMRI,"ABSTRACT This study examined the neural basis of processing high- and low-message sensation value (MSV) antidrug public service announcements (PSAs) in high (HSS) and low sensation seekers (LSS) using fMRI. HSS more strongly engaged the salience network when processing PSAs (versus LSS), suggesting that high-MSV PSAs attracted their attention. HSS and LSS participants who engaged higher level cognitive processing regions reported that the PSAs were more convincing and believable and recalled the PSAs better immediately after testing. In contrast, HSS and LSS participants who strongly engaged visual attention regions for viewing PSAs reported lower personal relevance. These findings provide neurobiological evidence that high-MSV content is salient to HSS, a primary target group for antidrug messages, and additional cognitive processing is associated with higher perceived message effectiveness.",TRUE,acronym
R277,Computational Engineering,R41026,Predicting Infections Using Computational Intelligence – A Systematic Review,S130192,R41042,Has result,R41047,SSI,"Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.",TRUE,acronym
R322,Computational Linguistics,R148039,GENETAG: a tagged corpus for gene/protein named entity recognition,S593674,R148041,Dataset name,R44365,GENETAG,"Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE® sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the ""gold standard"". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version, GENETAG-05, will be released later this year.",TRUE,acronym
R322,Computational Linguistics,R148032,MedTag: A Collection of Biomedical Annotations,S593703,R148034,Other resources,R44365,GENETAG,"We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.",TRUE,acronym
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595280,R148452,Concept types,R148461,GOMOP,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,acronym
R322,Computational Linguistics,R163217,Chemical names: terminological resources and corpora annotation,S650890,R163219,classes,R163233,IUPAC,"Chemical compounds like small signal molecules or other biological active chemical substances are an important entity class in life science publications and patents. The recognition of these named entities relies on appropriate dictionary resources as well as on training and evaluation corpora. In this work we give an overview of publicly available chemical information resources with respect to chemical terminology. The coverage, amount of synonyms, and especially the inclusion of SMILES or InChI are considered. Normalization of different chemical names to a unique structure is only possible with these structure representations. In addition, the generation and annotation of training and testing corpora is presented. We describe a small corpus for the evaluation of dictionaries containing chemical entities as well as a training and test corpus for the recognition of IUPAC and IUPAC-like names, which cannot be fully enumerated in dictionaries. Corpora can be found on http://www.scai.fraunhofer.de/chem-corpora.html",TRUE,acronym
R322,Computational Linguistics,R163869,Syntax Annotation for the GENIA Corpus,S654312,R163871,data source,R148046,MEDLINE,"Linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from ""nominal"" information such as named entity to ""verbal"" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on Penn Treebank II (PTB) scheme. Inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding linguistic phenomena particular to scientific texts.",TRUE,acronym
R322,Computational Linguistics,R163869,Syntax Annotation for the GENIA Corpus,S654314,R163871,Data formats,R163872,PTB,"Linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from ""nominal"" information such as named entity to ""verbal"" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on Penn Treebank II (PTB) scheme. Inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding linguistic phenomena particular to scientific texts.",TRUE,acronym
R322,Computational Linguistics,R163869,Syntax Annotation for the GENIA Corpus,S654313,R163871,Data formats,R38064,XML,"Linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from ""nominal"" information such as named entity to ""verbal"" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XML-based format based on Penn Treebank II (PTB) scheme. Inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding linguistic phenomena particular to scientific texts.",TRUE,acronym
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595297,R148452,Other resources,R145007,MeSH,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,acronym
R322,Computational Linguistics,R148032,MedTag: A Collection of Biomedical Annotations,S593655,R148034,Other resources,R148037,ABGene,"We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.",TRUE,acronym
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595283,R148452,Concept types,R148464,mRNAcDNA,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,acronym
R322,Computational Linguistics,R155259,Leveraging Abstract Meaning Representation for Knowledge Base Question Answering,S647996,R155261,On evaluation dataset,R157529,QALD-9,"Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.",TRUE,acronym
R134,Computer and Systems Architecture,R108292,Process Oriented Knowledge Management: A Service Based Approach,S493309,R108293,Approach name,L357471,PROMOTE,This paper introduces a new viewpoint in knowledge management by introducing KM-Services as a basic concept for Knowledge Management. This text discusses the vision of service oriented knowledge management (KM) as a realisation approach of process oriented knowledge management. In the following process oriented knowledge management as it was defined in the EU-project PROMOTE (IST-1999-11658) is presented and the KM-Service approach to realise process oriented knowledge management is explained. The last part is concerned with an implementation scenario that uses Web-technology to realise a service framework for a KM-system.,TRUE,acronym
R134,Computer and Systems Architecture,R108296,B-KIDE: a framework and a tool for business process-oriented knowledge infrastructure development,S493338,R108298,Approach name,L357494,B-KIDE,"The need for an effective management of knowledge is gaining increasing recognition in today's economy. To acknowledge this fact, new promising and powerful technologies have emerged from industrial and academic research. With these innovations maturing, organizations are increasingly willing to adapt such new knowledge management technologies to improve their knowledge-intensive businesses. However, the successful application in given business contexts is a complex, multidimensional challenge and a current research topic. Therefore, this contribution addresses this challenge and introduces a framework for the development of business process-supportive, technological knowledge infrastructures. While business processes represent the organizational setting for the application of knowledge management technologies, knowledge infrastructures represent a concept that can enable knowledge management in organizations. The B-KIDE Framework introduced in this work provides support for the development of knowledge infrastructures that comprise innovative knowledge management functionality and are visibly supportive of an organization's business processes. The developed B-KIDE Tool eases the application of the B-KIDE Framework for knowledge infrastructure developers. Three empirical studies that were conducted with industrial partners from heterogeneous industry sectors corroborate the relevance and viability of the introduced concepts. Copyright © 2005 John Wiley & Sons, Ltd.",TRUE,acronym
R230,Computer Engineering,R74469,Extension of the BiDO ontology to represent scientific production,S506149,R109104,Ontology,L365274,BIDO,"The SPAR Ontology Network is a suite of complementary ontology modules to describe the scholarly publishing domain. BiDO Standard Bibliometric Measures is part of its set of ontologies. It allows describing numerical and categorical bibliometric data such as h-index, author citation count, journal impact factor. These measures may be used to evaluate scientific production of researchers. However, they are not enough. In a previous study, we determined the lack of some terms to provide a more complete representation of scientific production. Hence, we have built an extension using the NeOn Methodology to restructure the BiDO ontology. With this extension, it is possible to represent and measure the number of documents from research, the number of citations from a paper and the number of publications in high impact journals according to its area and discipline.",TRUE,acronym
R132,Computer Sciences,R130355,"BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",S517979,R130359,has model,R116395,BART,"We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.",TRUE,acronym
R132,Computer Sciences,R131170,CURL: Contrastive Unsupervised Representations for Reinforcement Learning,S521930,R131171,has model,R123293,CURL,"We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 1.9x and 1.2x performance gains at the 100K environment and interaction steps benchmarks respectively. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features. Our code is open-sourced and available at this https URL.",TRUE,acronym
R132,Computer Sciences,R130308,Dynamic Coattention Networks For Question Answering,S517778,R130309,has model,R119599,DCN,"Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.",TRUE,acronym
R132,Computer Sciences,R36091,A Large Public Corpus of Web Tables containing Time and Context Metadata,S123504,R36092,Input format,R4825,HTML,"The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potentials of Web tables. However, comparing the performance of the different systems is difficult as up till now each system is evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-dependent data, such as alternative population numbers for a city [8].",TRUE,acronym
R132,Computer Sciences,R36097,Schema extraction for tabular data on the web,S123571,R36098,Input format,R4825,HTML,"Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter's interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables -- attribute names, values, data types, etc. -- are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the well-known WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.",TRUE,acronym
R132,Computer Sciences,R130975,An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling,S521134,R130993,has model,R114443,LSTM,"For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at this http URL .",TRUE,acronym
R132,Computer Sciences,R131248,Mean Actor Critic,S522135,R131249,has model,R119888,MAC,"We propose a new algorithm, Mean Actor-Critic (MAC), for discrete-action continuous-state reinforcement learning. MAC is a policy gradient algorithm that uses the agent's explicit representation of all action values to estimate the gradient of the policy, rather than using only the actions that were actually executed. We prove that this approach reduces variance in the policy gradient estimate relative to traditional actor-critic methods. We show empirical results on two control domains and on six Atari games, where MAC is competitive with state-of-the-art policy search algorithms.",TRUE,acronym
R132,Computer Sciences,R130434,MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension,S518743,R130435,has model,R119649,MEMEN,"Machine comprehension(MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of encoding layer, especially the embedding of syntactic information and name entity of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence, neither of them can handle the proper weight of the key words in query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network(MEMEN) for machine reading task. In the encoding layer, we employ classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. Experiments show that our model has competitive results both from the perspectives of precision and efficiency in Stanford Question Answering Dataset(SQuAD) among all published results and achieves the state-of-the-art results on TriviaQA dataset.",TRUE,acronym
R132,Computer Sciences,R134238,Model-Free Episodic Control with State Aggregation,S531442,R134239,has model,R124948,MFEC,"Episodic control provides a highly sample-efficient method for reinforcement learning while enforcing high memory and computational requirements. This work proposes a simple heuristic for reducing these requirements, and an application to Model-Free Episodic Control (MFEC) is presented. Experiments on Atari games show that this heuristic successfully reduces MFEC computational demands while producing no significant loss of performance when conservative choices of hyperparameters are used. Consequently, episodic control becomes a more feasible option when dealing with reinforcement learning tasks.",TRUE,acronym
R132,Computer Sciences,R130293,Multi-Perspective Context Matching for Machine Comprehension,S517730,R130294,has model,R119598,MPCM,"Previous machine comprehension (MC) datasets are either too small to train end-to-end deep learning models, or not difficult enough to evaluate the ability of current MC techniques. The newly released SQuAD dataset alleviates these limitations, and gives us a chance to develop more realistic MC models. Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM) model, which is an end-to-end system that directly predicts the answer beginning and ending points in a passage. Our model first adjusts each word-embedding vector in the passage by multiplying a relevancy weight computed against the question. Then, we encode the question and weighted passage by using bi-directional LSTMs. For each point in the passage, our model matches the context of this point against the encoded question from multiple perspectives and produces a matching vector. Given those matched vectors, we employ another bi-directional LSTM to aggregate all the information and predict the beginning and ending points. Experimental result on the test set of SQuAD shows that our model achieves a competitive result on the leaderboard.",TRUE,acronym
R132,Computer Sciences,R129673,Phrase-Based & Neural Unsupervised Machine Translation,S515709,R129706,has model,R117329,PBSMT,"Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semi-supervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.",TRUE,acronym
R132,Computer Sciences,R36089,Crowdsourced semantic annotation of scientific publications and tabular data in PDF,S123479,R36090,Input format,R36027,PDF,"Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.",TRUE,acronym
R132,Computer Sciences,R36089,Crowdsourced semantic annotation of scientific publications and tabular data in PDF,S123478,R36090,Output format,R5048,RDF,"Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.",TRUE,acronym
R132,Computer Sciences,R134288,Exploration by Random Network Distillation,S531594,R134289,has model,R124953,RND,"We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.",TRUE,acronym
R132,Computer Sciences,R134413,RUDDER: Return Decomposition for Delayed Rewards,S531993,R134414,has model,R124997,RUDDER,"We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes (MDPs). In MDPs the Q-values are equal to the expected immediate reward plus the expected future rewards. The latter are related to bias problems in temporal difference (TD) learning and to high variance problems in Monte Carlo (MC) learning. Both problems are even more severe when rewards are delayed. RUDDER aims at making the expected future rewards zero, which simplifies Q-value estimation to computing the mean of the immediate reward. We propose the following two new concepts to push the expected future rewards toward zero. (i) Reward redistribution that leads to return-equivalent decision processes with the same optimal policies and, when optimal, zero expected future rewards. (ii) Return decomposition via contribution analysis which transforms the reinforcement learning task into a regression task at which deep learning excels. On artificial tasks with delayed rewards, RUDDER is significantly faster than MC and exponentially faster than Monte Carlo Tree Search (MCTS), TD({\lambda}), and reward shaping approaches. At Atari games, RUDDER on top of a Proximal Policy Optimization (PPO) baseline improves the scores, which is most prominent at games with delayed rewards. Source code is available at \url{this https URL} and demonstration videos at \url{this https URL}.",TRUE,acronym
R132,Computer Sciences,R129323,SEE: Towards Semi-SupervisedEnd-to-End Scene Text Recognition,S514460,R129324,has model,R114166,SEE,"Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.",TRUE,acronym
R132,Computer Sciences,R129725,Unsupervised Statistical Machine Translation,S515824,R129739,has model,R117330,SMT,"While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018). Despite the potential of this approach for low-resource settings, existing systems are far behind their supervised counterparts, limiting their practical interest. In this paper, we propose an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems. Our method profits from the modular architecture of SMT: we first induce a phrase table from monolingual corpora through cross-lingual embedding mappings, combine it with an n-gram language model, and fine-tune hyperparameters through an unsupervised MERT variant. In addition, iterative backtranslation improves results further, yielding, for instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and English-French, respectively, an improvement of more than 7-10 BLEU points over previous unsupervised systems, and closing the gap with supervised SMT (Moses trained on Europarl) down to 2-5 BLEU points. Our implementation is available at https://github.com/artetxem/monoses.",TRUE,acronym
R132,Computer Sciences,R129380,Improving Relation Extraction by Pre-trained Language Representations,S514655,R129381,has model,R116598,TRE,"Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.",TRUE,acronym
R132,Computer Sciences,R133383,Value Prediction Network,S528862,R133384,has model,R115591,VPN,"This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation.",TRUE,acronym
R132,Computer Sciences,R129411,SciBERT: A Pretrained Language Model for Scientific Text,S514890,R129459,has model,R125989,SciBERT,"Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",TRUE,acronym
R132,Computer Sciences,R130126,XLNet: Generalized Autoregressive Pretraining for Language Understanding,S517086,R130127,has model,R119139,XLNet,"With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.",TRUE,acronym
R132,Computer Sciences,R134962,CvT: Introducing Convolutions to Vision Transformers,S533828,R134963,has model,R126159,CvT-W24,"We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both de-signs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (i.e. shift, scale, and distortion invariance) while maintaining the merits of Transformers (i.e. dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (e.g. ImageNet-22k) and fine-tuned to downstream tasks. Pretrained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely re-moved in our model, simplifying the design for higher resolution vision tasks. Code will be released at https://github.com/microsoft/CvT.",TRUE,acronym
R132,Computer Sciences,R131002,R-Transformer: Recurrent Neural Network Enhanced Transformer,S521250,R131003,has model,R120964,R-Transformer,"Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, it severely suffers from two issues: impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoids their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. We have made the code publicly available at \url{this https URL}.",TRUE,acronym
R132,Computer Sciences,R129585,"Entity, Relation, and Event Extraction with Contextualized Span Representations",S515312,R129586,has model,R116638,DYGIE++,"We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.",TRUE,acronym
R132,Computer Sciences,R131092,Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network,S521641,R131093,has model,R121352,GR-ConvNet,"In this paper, we present a modular robotic system to tackle the problem of generating and performing antipodal robotic grasps for unknown objects from the n-channel image of the scene. We propose a novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (∼20ms). We evaluate the proposed model architecture on standard datasets and a diverse set of household objects. We achieved state-of-the-art accuracy of 97.7% and 94.6% on Cornell and Jacquard grasping datasets, respectively. We also demonstrate a grasp success rate of 95.4% and 93% on household and adversarial objects, respectively, using a 7 DoF robotic arm.",TRUE,acronym
R132,Computer Sciences,R134494,HDLTex: Hierarchical Deep Learning for Text Classification,S532243,R134495,has model,R125975,HDLTex,"Increasingly large document collections require improved information processing methods for searching, retrieving, and organizing text. Central to these information processing methods is document classification, which has become an important application for supervised learning. Recently the performance of traditional supervised classifiers has degraded as the number of documents has increased. This is because along with growth in the number of documents has come an increase in the number of categories. This paper approaches this problem differently from current document classification methods that view the problem as multi-class classification. Instead we perform hierarchical classification using an approach we call Hierarchical Deep Learning for Text classification (HDLTex). HDLTex employs stacks of deep learning architectures to provide specialized understanding at each level of the document hierarchy.",TRUE,acronym
R132,Computer Sciences,R133821,Policy Optimization With Penalized Point Probability Distance: An Alternative To Proximal Policy Optimization,S530175,R133822,has model,R124916,POP3D,"As the most successful variant and improvement for Trust Region Policy Optimization (TRPO), proximal policy optimization (PPO) has been widely applied across various domains with several advantages: efficient data utilization, easy implementation, and good parallelism. In this paper, a first-order gradient reinforcement learning algorithm called Policy Optimization with Penalized Point Probability Distance (POP3D), which is a lower bound to the square of total variance divergence is proposed as another powerful variant. Firstly, we talk about the shortcomings of several commonly used algorithms, by which our method is partly motivated. Secondly, we address to overcome these shortcomings by applying POP3D. Thirdly, we dive into its mechanism from the perspective of solution manifold. Finally, we make quantitative comparisons among several state-of-the-art algorithms based on common benchmarks. Simulation results show that POP3D is highly competitive compared with PPO. Besides, our code is released in this https URL.",TRUE,acronym
R132,Computer Sciences,R130492,QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering,S518928,R130493,has model,R120581,QA-GNN,"The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG. Here we propose a new model, QA-GNN, which addresses the above challenges through two key innovations: (i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and (ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph-based message passing. We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.",TRUE,acronym
R132,Computer Sciences,R130521,When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute,S519078,R130541,has model,R121013,SRU++,"Large language models have become increasingly difficult to train because of the growing computation time and cost. In this work, we present SRU++, a highly-efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling tasks such as Enwik8, Wiki-103 and Billion Word datasets, our model obtains better bits-per-character and perplexity while using 3x-10x less training cost compared to top-performing Transformer models. For instance, our model achieves a state-of-the-art result on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention for near state-of-the-art performance. Our results suggest jointly leveraging fast recurrence with little attention as a promising direction for accelerating model training and inference.",TRUE,acronym
R132,Computer Sciences,R135202,"Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction",S534677,R135203,has model,R128075,TDMS-IE,"While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.",TRUE,acronym
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538725,R136069,keywords,R136071,OER,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,acronym
R233,Data Storage Systems,R135474,An Ontology-Based Approach for Curriculum Mapping in Higher Education,S536084,R135476,Development in,R135542,OWL,"Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.",TRUE,acronym
R135,Databases/Information Systems,R6100,A fast method based on multiple clustering for name disambiguation in bibliographic citations,S6295,R6101,dataset,R6065,DBLP,"Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse‐to‐fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state‐of‐the‐art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.",TRUE,acronym
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S535787,R135463,Algorithm,R135461,HSP,"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,acronym
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S539297,R135479,Development in,R135542,OWL," Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ",TRUE,acronym
R135,Databases/Information Systems,R2047,Capturing Knowledge in Semantically-typed Relational Patterns to Enhance Relation Linking,S2082,R2060,uses,R2064,PATTY,"Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.",TRUE,acronym
R135,Databases/Information Systems,R2047,Capturing Knowledge in Semantically-typed Relational Patterns to Enhance Relation Linking,S2078,R2058,presents,R2063,SIBKB,"Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.",TRUE,acronym
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S352123,R77125,Has approach,R77016,SPARQL,"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,acronym
R135,Databases/Information Systems,R77008,Random Walk TripleRush: Asynchronous Graph Querying and Sampling,S507849,R77010,Has implementation,L366120,SPARQL,"Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.",TRUE,acronym
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S352125,R77125,Has implementation,R34857,RDF-3X,"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,acronym
R142,Earth Sciences,R140548,"ASTER Data Analyses for Lithological Discrimination of Sittampundi Anorthositic Complex, Southern India",S561842,R140550,Data used,R140646,ASTER ,"ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).",TRUE,acronym
R142,Earth Sciences,R140556,"An image processing approach for converging ASTER-derived spectral maps for mapping Kolhan limestone, Jharkhand, India",S561840,R140557,Data used,R140646,ASTER ,"In the present study, we have attempted the delineation of limestone using different spectral mapping algorithms in ASTER data. Each spectral mapping algorithm derives limestone exposure map independently. Although these spectral maps are broadly similar to each other, they are also different at places in terms of spatial disposition of limestone pixels. Therefore, an attempt is made to integrate the results of these spectral maps to derive an integrated map using minimum noise fraction (MNF) method. The first MNF image is the result of two cascaded principal component methods suitable for preserving complementary information derived from each spectral map. While implementing MNF, noise or non-coherent pixels occurring within a homogeneous patch of limestone are removed first using shift difference method, before attempting principal component analysis on input spectral maps for deriving composite spectral map of limestone exposures. The limestone exposure map is further validated based on spectral data and ancillary geological data.",TRUE,acronym
R142,Earth Sciences,R140698,"Mapping Hydrothermally Altered Rocks at Cuprite, Nevada, Using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a New Satellite-Imaging System",S562507,R140700,Data used,R140646,ASTER ,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a 14-band multispectral instrument on board the Earth Observing System (EOS), TERRA. The three bands between 0.52 and 0.86 μ m and the six bands from 1.60 and 2.43 μ m, which have 15- and 30-m spatial resolution, respectively, were selected primarily for making remote mineralogical determinations. The Cuprite, Nevada, mining district comprises two hydrothermal alteration centers where Tertiary volcanic rocks have been hydrothermally altered mainly to bleached silicified rocks and opalized rocks, with a marginal zone of limonitic argillized rocks. Country rocks are mainly Cambrian phyllitic siltstone and limestone. Evaluation of an ASTER image of the Cuprite district shows that spectral reflectance differences in the nine bands in the 0.52 to 2.43 μ m region provide a basis for identifying and mapping mineralogical components which characterize the main hydrothermal alteration zones: opal is the spectrally dominant mineral in the silicified zone; whereas, alunite and kaolinite are dominant in the opalized zone. In addition, the distribution of unaltered country rocks was mapped because of the presence of spectrally dominant muscovite in the siltstone and calcite in limestone, and the tuffaceous rocks and playa deposits were distinguishable due to their relatively flat spectra and weak absorption features at 2.33 and 2.20 μ m, respectively. An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image of the study area was processed using a similar methodology used with the ASTER data. 
Comparison of the ASTER and AVIRIS results shows that the results are generally similar, but the higher spectral resolution of AVIRIS (224 bands) permits identification of more individual minerals, including certain polymorphs. However, ASTER has recorded images of more than 90 percent of the Earth’s land surface with less than 20 percent cloud cover, and these data are available at nominal or no cost. Landsat TM images have a similar spatial resolution to ASTER images, but TM has fewer bands, which limits its usefulness for making mineral determinations.",TRUE,acronym
R142,Earth Sciences,R140706,Spectral indices for lithologic discrimination and mapping by using the ASTER SWIR bands,S562485,R140708,Data used,R140646,ASTER ,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.",TRUE,acronym
R142,Earth Sciences,R147402,"Pegmatite spectral behavior considering ASTER and Landsat 8 OLI data in Naipa and Muiane mines (Alto Ligonha, Mozambique)",S591385,R147404,Other datasets,R140646,ASTER ,"The Naipa and Muiane mines are located on the Nampula complex, a stratigraphic tectonic subdivision of the Mozambique Belt, in the Alto Ligonha region. The pegmatites are of the Li-Cs-Ta type, intrude a chlorite phyllite and gneisses with amphibole and biotite. The mines are still active. The main objective of this work was to analyze the pegmatite’s spectral behavior considering ASTER and Landsat 8 OLI data. An ASTER image from 27/05/2005, and an image Landsat OLI image from 02/02/2018 were considered. The data were radiometric calibrated and after atmospheric corrected considered the Dark Object Subtraction algorithm available in the Semi-Automatic Classification Plugin accessible in QGIS software. In the field, samples were collected from lepidolite waste pile in Naipa and Muaine mines. A spectroadiometer was used in order to analyze the spectral behavior of several pegmatite’s samples collected in the field in Alto Ligonha (Naipa and Muiane mines). In addition, QGIS software was also used for the spectral mapping of the hypothetical hydrothermal alterations associated with occurrences of basic metals, beryl gemstones, tourmalines, columbite-tantalites, and lithium minerals. A supervised classification algorithm was employed - Spectral Angle Mapper for the data processing, and the overall accuracy achieved was 80%. The integration of ASTER and Landsat 8 OLI data have proved very useful for pegmatite’s mapping. From the results obtained, we can conclude that: (i) the combination of ASTER and Landsat 8 OLI data allows us to obtain more information about mineral composition than just one sensor, i.e., these two sensors are complementary; (ii) the alteration spots identified in the mines area are composed of clay minerals. 
In the future, more data and others image processing algorithms can be applied in order to identify the different Lithium minerals, as spodumene, petalite, amblygonite and lepidolite.",TRUE,acronym
R142,Earth Sciences,R147485,Detection of Pb–Zn mineralization zones in west Kunlun using Landsat 8 and ASTER remote sensing data,S591666,R147487,Other datasets,R140646,ASTER ,"Abstract. The integration of Landsat 8 OLI and ASTER data is an efficient tool for interpreting lead–zinc mineralization in the Huoshaoyun Pb–Zn mining region located in the west Kunlun mountains at high altitude and very rugged terrain, where traditional geological work becomes limited and time-consuming. This task was accomplished by using band ratios (BRs), principal component analysis, and spectral matched filtering methods. It is concluded that some BR color composites and principal components of each imagery contain useful information for lithological mapping. SMF technique is useful for detecting lead–zinc mineralization zones, and the results could be verified by handheld portable X-ray fluorescence analysis. Therefore, the proposed methodology shows strong potential of Landsat 8 OLI and ASTER data in lithological mapping and lead–zinc mineralization zone extraction in carbonate stratum.",TRUE,acronym
R142,Earth Sciences,R147491,"Lithological mapping using Landsat 8 OLI and Terra ASTER multispectral data in the Bas Drâa inlier, Moroccan Anti Atlas",S591682,R147493,Other datasets,R140646,ASTER ,"Abstract. Lithological mapping is a fundamental step in various mineral prospecting studies because it forms the basis of the interpretation and validation of retrieved results. Therefore, this study exploited the multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat 8 Operational Land Imager (OLI) data in order to map lithological units in the Bas Drâa inlier, at the Moroccan Anti Atlas. This task was completed by using principal component analysis (PCA), band ratios (BR), and support vector machine (SVM) classification. Overall accuracy and the kappa coefficient of SVM based on ground truth in addition to the results of PCA and BR show an excellent correlation with the existing geological map of the study area. Consequently, the methodology proposed demonstrates a high potential of ASTER and Landsat 8 OLI data in lithological units discrimination.",TRUE,acronym
R142,Earth Sciences,R140698,"Mapping Hydrothermally Altered Rocks at Cuprite, Nevada, Using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a New Satellite-Imaging System",S562506,R140700,Data used,R108176,AVIRIS,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a 14-band multispectral instrument on board the Earth Observing System (EOS), TERRA. The three bands between 0.52 and 0.86 μ m and the six bands from 1.60 and 2.43 μ m, which have 15- and 30-m spatial resolution, respectively, were selected primarily for making remote mineralogical determinations. The Cuprite, Nevada, mining district comprises two hydrothermal alteration centers where Tertiary volcanic rocks have been hydrothermally altered mainly to bleached silicified rocks and opalized rocks, with a marginal zone of limonitic argillized rocks. Country rocks are mainly Cambrian phyllitic siltstone and limestone. Evaluation of an ASTER image of the Cuprite district shows that spectral reflectance differences in the nine bands in the 0.52 to 2.43 μ m region provide a basis for identifying and mapping mineralogical components which characterize the main hydrothermal alteration zones: opal is the spectrally dominant mineral in the silicified zone; whereas, alunite and kaolinite are dominant in the opalized zone. In addition, the distribution of unaltered country rocks was mapped because of the presence of spectrally dominant muscovite in the siltstone and calcite in limestone, and the tuffaceous rocks and playa deposits were distinguishable due to their relatively flat spectra and weak absorption features at 2.33 and 2.20 μ m, respectively. An Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image of the study area was processed using a similar methodology used with the ASTER data. 
Comparison of the ASTER and AVIRIS results shows that the results are generally similar, but the higher spectral resolution of AVIRIS (224 bands) permits identification of more individual minerals, including certain polymorphs. However, ASTER has recorded images of more than 90 percent of the Earth’s land surface with less than 20 percent cloud cover, and these data are available at nominal or no cost. Landsat TM images have a similar spatial resolution to ASTER images, but TM has fewer bands, which limits its usefulness for making mineral determinations.",TRUE,acronym
R142,Earth Sciences,R140710,"Simple mineral mapping algorithm based on multitype spectral diagnostic absorption features: a case study at Cuprite, Nevada",S562467,R140712,Data used,R108176,AVIRIS,"Abstract. Hyperspectral remote sensing has been widely used in mineral identification using the particularly useful short-wave infrared (SWIR) wavelengths (1.0 to 2.5 μm). Current mineral mapping methods are easily limited by the sensor’s radiometric sensitivity and atmospheric effects. Therefore, a simple mineral mapping algorithm (SMMA) based on the combined application with multitype diagnostic SWIR absorption features for hyperspectral data is proposed. A total of nine absorption features are calculated, respectively, from the airborne visible/infrared imaging spectrometer data, the Hyperion hyperspectral data, and the ground reference spectra data collected from the United States Geological Survey (USGS) spectral library. Based on spectral analysis and statistics, a mineral mapping decision-tree model for the Cuprite mining district in Nevada, USA, is constructed. Then, the SMMA algorithm is used to perform mineral mapping experiments. The mineral map from the USGS (USGS map) in the Cuprite area is selected for validation purposes. Results showed that the SMMA algorithm is able to identify most minerals with high coincidence with USGS map results. Compared with Hyperion data (overall accuracy=74.54%), AVIRIS data showed overall better mineral mapping results (overall accuracy=94.82%) due to low signal-to-noise ratio and high spatial resolution.",TRUE,acronym
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620300,R155125,yields,R155145,SIDSAM,"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (κ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (κ), 91.4% and 0.90, and 94.4% and 0.93, respectively. 
Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,acronym
R142,Earth Sciences,R160558,Classification of Iowa wetlands using an airborne hyperspectral image: a comparison of the spectral angle mapper classifier and an object-oriented approach,S640302,R160560,Softwares,R160556,eCognition,"Wetlands mapping using multispectral imagery from Landsat multispectral scanner (MSS) and thematic mapper (TM) and Système pour l'observation de la Terre (SPOT) does not in general provide high classification accuracies because of poor spectral and spatial resolutions. This study tests the feasibility of using high-resolution hyperspectral imagery to map wetlands in Iowa with two nontraditional classification techniques: the spectral angle mapper (SAM) method and a new nonparametric object-oriented (OO) classification. The software programs used were ENVI and eCognition. Accuracies of these classified images were assessed by using the information collected through a field survey with a global positioning system and high-resolution color infrared images. Wetlands were identified more accurately with the OO method (overall accuracy 92.3%) than with SAM (63.53%). This paper also discusses the limitations of these classification techniques for wetlands, as well as discussing future directions for study.",TRUE,acronym
R142,Earth Sciences,R160584,Spectral angle mapper and object-based classification combined with hyperspectral remote sensing imagery for obtaining land use/cover mapping in a Mediterranean region,S640522,R160586,Datasets,R160594,Quickbird-2,"In this study, we test the potential of two different classification algorithms, namely the spectral angle mapper (SAM) and object-based classifier for mapping the land use/cover characteristics using a Hyperion imagery. We chose a study region that represents a typical Mediterranean setting in terms of landscape structure, composition and heterogeneous land cover classes. Accuracy assessment of the land cover classes was performed based on the error matrix statistics. Validation points were derived from visual interpretation of multispectral high resolution QuickBird-2 satellite imagery. Results from both the classifiers yielded more than 70% classification accuracy. However, the object-based classification clearly outperformed the SAM by 7.91% overall accuracy (OA) and a relatively high kappa coefficient. Similar results were observed in the classification of the individual classes. Our results highlight the potential of hyperspectral remote sensing data as well as object-based classification approach for mapping heterogeneous land use/cover in a typical Mediterranean setting.",TRUE,acronym
R142,Earth Sciences,R143763,Development and utilization of urban spectral library for remote sensing of urban environment,S575754,R143765,Sensors,R32679,WorldView-2,"Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using ASD FieldSpec 3 Spectroradiometer with wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectral data was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.",TRUE,acronym
R142,Earth Sciences,R9094,Development and evaluation of an Earth-System model – HadGEM2,S14329,R9095,has research problem,R9138,CMIP5,"Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R135750,Characterization and comparison of poorly known moth communities through DNA barcoding in two Afrotropical environments in Gabon,S537034,R135752,higher number estimated species (Method),R138559,BINs,"Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629309,R156994,higher number estimated species (Method),R138559,BINs,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S629155,R156958,higher number estimated species (Method),R138559,BINs,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. 
From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142517,"A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding‐based biomonitoring",S624738,R155788,higher number estimated species (Method),R138559,BINs,"This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species‐level assignment, so called “dark taxa.” Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. 
The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the “taxonomic impediment”; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species‐rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157039,DNA barcode library for European Gelechiidae (Lepidoptera) suggests greatly underestimated species diversity,S629579,R157043,higher number estimated species (Method),R138559,BINs,"For the first time, a nearly complete barcode library for European Gelechiidae is provided. DNA barcode sequences (COI gene - cytochrome c oxidase 1) from 751 out of 865 nominal species, belonging to 105 genera, were successfully recovered. A total of 741 species represented by specimens with sequences ≥ 500bp and an additional ten species represented by specimens with shorter sequences were used to produce 53 NJ trees. Intraspecific barcode divergence averaged only 0.54% whereas distance to the Nearest-Neighbour species averaged 5.58%. Of these, 710 species possessed unique DNA barcodes, but 31 species could not be reliably discriminated because of barcode sharing or partial barcode overlap. Species discrimination based on the Barcode Index System (BIN) was successful for 668 out of 723 species which clustered from minimum one to maximum 22 unique BINs. Fifty-five species shared a BIN with up to four species and identification from DNA barcode data is uncertain. Finally, 65 clusters with a unique BIN remained unidentified to species level. These putative taxa, as well as 114 nominal species with more than one BIN, suggest the presence of considerable cryptic diversity, cases which should be examined in future revisionary studies.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157056,A DNA Barcode Library for North American Pyraustinae (Lepidoptera: Pyraloidea: Crambidae),S629664,R157057,higher number estimated species (Method),R138559,BINs,"Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556422,R139510,lower number estimated species (Method),R138559,BINs,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157051,"A Transcontinental Challenge — A Test of DNA Barcode Performance for 1,541 Species of Canadian Noctuoidea (Lepidoptera)",S629614,R157052,lower number estimated species (Method),R138559,BINs,"This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or “owlet” moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. 
Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R135750,Characterization and comparison of poorly known moth communities through DNA barcoding in two Afrotropical environments in Gabon,S537036,R135752,No. of estimated species (Method),R138559,BINs,"Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629314,R156994,No. of estimated species (Method),R138559,BINs,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S629160,R156958,No. of estimated species (Method),R138559,BINs,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. 
From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556430,R139510,No. of estimated species (Method),R138559,BINs,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,acronym
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157056,A DNA Barcode Library for North American Pyraustinae (Lepidoptera: Pyraloidea: Crambidae),S629672,R157057,No. of estimated species (Method),R138559,BINs,"Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57662,"Insect herbivore faunal diversity among invasive, non-invasive and native Eugenia species: Implications for the enemy release hypothesis",S199017,R57663,Sub-hypothesis,L125059,ANI,"Abstract The enemy release hypothesis (ERH) frequently has been invoked to explain the naturalization and spread of introduced species. One ramification of the ERH is that invasive plants sustain less herbivore pressure than do native species. Empirical studies testing the ERH have mostly involved two-way comparisons between invasive introduced plants and their native counterparts in the invaded region. Testing the ERH would be more meaningful if such studies also included introduced non-invasive species because introduced plants, regardless of their abundance or impact, may support a reduced insect herbivore fauna and experience less damage. In this study, we employed a three-way comparison, in which we compared herbivore faunas among native, introduced invasive, and introduced non-invasive plants in the genus Eugenia (Myrtaceae) which all co-occur in South Florida. We observed a total of 25 insect species in 12 families and 6 orders feeding on the six species of Eugenia. Of these insect species, the majority were native (72%), polyphagous (64%), and ectophagous (68%). We found that invasive introduced Eugenia has a similar level of herbivore richness as both the native and the non-invasive introduced Eugenia. However, the numbers and percentages of oligophagous insect species were greatest on the native Eugenia, but they were not different between the invasive and non-invasive introduced Eugenia. One oligophagous endophagous insect has likely shifted from the native to the invasive, but none to the non-invasive Eugenia. In summary, the invasive Eugenia encountered equal, if not greater, herbivore pressure than the non-invasive Eugenia, including from oligophagous and endophagous herbivores. Our data only provided limited support to the ERH. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57761,Community structure of insect herbivores on introduced and native Solidago plants in Japan,S200319,R57762,Sub-hypothesis,L126163,ANI,"We compared community composition, density, and species richness of herbivorous insects on the introduced plant Solidago altissima L. (Asteraceae) and the related native species Solidago virgaurea L. in Japan. We found large differences in community composition on the two Solidago species. Five hemipteran sap feeders were found only on S. altissima. Two of them, the aphid Uroleucon nigrotuberculatum Olive (Hemiptera: Aphididae) and the scale insect Parasaissetia nigra Nietner (Hemiptera: Coccidae), were exotic species, accounting for 62% of the total individuals on S. altissima. These exotic sap feeders mostly determined the difference of community composition on the two plant species. In contrast, the herbivore community on S. virgaurea consisted predominately of five native insects: two lepidopteran leaf chewers and three dipteran leaf miners. Overall species richness did not differ between the plants because the increased species richness of sap feeders was offset by the decreased richness of leaf chewers and leaf miners on S. altissima. The overall density of herbivorous insects was higher on S. altissima than on S. virgaurea, because of the high density of the two exotic sap feeding species on S. altissima. We discuss the importance of analyzing community composition in terms of feeding guilds of insect herbivores for understanding how communities of insect herbivores are organized on introduced plants in novel habitats.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57912,Parasites and genetic diversity in an invasive bumblebee,S202292,R57913,Sub-hypothesis,L127834,ANI,"Biological invasions are facilitated by the global transportation of species and climate change. Given that invasions may cause ecological and economic damage and pose a major threat to biodiversity, understanding the mechanisms behind invasion success is essential. Both the release of non-native populations from natural enemies, such as parasites, and the genetic diversity of these populations may play key roles in their invasion success. We investigated the roles of parasite communities, through enemy release and parasite acquisition, and genetic diversity in the invasion success of the non-native bumblebee, Bombus hypnorum, in the United Kingdom. The invasive B. hypnorum had higher parasite prevalence than most, or all native congeners for two high-impact parasites, probably due to higher susceptibility and parasite acquisition. Consequently parasites had a higher impact on B. hypnorum queens’ survival and colony-founding success than on native species. Bombus hypnorum also had lower functional genetic diversity at the sex-determining locus than native species. Higher parasite prevalence and lower genetic diversity have not prevented the rapid invasion of the United Kingdom by B. hypnorum. These data may inform our understanding of similar invasions by commercial bumblebees around the world. This study suggests that concerns about parasite impacts on the small founding populations common to re-introduction and translocation programs may be less important than currently believed.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57948,The parasite community of gobiid fishes (Actinopterygii: Gobiidae) from the Lower Volga River region,S202774,R57949,Sub-hypothesis,L128244,ANI,"Abstract The parasitic fauna in the lower Volga River basin was investigated for four gobiid species: the nonindigenous monkey goby Neogobius fluviatilis (Pallas, 1814), the round goby N. melanostomus (Pallas, 1814), the Caspian bighead goby Ponticola gorlap (Iljin, 1949), and the tubenose goby Proterorhinus cf. semipellucidus (Kessler, 1877). In total, 19 species of goby parasites were identified, of which two - Bothriocephalus opsariichthydis Yamaguti, 1934 and Nicolla skrjabini (Iwanitzki, 1928) - appeared to have been introduced from other geographic regions. The monkey goby had significantly fewer parasitic species (6), but relatively high levels of infection, in comparison to the native species. Parasitism of the Caspian bighead goby, which is the only predatory fish among the studied gobies, differed from the others according to the results of discriminant analysis. The parasitic fauna of the tubenose goby more closely resembled those of Caspian Sea gobiids, rather than the Black Sea monkey goby.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57982,Comparison of parasite diversity in native panopeid mud crabs and the invasive Asian shore crab in estuaries of northeast North America,S203214,R57983,Sub-hypothesis,L128616,ANI,"Numerous non-indigenous species (NIS) have successfully established in new locales, where they can have large impacts on community and ecosystem structure. A loss of natural enemies, such as parasites, is one mechanism proposed to contribute to that success. While several studies have shown NIS are initially less parasitized than native conspecifics, fewer studies have investigated whether parasite richness changes over time. Moreover, evaluating the role that parasites have in invaded communities requires not only an understanding of the parasite diversity of NIS but also the species with which they interact; yet parasite diversity in native species may be inadequately quantified. In our study, we examined parasite taxonomic richness, infection prevalence, and infection intensity in the invasive Asian shore crab Hemigrapsus sanguineus De Haan, 1835 and two native mud crabs (Panopeus herbstii Milne-Edwards, 1834 and Eurypanopeus depressus Smith, 1869) in estuarine and coastal communities along the east coast of the USA. We also examined reproductive tissue allocation (i.e., the proportion of gonad weight to total body weight) in all three crabs to explore possible differences in infected versus uninfected crabs. We found three parasite taxa infecting H. sanguineus and four taxa infecting mud crabs, including a rhizocephalan castrator (Loxothylacus panopaei) parasitizing E. depressus. Moreover, we documented a significant negative relationship between parasite escape and time for H. sanguineus, including a new 2015 record of a native microphallid trematode. Altogether, there was no significant difference in taxonomic richness among the crab species. Across parasite taxa, H. sanguineus demonstrated significantly lower infection prevalence compared to P. herbstii; yet a multivariate analysis of taxa-specific prevalence demonstrated no significant differences among crabs. Finally, infected P. herbstii had the highest proportion of gonad weight to total body weight. Our study finds some evidence for lower infection prevalence in the non-native versus the native hosts. However, we also demonstrate that parasite escape can lessen with time. Our work has implications for the understanding of the potential influence parasites may have on the future success of NIS in introduced regions.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57623,Natural-enemy release facilitates habitat expansion of the invasive tropical shrub Clidemia hirta,S198518,R57624,Sub-hypothesis,L124638,HAD,"Nonnative, invasive plant species often increase in growth, abundance, or habitat distribution in their introduced ranges. The enemy-release hypothesis, proposed to account for these changes, posits that herbivores and pathogens (natural enemies) limit growth or survival of plants in native areas, that natural enemies have less impact in the introduced than in the native range, and that the release from natural-enemy regulation in areas of introduction accounts in part for observed changes in plant abundance. We tested experimentally the enemy-release hypothesis with the invasive neotropical shrub Clidemia hirta (L.) D. Don (Melastomataceae). Clidemia hirta does not occur in forest in its native range but is a vigorous invader of tropical forest in its introduced range. Therefore, we tested the specific prediction that release from natural enemies has contributed to its expanded habitat distribution. We planted C. hirta into understory and open habitats where it is native (Costa Rica) and where it has been introduced (Hawaii) and applied pesticides to examine the effects of fungal pathogen and insect herbivore exclusion. In understory sites in Costa Rica, C. hirta survival increased by 12% if sprayed with insecticide, 19% with fungicide, and 41% with both insecticide and fungicide compared to control plants sprayed only with water. Exclusion of natural enemies had no effect on survival in open sites in Costa Rica or in either habitat in Hawaii. Fungicide application promoted relative growth rates of plants that survived to the end of the experiment in both habitats of Costa Rica but not in Hawaii, suggesting that fungal pathogens only limit growth of C. hirta where it is native. Galls, stem borers, weevils, and leaf rollers were prevalent in Costa Rica but absent in Hawaii. In addition, the standing percentage of leaf area missing on plants in the control (water only) treatment was five times greater on plants in Costa Rica than in Hawaii and did not differ between habitats. The results from this study suggest that significant effects of herbivores and fungal pathogens may be limited to particular habitats. For Clidemia hirta, its absence from forest understory in its native range likely results in part from the strong pressures of natural enemies. Its invasion into Hawaiian forests is apparently aided by a release from these herbivores and pathogens.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57700,The invasive shrub Buddleja davidii performs better in its introduced range,S199532,R57701,Sub-hypothesis,L125498,HAD,"It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57803,Testing hypotheses for exotic plant success: parallel experiments in the native and introduced ranges,S200842,R57804,Sub-hypothesis,L126602,HAD,"A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R57889,"Biogeographic comparisons of herbivore attack, growth and impact of Japanese knotweed between Japan and France",S201987,R57890,Sub-hypothesis,L127575,HAD,"To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non‐native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non‐native ranges. We performed a cross‐range full descriptive, field study in Japan (native range) and France (non‐native range). We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co‐occurrence of Fallopia japonica with other plant species of herbaceous communities. Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far less phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better – plant vigour and dominance in the herbaceous community – in its non‐native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non‐native ranges. Synthesis. As the process by which invasive plants successfully invade ecosystems in their non‐native range is probably multifactorial in most cases, examining several components – plant growth, herbivory load, impact on recipient systems – of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common.",TRUE,acronym
R24,Ecology and Evolutionary Biology,R53377,Testing Darwin's naturalization hypothesis in the Azores,S163508,R53378,Measure of species relationship,L98915,PNND,"Invasive species are a threat for ecosystems worldwide, especially oceanic islands. Predicting the invasive potential of introduced species remains difficult, and only a few studies have found traits correlated to invasiveness. We produced a molecular phylogenetic dataset and an ecological trait database for the entire Azorean flora and find that the phylogenetic nearest neighbour distance (PNND), a measure of evolutionary relatedness, is significantly correlated with invasiveness. We show that introduced plant species are more likely to become invasive in the absence of closely related species in the native flora of the Azores, verifying Darwin's 'naturalization hypothesis'. In addition, we find that some ecological traits (especially life form and seed size) also have predictive power on invasive success in the Azores. Therefore, we suggest a combination of PNND with ecological trait values as a universal predictor of invasiveness that takes into account characteristics of both introduced species and receiving ecosystem.",TRUE,acronym
R194,Engineering,R139969,A Reliable Liquid-Based CMOS MEMS Micro Thermal Convective Accelerometer With Enhanced Sensitivity and Limit of Detection,S558889,R139971,keywords,L392796,CMOS,"In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the 0.35 μm CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) (61.9 μg) compared with the air-based MTCA. [2021-0092]",TRUE,acronym
R194,Engineering,R144807,Solar blind deep ultraviolet β-Ga2O3 photodetectors grown on sapphire by the Mist-CVD method,S580084,R144810,keywords,L405545,CVD,"In this report, we demonstrate high spectral responsivity (SR) solar blind deep ultraviolet (UV) β-Ga2O3 metal-semiconductor-metal (MSM) photodetectors grown by the mist chemical-vapor deposition (Mist-CVD) method. The β-Ga2O3 thin film was grown on c-plane sapphire substrates, and the fabricated MSM PDs with Al contacts in an interdigitated geometry were found to exhibit peak SR > 150 A/W for the incident light wavelength of 254 nm at a bias of 20 V. The devices exhibited very low dark current, about 14 pA at 20 V, and showed sharp transients with a photo-to-dark current ratio > 10^5. The corresponding external quantum efficiency is over 7 × 10^4%. The excellent deep UV β-Ga2O3 photodetectors will enable significant advancements for the next-generation photodetection applications.",TRUE,acronym
R194,Engineering,R139969,A Reliable Liquid-Based CMOS MEMS Micro Thermal Convective Accelerometer With Enhanced Sensitivity and Limit of Detection,S558888,R139971,keywords,L392795,MEMS,"In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the 0.35 μm CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) (61.9 μg) compared with the air-based MTCA. [2021-0092]",TRUE,acronym
R194,Engineering,R141130,Effects of surface roughness on electromagnetic characteristics of capacitive switches,S564098,R141132,keywords,L395862,MEMS,"This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.",TRUE,acronym
R194,Engineering,R145527,Performance Investigation of an n-Type Tin-Oxide Thin Film Transistor by Channel Plasma Processing,S582759,R145529,keywords,L407015,TCAD,"In this paper, we investigated the performance of an n-type tin-oxide (SnOx) thin film transistor (TFT) by experiments and simulation. The fabricated SnOx TFT device by oxygen plasma treatment on the channel exhibited n-type conduction with an on/off current ratio of 4.4×10^4, a high field-effect mobility of 18.5 cm2/V.s and a threshold swing of 405 mV/decade, which could be attributed to the excess reacted oxygen incorporated to the channel to form the oxygen-rich n-type SnOx. Furthermore, a TCAD simulation based on the n-type SnOx TFT device was performed by fitting the experimental data to investigate the effect of the channel traps on the device performance, indicating that performance enhancements were further achieved by suppressing the density of channel traps. In addition, the n-type SnOx TFT device exhibited high stability upon illumination with visible light. The results show that the n-type SnOx TFT device by channel plasma processing has considerable potential for next-generation high-performance display application.",TRUE,acronym
R194,Engineering,R141873,Preparation of highly c-axis oriented AlN thin films on Hastelloy tapes with Y2O3 buffer layer for flexible SAW sensor applications,S569157,R141876,substrate,L399440,Y2O3/Hastelloy,"Highly c-axis oriented aluminum nitride (AlN) films were successfully deposited on flexible Hastelloy tapes by middle-frequency magnetron sputtering. The microstructure and piezoelectric properties of the AlN films were investigated. The results show that the AlN films deposited directly on the bare Hastelloy substrate have rough surface with root mean square (RMS) roughness of 32.43 nm and its full width at half maximum (FWHM) of the AlN (0002) peak is 12.5°. However, the AlN films deposited on the Hastelloy substrate with Y2O3 buffer layer show smooth surface with RMS roughness of 5.46 nm and its FWHM of the AlN (0002) peak is only 3.7°. The piezoelectric coefficient d33 of the AlN films deposited on the Y2O3/Hastelloy substrate is larger than three times that of the AlN films deposited on the bare Hastelloy substrate. The prepared highly c-axis oriented AlN films can be used to develop high-temperature flexible SAW sensors.",TRUE,acronym
R194,Engineering,R139963,"Theoretical Modeling, Numerical Simulations and Experimental Study of Micro Thermal Convective Accelerometers",S558861,R139968,Sensitivity (mV/g),L392772,"1,289","We present a one-dimensional (1D) theoretical model for the design analysis of a micro thermal convective accelerometer (MTCA). Systematical design analysis was conducted on the sensor performance covering the sensor output, sensitivity, and power consumption. The sensor output was further normalized as a function of normalized input acceleration in terms of Rayleigh number Ra (the product of Grashof number Gr and Prandtl number Pr) for different fluids. A critical Rayleigh number (Rac = 3,000) is found, for the first time, to determine the boundary between the linear and nonlinear response regime of MTCA. Based on the proposed 1D model, key parameters, including the location of the detectors, sensor length, thin film thickness, cavity height, heater temperature, and fluid types, were optimized to improve sensor performance. Accordingly, a CMOS compatible MTCA was designed and fabricated based on the theoretical analysis, which showed a high sensitivity of 1,289 mV/g. Therefore, this efficient 1D model, one million times faster than CFD simulation, can be a promising tool for the system-level CMOS MEMS design.",TRUE,number
R145,Environmental Sciences,R23273,"The ACCESS coupled model: description, control climate and evaluation",S72156,R23274,has name,L44986,ACCESS1.0,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,acronym
R145,Environmental Sciences,R9221,"The ACCESS coupled model: description, control climate and evaluation",S14620,R9222,has name,L8976,ACCESS1.0 ,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,acronym
R145,Environmental Sciences,R9221,"The ACCESS coupled model: description, control climate and evaluation",S14632,R9228,has name,L8977,ACCESS1.3,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,acronym
R145,Environmental Sciences,R23443,"The Norwegian Earth System Model, NorESM1-M – Part 1: Description and basic evaluation of the physical climate",S73039,R23444,has name,L45672,NORESM1-M,"Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry–aerosol–cloud–radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2° for the atmosphere and land components and 1° for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.",TRUE,acronym
R145,Environmental Sciences,R8034,An Overview of CMIP5 and the Experiment Design,S12094,R8035,has research problem,R8038,CMIP5,"The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the- art multimodel dataset designed to advance our knowledge of climate variability and climate change. Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes “long term” simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere–ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the longterm experiments, CMIP5 calls for an entirely new suite of “near term” simulations focusing on recent decades...",TRUE,acronym
R145,Environmental Sciences,R8061,Wind extremes in the North Sea Basin under climate change: An ensemble study of 12 CMIP5 GCMs: WIND EXTREMES IN THE NORTH SEA IN CMIP5,S12139,R8062,has research problem,R8063,CMIP5,"Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate‐change‐related upward trend in storminess in the North Sea area.",TRUE,acronym
R145,Environmental Sciences,R9221,"The ACCESS coupled model: description, control climate and evaluation",S14630,R9228,has research problem,R9223,CMIP5,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,acronym
R145,Environmental Sciences,R23273,"The ACCESS coupled model: description, control climate and evaluation",S72190,R23274,has research problem,R9223,CMIP5,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,acronym
R145,Environmental Sciences,R23398,Development and evaluation of an Earth-System model – HadGEM2,S72831,R23399,has research problem,R9223,CMIP5,"Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.",TRUE,acronym
R145,Environmental Sciences,R23443,"The Norwegian Earth System Model, NorESM1-M – Part 1: Description and basic evaluation of the physical climate",S73075,R23444,has research problem,R9223,CMIP5,"Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry–aerosol–cloud–radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2° for the atmosphere and land components and 1° for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.",TRUE,acronym
R145,Environmental Sciences,R23457,Evaluation of the carbon cycle components in the Norwegian Earth System Model (NorESM),S73140,R23458,has research problem,R9223,CMIP5,"Abstract. The recently developed Norwegian Earth System Model (NorESM) is employed for simulations contributing to the CMIP5 (Coupled Model Intercomparison Project phase 5) experiments and the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC-AR5). In this manuscript, we focus on evaluating the ocean and land carbon cycle components of the NorESM, based on the preindustrial control and historical simulations. Many of the observed large scale ocean biogeochemical features are reproduced satisfactorily by the NorESM. When compared to the climatological estimates from the World Ocean Atlas (WOA), the model simulated temperature, salinity, oxygen, and phosphate distributions agree reasonably well in both the surface layer and deep water structure. However, the model simulates a relatively strong overturning circulation strength that leads to noticeable model-data bias, especially within the North Atlantic Deep Water (NADW). This strong overturning circulation slightly distorts the structure of the biogeochemical tracers at depth. Advancements in simulating the oceanic mixed layer depth with respect to the previous generation model particularly improve the surface tracer distribution as well as the upper ocean biogeochemical processes, particularly in the Southern Ocean. Consequently, near-surface ocean processes such as biological production and air–sea gas exchange, are in good agreement with climatological observations. The NorESM adopts the same terrestrial model as the Community Earth System Model (CESM1). It reproduces the general pattern of land-vegetation gross primary productivity (GPP) when compared to the observationally based values derived from the FLUXNET network of eddy covariance towers. While the model simulates well the vegetation carbon pool, the soil carbon pool is smaller by a factor of three relative to the observational based estimates. The simulated annual mean terrestrial GPP and total respiration are slightly larger than observed, but the difference between the global GPP and respiration is comparable. Model-data bias in GPP is mainly simulated in the tropics (overestimation) and in high latitudes (underestimation). Within the NorESM framework, both the ocean and terrestrial carbon cycle models simulate a steady increase in carbon uptake from the preindustrial period to the present-day. The land carbon uptake is noticeably smaller than the observations, which is attributed to the strong nitrogen limitation formulated by the land model.",TRUE,acronym
R33,Epidemiology,R187017,The Infectious Disease Ontology in the age of COVID-19,S715275,R187019,Dataset name,L482158,IDO-COVID-19,"Abstract Background Effective response to public health emergencies, such as we are now experiencing with COVID-19, requires data sharing across multiple disciplines and data systems. Ontologies offer a powerful data sharing tool, and this holds especially for those ontologies built on the design principles of the Open Biomedical Ontologies Foundry. These principles are exemplified by the Infectious Disease Ontology (IDO), a suite of interoperable ontology modules aiming to provide coverage of all aspects of the infectious disease domain. At its center is IDO Core, a disease- and pathogen-neutral ontology covering just those types of entities and relations that are relevant to infectious diseases generally. IDO Core is extended by disease and pathogen-specific ontology modules. Results To assist the integration and analysis of COVID-19 data, and viral infectious disease data more generally, we have recently developed three new IDO extensions: IDO Virus (VIDO); the Coronavirus Infectious Disease Ontology (CIDO); and an extension of CIDO focusing on COVID-19 (IDO-COVID-19). Reflecting the fact that viruses lack cellular parts, we have introduced into IDO Core the term acellular structure to cover viruses and other acellular entities studied by virologists. We now distinguish between infectious agents – organisms with an infectious disposition – and infectious structures – acellular structures with an infectious disposition. This in turn has led to various updates and refinements of IDO Core’s content. We believe that our work on VIDO, CIDO, and IDO-COVID-19 can serve as a model for yielding greater conformance with ontology building best practices. Conclusions IDO provides a simple recipe for building new pathogen-specific ontologies in a way that allows data about novel diseases to be easily compared, along multiple dimensions, with data represented by existing disease ontologies. The IDO strategy, moreover, supports ontology coordination, providing a powerful method of data integration and sharing that allows physicians, researchers, and public health organizations to respond rapidly and efficiently to current and future public health crises.",TRUE,acronym
R38,Genomics,R50397,The application of RNA sequencing for the diagnosis and genomic classification of pediatric acute lymphoblastic leukemia,S154152,R50404,Has method,R9306,RNA-seq,"Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.",TRUE,acronym
R146,Geology,R137118,"Spaceborne visible and thermal infrared lithologic mapping of impact-exposed subsurface lithologies at the Haughton impact structure, Devon Island, Canadian High Arctic: Applications to Mars",S541934,R137119,has dataset,L381634,ASTER,"Abstract— This study serves as a proof‐of‐concept for the technique of using visible‐near infrared (VNIR), short‐wavelength infrared (SWIR), and thermal infrared (TIR) spectroscopic observations to map impact‐exposed subsurface lithologies and stratigraphy on Earth or Mars. The topmost layer, three subsurface layers and undisturbed outcrops of the target sequence exposed just 10 km to the northeast of the 23 km diameter Haughton impact structure (Devon Island, Nunavut, Canada) were mapped as distinct spectral units using Landsat 7 ETM+ (VNIR/SWIR) and ASTER (VNIR/SWIR/TIR) multispectral images. Spectral mapping was accomplished by using standard image contrast‐stretching algorithms. Both spectral matching and deconvolution algorithms were applied to image‐derived ASTER TIR emissivity spectra using spectra from a library of laboratory‐measured spectra of minerals (Arizona State University) and whole‐rocks (Ward's). These identifications were made without the use of a priori knowledge from the field (i.e., a ""blind"" analysis). The results from this analysis suggest a sequence of dolomitic rock (in the crater rim), limestone (wall), gypsum‐rich carbonate (floor), and limestone again (central uplift). These matched compositions agree with the lithologic units and the pre‐impact stratigraphic sequence as mapped during recent field studies of the Haughton impact structure by Osinski et al. (2005a). Further confirmation of the identity of image‐derived spectra was obtained by matching these spectra with laboratory‐measured spectra of samples collected from Haughton. The results from the “blind” remote sensing methods used here suggest that these techniques can also be used to understand subsurface lithologies on Mars, where ground truth knowledge may not be generally available.",TRUE,acronym
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492756,R108130,Data used,R108176,AVIRIS,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,acronym
R146,Geology,R137045,"Integration of Raman, emission, and reflectance spectroscopy for earth and lunar mineralogy",S541396,R137047,Minerals identified (Lunar rock samples),L381239,KREEP,"Abstract. Spectroscopy plays a vital role in the identification and characterization of minerals on terrestrial and planetary surfaces. We review the three different spectroscopic techniques for characterizing minerals on the Earth and lunar surfaces separately. Seven sedimentary and metamorphic terrestrial rock samples were analyzed with three field-based spectrometers, i.e., Raman, Fourier transform infrared (FTIR), and visible to near infrared and shortwave infrared (Vis–NIR–SWIR) spectrometers. Similarly, a review of work done by previous researchers on lunar rock samples was also carried out for their Raman, Vis–NIR–SWIR, and thermal (mid-infrared) spectral responses. It has been found in both the cases that the spectral information such as Si-O-Si stretching (polymorphs) in Raman spectra, identification of impurities, Christiansen and Reststrahlen band center variation in mid-infrared spectra, location of elemental substitution, the content of iron, and shifting of the band center of diagnostic absorption features at 1 and 2 μm in reflectance spectra are contributing to the characterization and identification of terrestrial and lunar minerals. We show that quartz can be better characterized by considering silica polymorphs from Raman spectra, emission features in the range of 8 to 14 μm in FTIR spectra, and reflectance absorption features from Vis–NIR–SWIR spectra. KREEP materials from Apollo 12 and 14 samples are also better characterized using integrated spectroscopic studies. Integrated spectral responses facilitate comprehensive characterization and better identification of minerals. We suggest that Raman spectroscopy and visible and NIR-thermal spectroscopy are the best techniques to explore the Earth’s and lunar mineralogy.",TRUE,acronym
R136,Graphics,R8330,An ontology of scientific experiments,S12789,R8331,Ontology,R8332,EXPO,"The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science.",TRUE,acronym
R136,Graphics,R9524,An ontology of scientific experiments,S15253,R9525,Ontology,R9526,EXPO,"The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science.",TRUE,acronym
R136,Graphics,R9557,An ontology of scientific experiments,S15486,R9558,Ontology,R9559,EXPO,"The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science.",TRUE,acronym
R136,Graphics,R6457,Using Hierarchical Edge Bundles to visualize complex ontologies in GLOW,S7809,R6458,implementation,R6459,GLOW,"In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Protégé, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Protégé visualization plug-in Jambalaya.",TRUE,acronym
R136,Graphics,R6457,Using Hierarchical Edge Bundles to visualize complex ontologies in GLOW,S78004,R25717,System,L48829,GLOW ,"In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Protégé, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Protégé visualization plug-in Jambalaya.",TRUE,acronym
R136,Graphics,R6515,Formal Linked Data Visualization Model,S8008,R6516,implementation,R6517,LDVM,"Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyses on Linked Data.",TRUE,acronym
R136,Graphics,R6515,Formal Linked Data Visualization Model,S77674,R25679,System,L48608,LDVM,"Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyses on Linked Data.",TRUE,acronym
R136,Graphics,R6417,RDF data exploration and visualization,S7640,R6418,implementation,R6419,PGV,"We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the ""PGV explorer"" and b) the ""RDF pager"" module utilizing BRAHMS, our high-performance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.",TRUE,acronym
R136,Graphics,R6417,RDF data exploration and visualization,S77863,R25704,System,L48731,PGV,"We present Paged Graph Visualization (PGV), a new semi-autonomous tool for RDF data exploration and visualization. PGV consists of two main components: a) the ""PGV explorer"" and b) the ""RDF pager"" module utilizing BRAHMS, our high-performance main-memory RDF storage system. Unlike existing graph visualization techniques which attempt to display the entire graph and then filter out irrelevant data, PGV begins with a small graph and provides the tools to incrementally explore and visualize relevant data of very large RDF ontologies. We implemented several techniques to visualize and explore hot spots in the graph, i.e. nodes with large numbers of immediate neighbors. In response to the user-controlled, semantics-driven direction of the exploration, the PGV explorer obtains the necessary sub-graphs from the RDF pager and enables their incremental visualization leaving the previously laid out sub-graphs intact. We outline the problem of visualizing large RDF data sets, discuss our interface and its implementation, and through a controlled experiment we show the benefits of PGV.",TRUE,acronym
R136,Graphics,R8312,The Publishing Workflow Ontology (PWO),S12726,R8313,Ontology,R8314,PWO,". In this paper we introduce the Publishing Workflow Ontology ( PWO ), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible application of PWO in the publishing and legislative domains.",TRUE,acronym
R136,Graphics,R9515,The Publishing Workflow Ontology (PWO),S15190,R9516,Ontology,R9517,PWO,". In this paper we introduce the Publishing Workflow Ontology ( PWO ), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible application of PWO in the publishing and legislative domains.",TRUE,acronym
R136,Graphics,R9548,The Publishing Workflow Ontology (PWO),S15423,R9549,Ontology,R9550,PWO,". In this paper we introduce the Publishing Workflow Ontology ( PWO ), i.e., an OWL 2 DL ontology for the description of workflows that is particularly suitable for formalising typical publishing processes such as the publication of articles in journals. We support the presentation with a discussion of all the ontology design patterns that have been reused for modelling the main characteristics of publishing workflows. In addition, we present two possible application of PWO in the publishing and legislative domains.",TRUE,acronym
R136,Graphics,R8348,Research Articles in Simplified HTML: a Web-first format for HTML-based scholarly articles,S12967,R8349,Semantic representation,R8350,RASH,"Purpose This paper introduces the Research Articles in Simplified HTML (or RASH), which is a Web-first format for writing HTML-based scholarly papers; it is accompanied by the RASH Framework, a set of tools for interacting with RASH-based articles. The paper also presents an evaluation that involved authors and reviewers of RASH articles submitted to the SAVE-SD 2015 and SAVE-SD 2016 workshops. Design RASH has been developed aiming to: be easy to learn and use; share scholarly documents (and embedded semantic annotations) through the Web; support its adoption within the existing publishing workflow. Findings The evaluation study confirmed that RASH is ready to be adopted in workshops, conferences, and journals and can be quickly learnt by researchers who are familiar with HTML. Research Limitations The evaluation study also highlighted some issues in the adoption of RASH, and in general of HTML formats, especially by less technically savvy users. 
Moreover, additional tools are needed, e.g., for enabling additional conversions from/to existing formats such as OpenXML. Practical Implications RASH (and its Framework) is another step towards enabling the definition of formal representations of the meaning of the content of an article, facilitating its automatic discovery, enabling its linking to semantically related articles, providing access to data within the article in actionable form, and allowing integration of data between papers. Social Implications RASH addresses the intrinsic needs related to the various users of a scholarly article: researchers (focussing on its content), readers (experiencing new ways for browsing it), citizen scientists (reusing available data formally defined within it through semantic annotations), publishers (using the advantages of new technologies as envisioned by the Semantic Publishing movement). Value RASH helps authors to focus on the organisation of their texts, supports them in the task of semantically enriching the content of articles, and leaves all the issues about validation, visualisation, conversion, and semantic data extraction to the various tools developed within its Framework.",TRUE,acronym
R136,Graphics,R6523,Towards a Linked-Data based Visualization Wizard,S8062,R6524,implementation,R6525,LDVizWiz,"Datasets published in the LOD cloud are recommended to follow some best practice in order to be 4-5 stars Linked Data compliant. They can often be consumed and accessed by different means such as API access, bulk download or as linked data fragments, but most of the time, a SPARQL endpoint is also provided. While the LOD cloud keeps growing, having a quick glimpse of those datasets is getting harder and there is a need to develop new methods enabling to detect automatically what an arbitrary dataset is about and to recommend visualizations for data samples. We consider that ""a visualization is worth a million triples"", and in this paper, we propose a novel approach that mines the content of datasets and automatically generates visualizations. Our approach is directly based on the usage of SPARQL queries that will detect the important categories of a dataset and that will specifically consider the properties used by the objects which have been interlinked via owl:sameAs links. We then propose to associate type of visualization for those categories. We have implemented this approach into a so-called Linked Data Vizualization Wizard (LDVizWiz).",TRUE,acronym
R136,Graphics,R6453,LODWheel - JavaScript-based Visualization of RDF Data.,S7792,R6454,implementation,R6455,LODWheel,"Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper.",TRUE,acronym
R136,Graphics,R6507,LODWheel – JavaScript-based Visualization of RDF Data,S7960,R6508,implementation,R6509,LODWheel,"Visualizing Resource Description Framework (RDF) data to support decision-making processes is an important and challenging aspect of consuming Linked Data. With the recent development of JavaScript libraries for data visualization, new opportunities for Web-based visualization of Linked Data arise. This paper presents an extensive evaluation of JavaScript-based libraries for visualizing RDF data. A set of criteria has been devised for the evaluation and 15 major JavaScript libraries have been analyzed against the criteria. The two JavaScript libraries with the highest score in the evaluation acted as the basis for developing LODWheel (Linked Open Data Wheel) - a prototype for visualizing Linked Open Data in graphs and charts - introduced in this paper. This way of visualizing RDF data leads to a great deal of challenges related to data-categorization and connecting data resources together in new ways, which are discussed in this paper.",TRUE,acronym
R40,Immunology and Infectious Disease,R142246,Safety and Immunogenicity of Two RNA-Based Covid-19 Vaccine Candidates,S571597,R142248,Vaccine Name,R142253,BNT162b2,"Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections and the resulting disease, coronavirus disease 2019 (Covid-19), have spread to millions of persons worldwide. Multiple vaccine candidates are under development, but no vaccine is currently available. Interim safety and immunogenicity data about the vaccine candidate BNT162b1 in younger adults have been reported previously from trials in Germany and the United States. Methods In an ongoing, placebo-controlled, observer-blinded, dose-escalation, phase 1 trial conducted in the United States, we randomly assigned healthy adults 18 to 55 years of age and those 65 to 85 years of age to receive either placebo or one of two lipid nanoparticle–formulated, nucleoside-modified RNA vaccine candidates: BNT162b1, which encodes a secreted trimerized SARS-CoV-2 receptor–binding domain; or BNT162b2, which encodes a membrane-anchored SARS-CoV-2 full-length spike, stabilized in the prefusion conformation. The primary outcome was safety (e.g., local and systemic reactions and adverse events); immunogenicity was a secondary outcome. Trial groups were defined according to vaccine candidate, age of the participants, and vaccine dose level (10 μg, 20 μg, 30 μg, and 100 μg). In all groups but one, participants received two doses, with a 21-day interval between doses; in one group (100 μg of BNT162b1), participants received one dose. Results A total of 195 participants underwent randomization. In each of 13 groups of 15 participants, 12 participants received vaccine and 3 received placebo. BNT162b2 was associated with a lower incidence and severity of systemic reactions than BNT162b1, particularly in older adults. 
In both younger and older adults, the two vaccine candidates elicited similar dose-dependent SARS-CoV-2–neutralizing geometric mean titers, which were similar to or higher than the geometric mean titer of a panel of SARS-CoV-2 convalescent serum samples. Conclusions The safety and immunogenicity data from this U.S. phase 1 trial of two vaccine candidates in younger and older adults, added to earlier interim safety and immunogenicity data regarding BNT162b1 in younger adults from trials in Germany and the United States, support the selection of BNT162b2 for advancement to a pivotal phase 2–3 safety and efficacy evaluation. (Funded by BioNTech and Pfizer; ClinicalTrials.gov number, NCT04368728.)",TRUE,acronym
R40,Immunology and Infectious Disease,R142295,Phase 1 Assessment of the Safety and Immunogenicity of an mRNA-Lipid Nanoparticle Vaccine Candidate Against SARS-CoV-2 in Human Volunteers,S571863,R142296,Vaccine Name,R142298,CVnCoV,"There is an urgent need for vaccines to counter the COVID-19 pandemic due to infections with severe acute respiratory syndrome coronavirus (SARS-CoV-2). Evidence from convalescent sera and preclinical studies has identified the viral Spike (S) protein as a key antigenic target for protective immune responses. We have applied an mRNA-based technology platform, RNActive, to develop CVnCoV which contains sequence optimized mRNA coding for a stabilized form of S protein encapsulated in lipid nanoparticles (LNP). Following demonstration of protective immune responses against SARS-CoV-2 in animal models we performed a dose-escalation phase 1 study in healthy 18-60 year-old volunteers. This interim analysis shows that two doses of CVnCoV ranging from 2 μg to 12 μg per dose, administered 28 days apart were safe. No vaccine-related serious adverse events were reported. There were dose-dependent increases in frequency and severity of solicited systemic adverse events, and to a lesser extent of local reactions, but the majority were mild or moderate and transient in duration. Immune responses when measured as IgG antibodies against S protein or its receptor-binding domain (RBD) by ELISA, and SARS-CoV-2-virus neutralizing antibodies measured by micro-neutralization, displayed dose-dependent increases. Median titers measured in these assays two weeks after the second 12 μg dose were comparable to the median titers observed in convalescent sera from COVID-19 patients. Seroconversion (defined as a 4-fold increase over baseline titer) of virus neutralizing antibodies two weeks after the second vaccination occurred in all participants who received 12 μg doses. 
Preliminary results in the subset of subjects who were enrolled with known SARS-CoV-2 seropositivity at baseline show that CVnCoV is also safe and well tolerated in this population, and is able to boost the pre-existing immune response even at low dose levels. Based on these results, the 12 μg dose is selected for further clinical investigation, including a phase 2b/3 study that will investigate the efficacy, safety, and immunogenicity of the candidate vaccine CVnCoV.",TRUE,acronym
R40,Immunology and Infectious Disease,R142254,An mRNA Vaccine against SARS-CoV-2 — Preliminary Report,S571648,R142259,Vaccine Name,R142261,mRNA-1273,"Abstract Background The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) emerged in late 2019 and spread globally, prompting an international effort to accelerate development of a vaccine. The candidate vaccine mRNA-1273 encodes the stabilized prefusion SARS-CoV-2 spike protein. Methods We conducted a phase 1, dose-escalation, open-label trial including 45 healthy adults, 18 to 55 years of age, who received two vaccinations, 28 days apart, with mRNA-1273 in a dose of 25 μg, 100 μg, or 250 μg. There were 15 participants in each dose group. Results After the first vaccination, antibody responses were higher with higher dose (day 29 enzyme-linked immunosorbent assay anti–S-2P antibody geometric mean titer [GMT], 40,227 in the 25-μg group, 109,209 in the 100-μg group, and 213,526 in the 250-μg group). After the second vaccination, the titers increased (day 57 GMT, 299,751, 782,719, and 1,192,154, respectively). After the second vaccination, serum-neutralizing activity was detected by two methods in all participants evaluated, with values generally similar to those in the upper half of the distribution of a panel of control convalescent serum specimens. Solicited adverse events that occurred in more than half the participants included fatigue, chills, headache, myalgia, and pain at the injection site. Systemic adverse events were more common after the second vaccination, particularly with the highest dose, and three participants (21%) in the 250-μg dose group reported one or more severe adverse events. Conclusions The mRNA-1273 vaccine induced anti–SARS-CoV-2 immune responses in all participants, and no trial-limiting safety concerns were identified. These findings support further development of this vaccine. 
(Funded by the National Institute of Allergy and Infectious Diseases and others; mRNA-1273 ClinicalTrials.gov number, NCT04283461).",TRUE,acronym
R278,Information Science,R182018,AUPress: A Comparison of an Open Access University Press with Traditional Presses,S704096,R182019,statistical_methods,L475072,ANOVA,"This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way ANOVA analyses show that there is no significant difference in the ranking of printed books sold by AUPress in comparison with traditional university presses. However, AUPress, can demonstrate a significantly larger readership for its books as evidenced by the number of downloads of the open electronic versions.",TRUE,acronym
R278,Information Science,R73196,Persistent Identification of Instruments,S338942,R73205,Used by,R73209,BODC,"Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin für Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.",TRUE,acronym
R278,Information Science,R73154,DataCite: Lessons Learned on Persistent Identifiers for Research Data,S338719,R73155,uses identifier system,R73165,DOI,"Data are the infrastructure of science and they serve as the groundwork for scientific pursuits. Data publication has emerged as a game-changing breakthrough in scholarly communication. Data form the outputs of research but also are a gateway to new hypotheses, enabling new scientific insights and driving innovation. And yet stakeholders across the scholarly ecosystem, including practitioners, institutions, and funders of scientific research are increasingly concerned about the lack of sharing and reuse of research data. Across disciplines and countries, researchers, funders, and publishers are pushing for a more effective research environment, minimizing the duplication of work and maximizing the interaction between researchers. Availability, discoverability, and reproducibility of research outputs are key factors to support data reuse and make possible this new environment of highly collaborative research. An interoperable e-infrastructure is imperative in order to develop new platforms and services for data publication and reuse. DataCite has been working to establish and promote methods to locate, identify and share information about research data. Along with service development, DataCite supports and advocates for the standards behind persistent identifiers (in particular DOIs, Digital Object Identifiers) for data and other research outputs. Persistent identifiers allow different platforms to exchange information consistently and unambiguously and provide a reliable way to track citations and reuse. Because of this, data publication can become a reality from a technical standpoint, but the adoption of data publication and data citation as a practice by researchers is still in its early stages. 
Since 2009, DataCite has been developing a series of tools and services to foster the adoption of data publication and citation among the research community. Through the years, DataCite has worked in a close collaboration with interdisciplinary partners on these issues and we have gained insight into the development of data publication workflows. This paper describes the types of different actions and the lessons learned by DataCite.",TRUE,acronym
R278,Information Science,R145318,"Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE): Overview, Components, and Public Health Applications",S581712,R145327,Epidemiological surveillance software,R145331,ESSENCE,"Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. 
These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement–driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.",TRUE,acronym
R278,Information Science,R73196,Persistent Identification of Instruments,S338941,R73205,Used by,R73208,HZB,"Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin für Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.",TRUE,acronym
R278,Information Science,R70870,Microsoft Academic Graph: When experts are not enough,S337162,R70871,Database,L243404,MAG," An ongoing project explores the extent to which artificial intelligence (AI), specifically in the areas of natural language processing and semantic reasoning, can be exploited to facilitate the studies of science by deploying software agents equipped with natural language understanding capabilities to read scholarly publications on the web. The knowledge extracted by these AI agents is organized into a heterogeneous graph, called Microsoft Academic Graph (MAG), where the nodes and the edges represent the entities engaging in scholarly communications and the relationships among them, respectively. The frequently updated data set and a few software tools central to the underlying AI components are distributed under an open data license for research and commercial applications. This paper describes the design, schema, and technical and business motivations behind MAG and elaborates how MAG can be used in analytics, search, and recommendation scenarios. How AI plays an important role in avoiding various biases and human induced errors in other data sets and how the technologies can be further improved in the future are also discussed. ",TRUE,acronym
R278,Information Science,R70878,The MIPS mammalian protein–protein interaction database,S337244,R70879,Database,L243474,MIPS,SUMMARY The MIPS mammalian protein-protein interaction database (MPPI) is a new resource of high-quality experimental protein interaction data in mammals. The content is based on published experimental evidence that has been processed by human expert curators. We provide the full dataset for download and a flexible and powerful web interface for users with various requirements.,TRUE,acronym
R278,Information Science,R135998,A Hybrid Knowlegde-Based Approach for Recommending Massive Learning Activities,S538492,R136000,keywords,R136002,MOOC,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,acronym
R278,Information Science,R135998,A Hybrid Knowlegde-Based Approach for Recommending Massive Learning Activities,S538498,R136000,Development in,R135542,OWL,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,acronym
R278,Information Science,R136009,Ontology-Based Personalized Course Recommendation Framework,S538525,R136012,Development in,R135542,OWL,"Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items’ and the users’ profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. 
The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.",TRUE,acronym
R278,Information Science,R38074,OntoIMM: An Ontology for Product Intelligent Master Model,S125186,R38076,hasRepresentationMasterData,R38080,OWL,"Information organizing principle is one of the key issues of intelligent master model (IMM), which is an enhancement of the master model (MM) based on KBE (knowledge-based engineering). Despite the fact that the core product model (CPM) has been confirmed to be an organizing mechanism for product master model, the key issue of supporting the information organizing for IMM is not yet well addressed, mainly due to the following two reasons; (1) lack of representation of complete information and knowledge with regard to product and process, including the know-why, know-how, and know-what information and knowledge, and (2) lack of semantic richness. Therefore, a multiaspect extension to CPM was first defined, and then an ontology was constructed to represent the information and design knowledge. The extension refers to adding a design process model, context model, product control structure model, and design rationale model to CPM concerning the enhancement of master model, which is to comprehensively represent the reason, process, and result information and knowledge of the product. The ontology construction refers to representing the concepts, relationships among these concepts and consistency rules of IMM information structure. Finally, an example of barrel design and analysis process is illustrated to verify the effectiveness of proposed method.",TRUE,acronym
R278,Information Science,R146600,Coronavirus disease 2019 (COVID-19) surveillance system: Development of COVID-19 minimum data set and interoperable reporting framework,S586877,R146602,Epidemiological surveillance software,R146607,SNOMED,"INTRODUCTION: The 2019 coronavirus disease (COVID-19) is a major global health concern. Joint efforts for effective surveillance of COVID-19 require immediate transmission of reliable data. In this regard, a standardized and interoperable reporting framework is essential in a consistent and timely manner. Thus, this research aimed to determine data requirements towards interoperability. MATERIALS AND METHODS: In this cross-sectional and descriptive study, a combination of literature study and expert consensus approach was used to design COVID-19 Minimum Data Set (MDS). A MDS checklist was extracted and validated. The definitive data elements of the MDS were determined by applying the Delphi technique. Then, the existing messaging and data standard templates (Health Level Seven-Clinical Document Architecture [HL7-CDA] and SNOMED-CT) were used to design the surveillance interoperable framework. RESULTS: The proposed MDS was divided into administrative and clinical sections with three and eight data classes and 29 and 40 data fields, respectively. Then, for each data field, structured data values along with SNOMED-CT codes were defined and structured according to the HL7-CDA standard. DISCUSSION AND CONCLUSION: The absence of an effective and integrated system for COVID-19 surveillance can delay critical public health measures, leading to increased disease prevalence and mortality. The heterogeneity of reporting templates and lack of uniform data sets hamper the optimal information exchange among multiple systems. Thus, developing a unified and interoperable reporting framework is more effective to prompt reaction to the COVID-19 outbreak.",TRUE,acronym
R278,Information Science,R70868,The Cooperation Databank,S337145,R70869,Database,L243390,CoDa,"
Publishing studies using standardized, machine-readable formats will enable machines to perform meta-analyses on-demand. To build a semantically-enhanced technology that embodies these functions, we developed the Cooperation Databank (CoDa) – a databank that contains 2,641 studies on human cooperation (1958-2017) conducted in 78 countries involving 356,680 participants. Experts annotated these studies for 312 variables, including the quantitative results (13,959 effect sizes). We designed an ontology that defines and relates concepts in cooperation research and that can represent the relationships between individual study results. We have created a research platform that, based on the dataset, enables users to retrieve studies that test the relation of variables with cooperation, visualize these study results, and perform (1) meta-analyses, (2) meta-regressions, (3) estimates of publication bias, and (4) statistical power analyses for future studies. We leveraged the dataset with visualization tools that allow users to explore the ontology of concepts in cooperation research and to plot a citation network of the history of studies. CoDa offers a vision of how publishing studies in a machine-readable format can establish institutions and tools that improve scientific practices and knowledge.
",TRUE,acronym
R278,Information Science,R36010,"Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction",S328840,R69260,Dataset name,L239579,SciERC,"We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.",TRUE,acronym
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495060,R108654,Has method,R108666,xpath,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,acronym
R278,Information Science,R70864,"COVID-19 Knowledge Graph: a computable, multi-modal, cause-and-effect knowledge model of COVID-19 pathophysiology",S337105,R70865,Domain,L243356,COVID-19,"Abstract Summary The COVID-19 crisis has elicited a global response by the scientific community that has led to a burst of publications on the pathophysiology of the virus. However, without coordinated efforts to organize this knowledge, it can remain hidden away from individual research groups. By extracting and formalizing this knowledge in a structured and computable form, as in the form of a knowledge graph, researchers can readily reason and analyze this information on a much larger scale. Here, we present the COVID-19 Knowledge Graph, an expansive cause-and-effect network constructed from scientific literature on the new coronavirus that aims to provide a comprehensive view of its pathophysiology. To make this resource available to the research community and facilitate its exploration and analysis, we also implemented a web application and released the KG in multiple standard formats. Availability and implementation The COVID-19 Knowledge Graph is publicly available under CC-0 license at https://github.com/covid19kg and https://bikmi.covid19-knowledgespace.de. Supplementary information Supplementary data are available at Bioinformatics online.",TRUE,acronym
R278,Information Science,R73196,Persistent Identification of Instruments,S338903,R73205,uses identifier system,R73207,ePIC,"Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin für Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.",TRUE,acronym
R278,Information Science,R145901,"Evaluating the electronic tuberculosis register surveillance system in Eden District, Western Cape, South Africa, 2015",S584396,R145915,Epidemiological surveillance software,R145924,ETR.net,"ABSTRACT Background: Tuberculosis (TB) surveillance data are crucial to the effectiveness of National TB Control Programs. In South Africa, few surveillance system evaluations have been undertaken to provide a rigorous assessment of the platform from which the national and district health systems draws data to inform programs and policies. Objective: Evaluate the attributes of Eden District’s TB surveillance system, Western Cape Province, South Africa. Methods: Data quality, sensitivity and positive predictive value were assessed using secondary data from 40,033 TB cases entered in Eden District’s ETR.Net from 2007 to 2013, and 79 purposively selected TB Blue Cards (TBCs), a medical patient file and source document for data entered into ETR.Net. Simplicity, flexibility, acceptability, stability and usefulness of the ETR.Net were assessed qualitatively through interviews with TB nurses, information health officers, sub-district and district coordinators involved in the TB surveillance. Results: TB surveillance system stakeholders report that Eden District’s ETR.Net system was simple, acceptable, flexible and stable, and achieves its objective of informing TB control program, policies and activities. Data were less complete in the ETR.Net (66–100%) than in the TBCs (76–100%), and concordant for most variables except pre-treatment smear results, antiretroviral therapy (ART) and treatment outcome. The sensitivity of recorded variables in ETR.Net was 98% for gender, 97% for patient category, 93% for ART, 92% for treatment outcome and 90% for pre-treatment smear grading. Conclusions: Our results reveal that the system provides useful information to guide TB control program activities in Eden District. 
However, urgent attention is needed to address gaps in clinical recording on the TBC and data capturing into the ETR.Net system. We recommend continuous training and support of TB personnel involved with TB care, management and surveillance on TB data recording into the TBCs and ETR.Net as well as the implementation of a well-structured quality control and assurance system.",TRUE,acronym
R278,Information Science,R70905,TeKET: a Tree-Based Unsupervised Keyphrase Extraction Technique ,S337454,R70907,Dataset name,L243519,SemEval-2010,"Abstract Automatic keyphrase extraction techniques aim to extract quality keyphrases for higher level summarization of a document. Majority of the existing techniques are mainly domain-specific, which require application domain knowledge and employ higher order statistical methods, and computationally expensive and require large train data, which is rare for many applications. Overcoming these issues, this paper proposes a new unsupervised keyphrase extraction technique. The proposed unsupervised keyphrase extraction technique, named TeKET or Tree-based Keyphrase Extraction Technique , is a domain-independent technique that employs limited statistical knowledge and requires no train data. This technique also introduces a new variant of a binary tree, called KeyPhrase Extraction ( KePhEx ) tree, to extract final keyphrases from candidate keyphrases. In addition, a measure, called Cohesiveness Index or CI , is derived which denotes a given node’s degree of cohesiveness with respect to the root. The CI is used in flexibly extracting final keyphrases from the KePhEx tree and is co-utilized in the ranking process. The effectiveness of the proposed technique and its domain and language independence are experimentally evaluated using available benchmark corpora, namely SemEval-2010 (a scientific articles dataset), Theses100 (a thesis dataset), and a German Research Article dataset, respectively. The acquired results are compared with other relevant unsupervised techniques belonging to both statistical and graph-based techniques. The obtained results demonstrate the improved performance of the proposed technique over other compared techniques in terms of precision, recall, and F1 scores.",TRUE,acronym
R278,Information Science,R46321,Fast and accurate entity recognition with iterated dilated convolutions,S141630,R46323,has research problem,R46372,NER,"Today when many practitioners run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs. Recent advances in GPU hardware have led to the emergence of bi-directional LSTMs as a standard method for obtaining per-token vector representations serving as input to labeling tasks such as NER (often followed by prediction in a linear-chain CRF). Though expressive and accurate, these models fail to fully exploit GPU parallelism, limiting their computational efficiency. This paper proposes a faster alternative to Bi-LSTMs for NER: Iterated Dilated Convolutional Neural Networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, ID-CNNs permit fixed-depth convolutions to run in parallel across entire documents. We describe a distinct combination of network structure, parameter sharing and training procedures that enable dramatic 14-20x test-time speedups while retaining accuracy comparable to the Bi-LSTM-CRF. Moreover, ID-CNNs trained to aggregate context from the entire document are more accurate than Bi-LSTM-CRFs while attaining 8x faster test time speeds.",TRUE,acronym
R278,Information Science,R70328,Using DevOps Principles to Continuously Monitor RDF Data Quality,S334112,R70330,has research problem,R5048,RDF,"One approach to continuously achieve a certain data quality level is to use an integration pipeline that continuously checks and monitors the quality of a data set according to defined metrics. This approach is inspired by Continuous Integration pipelines, that have been introduced in the area of software development and DevOps to perform continuous source code checks. By investigating possible tools to use and discussing the specific requirements for RDF data sets, an integration pipeline is derived that joins current approaches of the areas of software-development and semantic-web as well as reuses existing tools. As these tools have not been built explicitly for CI usage, we evaluate their usability and propose possible workarounds and improvements. Furthermore, a real-world usage scenario is discussed, outlining the benefit of the usage of such a pipeline.",TRUE,acronym
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327346,R68933,has research problem,R68944,DBPedia,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,acronym
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662993,R166456,Relation types,R166469,URL,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,acronym
R12,Life Sciences,R78061,Estimative of real number of infections by COVID-19 in Brazil and possible scenarios,S353647,R78063,Used models,L251299,SEIRD,"Abstract This paper attempts to provide methods to estimate the real scenario of the novel coronavirus pandemic crisis on Brazil and the states of Sao Paulo, Pernambuco, Espirito Santo, Amazonas and Distrito Federal. By the use of a SEIRD mathematical model with age division, we predict the infection and death curve, stating the peak date for Brazil and these states. We also carry out a prediction for the ICU demand on these states for a visualization of the size of a possible collapse on the local health system. By the end, we establish some future scenarios including the stopping of social isolation and the introduction of vaccines and efficient medicine against the virus.",TRUE,acronym
R12,Life Sciences,R150566,Supplementation with vitamin D in the COVID-19 pandemic?,S603710,R150569,has research problem,R109783,COVID-19,"Abstract The coronavirus disease 2019 (COVID-19) pandemic was declared a public health emergency of international concern by the World Health Organization. COVID-19 has high transmissibility and could result in acute lung injury in a fraction of patients. By counterbalancing the activity of the renin-angiotensin system, angiotensin-converting enzyme 2, which is the fusion receptor of the virus, plays a protective role against the development of complications of this viral infection. Vitamin D can induce the expression of angiotensin-converting enzyme 2 and regulate the immune system through different mechanisms. Epidemiologic studies of the relationship between vitamin D and various respiratory infections were reviewed and, here, the postulated mechanisms and clinical data supporting the protective role of vitamin D against COVID-19–mediated complications are discussed.",TRUE,acronym
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635017,R159430,keywords,R159431,DEHB,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,acronym
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635020,R159430,keywords,R159434,HPO,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,acronym
R112125,Machine Learning,R140156,OWL2Vec*: Embedding of OWL Ontologies,S559999,R140158,Type of data,R139373,OWL,"Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named , which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, often significantly outperforms the state-of-the-art methods in our experiments.",TRUE,acronym
R112125,Machine Learning,R140183,Bio-joie: Joint representation learning of biological knowledge bases,S560041,R140185,Type of data,R139373,OWL,"The widespread of Coronavirus has led to a worldwide pandemic with a high mortality rate. Currently, the knowledge accumulated from different studies about this virus is very limited. Leveraging a wide-range of biological knowledge, such as gene on-tology and protein-protein interaction (PPI) networks from other closely related species presents a vital approach to infer the molecular impact of a new species. In this paper, we propose the transferred multi-relational embedding model Bio-JOIE to capture the knowledge of gene ontology and PPI networks, which demonstrates superb capability in modeling the SARS-CoV-2-human protein interactions. Bio-JOIE jointly trains two model components. The knowledge model encodes the relational facts from the protein and GO domains into separated embedding spaces, using a hierarchy-aware encoding technique employed for the GO terms. On top of that, the transfer model learns a non-linear transformation to transfer the knowledge of PPIs and gene ontology annotations across their embedding spaces. By leveraging only structured knowledge, Bio-JOIE significantly outperforms existing state-of-the-art methods in PPI type prediction on multiple species. Furthermore, we also demonstrate the potential of leveraging the learned representations on clustering proteins with enzymatic function into enzyme commission families. Finally, we show that Bio-JOIE can accurately identify PPIs between the SARS-CoV-2 proteins and human proteins, providing valuable insights for advancing research on this new disease.",TRUE,acronym
R112125,Machine Learning,R140245,Onto2vec: joint vector-based representation of biological entities and their ontology-based annotations,S559974,R140247,Type of data,R139373,OWL,"Motivation Biological knowledge is widely represented in the form of ontology‐based annotations: ontologies describe the phenomena assumed to exist within a domain, and the annotations associate a (kind of) biological entity with a set of phenomena within the domain. The structure and information contained in ontologies and their annotations make them valuable for developing machine learning, data analysis and knowledge extraction algorithms; notably, semantic similarity is widely used to identify relations between biological entities, and ontology‐based annotations are frequently used as features in machine learning applications. Results We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity‐based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering. To evaluate Onto2Vec, we use the gene ontology (GO) and jointly produce dense vector representations of proteins, the GO classes to which they are annotated, and the axioms in GO that constrain these classes. First, we demonstrate that Onto2Vec‐generated feature vectors can significantly improve prediction of protein‐protein interactions in human and yeast. We then illustrate how Onto2Vec representations provide the means for constructing data‐driven, trainable semantic similarity measures that can be used to identify particular relations between proteins. Finally, we use an unsupervised clustering approach to identify protein families based on their Enzyme Commission numbers. 
Our results demonstrate that Onto2Vec can generate high quality feature vectors from biological entities and ontologies. Onto2Vec has the potential to significantly outperform the state‐of‐the‐art in several predictive applications in which ontologies are involved. Availability and implementation https://github.com/bio‐ontology‐research‐group/onto2vec",TRUE,acronym
R126,Materials Chemistry,R146812,"π-Bridge-Independent 2-(Benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile-Substituted Nonfullerene Acceptors for Efficient Bulk Heterojunction Solar Cells",S587859,R146824,Acceptor,R146825,CBM,"Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer–fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the π-bridge that links the two electron-deficient BM end groups. With estimated...",TRUE,acronym
R126,Materials Chemistry,R146812,"π-Bridge-Independent 2-(Benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile-Substituted Nonfullerene Acceptors for Efficient Bulk Heterojunction Solar Cells",S587879,R146826,Acceptor,R146827,CDTBM,"Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer–fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the π-bridge that links the two electron-deficient BM end groups. With estimated...",TRUE,acronym
R126,Materials Chemistry,R146865,A simple small molecule as an acceptor for fullerene-free organic solar cells with efficiency near 8%,S588040,R146868,Acceptor,R146869,DICTF,"A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of ∼60%. Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.",TRUE,acronym
R126,Materials Chemistry,R146812,"π-Bridge-Independent 2-(Benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile-Substituted Nonfullerene Acceptors for Efficient Bulk Heterojunction Solar Cells",S587838,R146814,Acceptor,R146815,FBM,"Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer–fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the π-bridge that links the two electron-deficient BM end groups. With estimated...",TRUE,acronym
R126,Materials Chemistry,R146794,A Rhodanine Flanked Nonfullerene Acceptor for Solution-Processed Organic Photovoltaics,S587738,R146795,Acceptor,R146796,FBR,"A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.",TRUE,acronym
R126,Materials Chemistry,R148630,Naphthodithiophene‐Based Nonfullerene Acceptor for High‐Performance Organic Photovoltaics: Effect of Extended Conjugation,S595867,R148632,Donor,R148244,FTAZ,"Naphtho[1,2‐b:5,6‐b′]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron‐withdrawing 2‐(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐inden‐1‐ylidene)malononitrile to yield a fused‐ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene‐based IHIC2, naphthodithiophene‐based IOIC2 with a larger π‐conjugation and a stronger electron‐donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: −3.78 eV vs IHIC2: −3.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 × 10−3 cm2 V−1 s−1 vs IHIC2: 5.0 × 10−4 cm2 V−1 s−1). Thus, IOIC2‐based OSCs show higher values in open‐circuit voltage, short‐circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2‐based counterpart. In particular, as‐cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8‐diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2‐based devices, higher than that of the FTAZ:IHIC2‐based devices (7.31%). These results indicate that incorporating extended conjugation into the electron‐donating fused‐ring units in nonfullerene acceptors is a promising strategy for designing high‐performance electron acceptors.",TRUE,acronym
R126,Materials Chemistry,R146888,High-performance fullerene-free polymer solar cells with 6.31% efficiency,S593159,R146891,Acceptor,R147878,IEIC,"A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-b:5,6-b′]dithiophene and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile was designed and synthesized, and fullerene-free polymer solar cells based on the IEIC acceptor showed power conversion efficiencies of up to 6.31%.",TRUE,acronym
R126,Materials Chemistry,R148606,Fused Hexacyclic Nonfullerene Acceptor with Strong Near‐Infrared Absorption for Semitransparent Organic Solar Cells with 9.77% Efficiency,S595768,R148607,Acceptor,R148613,IHIC,"A fused hexacyclic electron acceptor, IHIC, based on strong electron‐donating group dithienocyclopentathieno[3,2‐b]thiophene flanked by strong electron‐withdrawing group 1,1‐dicyanomethylene‐3‐indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST‐OSCs). IHIC exhibits strong near‐infrared absorption with extinction coefficients of up to 1.6 × 105m−1 cm−1, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 × 10−3 cm2 V−1 s−1. The ST‐OSCs based on blends of a narrow‐bandgap polymer donor PTB7‐Th and narrow‐bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single‐junction and tandem ST‐OSCs reported in the literature.",TRUE,acronym
R126,Materials Chemistry,R146842,Push–Pull Type Non-Fullerene Acceptors for Polymer Solar Cells: Effect of the Donor Core,S587924,R146845,Acceptor,R146847,ITDI,"There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs). After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical condition (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.",TRUE,acronym
R126,Materials Chemistry,R148537,"A Twisted Thieno[3,4-b]thiophene-Based Electron Acceptor Featuring a 14-π-Electron Indenoindene Core for High-Performance Organic Photovoltaics",S595564,R148539,Acceptor,R148546,NITI,"With an indenoindene core, a new thieno[3,4‐b]thiophene‐based small‐molecule electron acceptor, 2,2′‐((2Z,2′Z)‐((6,6′‐(5,5,10,10‐tetrakis(2‐ethylhexyl)‐5,10‐dihydroindeno[2,1‐a]indene‐2,7‐diyl)bis(2‐octylthieno[3,4‐b]thiophene‐6,4‐diyl))bis(methanylylidene))bis(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐indene‐2,1‐diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12‐π‐electron fluorene, a carbon‐bridged biphenylene with an axial symmetry, indenoindene, a carbon‐bridged E‐stilbene with a centrosymmetry, shows elongated π‐conjugation with 14 π‐electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 × 105m−1 cm−1 in solution. By matching NITI with a large‐bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene‐free organic photovoltaics and is inspiring for the design of new electron acceptors.",TRUE,acronym
R126,Materials Chemistry,R148232,Enhancing Performance of Nonfullerene Acceptors via Side‐Chain Conjugation Strategy,S594327,R148234,Acceptor,R148243,ITIC2,"A side‐chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side‐chain‐conjugated acceptor (ITIC2) based on a 4,8‐bis(5‐(2‐ethylhexyl)thiophen‐2‐yl)benzo[1,2‐b:4,5‐b′]di(cyclopenta‐dithiophene) electron‐donating core and 1,1‐dicyanomethylene‐3‐indanone electron‐withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 × 105m−1 cm−1, higher than that of ITIC1 (1.5 × 105m−1 cm−1). ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (−5.43 eV) and lowest unoccupied molecular orbital (LUMO) (−3.80 eV) energy levels relative to ITIC1 (HOMO: −5.48 eV; LUMO: −3.84 eV), and higher electron mobility (1.3 × 10−3 cm2 V−1 s−1) than that of ITIC1 (9.6 × 10−4 cm2 V−1 s−1). The power conversion efficiency of ITIC2‐based organic solar cells is 11.0%, much higher than that of ITIC1‐based control devices (8.54%). Our results demonstrate that side‐chain conjugation can tune energy levels, enhance absorption, and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.",TRUE,acronym
R126,Materials Chemistry,R147918,High-Performance Electron Acceptor with Thienyl Side Chains for Organic Photovoltaics,S593310,R147931,Acceptor,R147928,ITIC-Th,"We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the σ-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 × 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 × 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.",TRUE,acronym
R126,Materials Chemistry,R146997,Enhancing the Performance of Organic Solar Cells by Hierarchically Supramolecular Self-Assembly of Fused-Ring Electron Acceptors,S593036,R146999,Acceptor,R147814,ITOIC-2F,"Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the π–π stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...",TRUE,acronym
R126,Materials Chemistry,R146924,Nonfullerene Polymer Solar Cells Based on a Main-Chain Twisted Low-Bandgap Acceptor with Power Conversion Efficiency of 13.2%,S593088,R146928,Donor,R147846,J52,"A new acceptor–donor–acceptor-structured nonfullerene acceptor, 2,2′-((2Z,2′Z)-(((4,4,9,9-tetrakis(4-hexylphenyl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b′]dithiophene-2,7-diyl)bis(4-((2-ethylhexyl)oxy)thiophene-4,3-diyl))bis(methanylylidene))bis(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene))dimalononitrile (i-IEICO-4F), is designed and synthesized via main-chain substituting position modification of 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene)dimalononitrile. Unlike its planar analogue IEICO-4F with strong absorption in the near-infrared region, i-IEICO-4F exhibits a twisted main-chain configuration, resulting in 164 nm blue shifts and leading to complementary absorption with the wide-bandgap polymer (J52). A high solution molar extinction coefficient of 2.41 × 105 M–1 cm–1, and sufficiently high energy of charge-transfer excitons of 1.15 eV in a J52:i-IEICO-4F blend were observed, in comparison with those of 2.26 × 105 M–1 cm–1 and 1.08 eV for IEICO-4F. A power conversion efficiency of...",TRUE,acronym
R126,Materials Chemistry,R147898,Side-Chain Isomerization on an n-type Organic Semiconductor ITIC Acceptor Makes 11.77% High Efficiency Polymer Solar Cells,S593250,R147899,Donor,R147908,J61,"Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.",TRUE,acronym
R126,Materials Chemistry,R147944,A near-infrared non-fullerene electron acceptor for high performance polymer solar cells,S593379,R147951,Donor,R147960,J71,"Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)–HOMO offset (ΔEH) in the blends was as low as 0.10 eV, suggesting that such a small ΔEH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.",TRUE,acronym
R126,Materials Chemistry,R148246,"Design, synthesis, and structural characterization of the first dithienocyclopentacarbazole-based n-type organic semiconductor and its application in non-fullerene polymer solar cells",S594367,R148250,Donor,R147879,PTB7-Th,"Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended π-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC–IC) with a well-defined A–D–A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC–IC has strong light-capturing ability in the range of 500–720 nm and exhibits an impressively high molar absorption coefficient of 2.24 × 105 M−1 cm−1 at 669 nm owing to effective intramolecular charge transfer and a strong D–A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC–IC are −5.50 and −3.87 eV, respectively. More importantly, a high electron mobility of 2.17 × 10−3 cm2 V−1 s−1 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (μe ∼ 10−3 cm2 V−1 s−1). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC–IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.",TRUE,acronym
R126,Materials Chemistry,R148663,Dithienopicenocarbazole-Based Acceptors for Efficient Organic Solar Cells with Optoelectronic Response Over 1000 nm and an Extremely Low Energy Loss,S595984,R148666,Donor,R147879,PTB7-Th,"Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular π-π stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.",TRUE,acronym
R126,Materials Chemistry,R147944,A near-infrared non-fullerene electron acceptor for high performance polymer solar cells,S593377,R147951,Acceptor,R147959,BT-IC,"Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)–HOMO offset (ΔEH) in the blends was as low as 0.10 eV, suggesting that such a small ΔEH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.",TRUE,acronym
R126,Materials Chemistry,R141708,"N,S co-doped carbon dots as a stable bio-imaging probe for detection of intracellular temperature and tetracycline",S568293,R141713,precursors,R141714,C3N3S3,"Stable bioimaging with nanomaterials in living cells has been a great challenge and of great importance for understanding intracellular events and elucidating various biological phenomena. Herein, we demonstrate that N,S co-doped carbon dots (N,S-CDs) produced by one-pot reflux treatment of C3N3S3 with ethane diamine at a relatively low temperature (80 °C) exhibit a high fluorescence quantum yield of about 30.4%, favorable biocompatibility, low-toxicity, strong resistance to photobleaching and good stability. The N,S-CDs as an effective temperature indicator exhibit good temperature-dependent fluorescence with a sensational linear response from 20 to 80 °C. In addition, the obtained N,S-CDs facilitate high selectivity detection of tetracycline (TC) with a detection limit as low as 3 × 10-10 M and a wide linear range from 1.39 × 10-5 to 1.39 × 10-9 M. More importantly, the N,S-CDs display an unambiguous bioimaging ability in the detection of intracellular temperature and TC with satisfactory results.",TRUE,acronym
R126,Materials Chemistry,R141661,Fluorescent N-Doped Carbon Dots as in Vitro and in Vivo Nanothermometer,S567958,R141663,precursors,R141665,C3N4,"The fluorescent N-doped carbon dots (N-CDs) obtained from C3N4 emit strong blue fluorescence, which is stable with different ionic strengths and time. The fluorescence intensity of N-CDs decreases with the temperature increasing, while it can recover to the initial one with the temperature decreasing. It is an accurate linear response of fluorescence intensity to temperature, which may be attributed to the synergistic effect of abundant oxygen-containing functional groups and hydrogen bonds. Further experiments also demonstrate that N-CDs can serve as effective in vitro and in vivo fluorescence-based nanothermometer.",TRUE,acronym
R126,Materials Chemistry,R41138,Electrodeposition of crystalline and photoactive silicon directly from silicon dioxide nanoparticles in molten CaCl2,S130524,R41139,electrolyte,L79314,CaCl2,"Silicon is a widely used semiconductor for electronic and photovoltaic devices because of its earth-abundance, chemical stability, and the tunable electrical properties by doping. Therefore, the production of pure silicon films by simple and inexpensive methods has been the subject of many investigations. The desire for lower-cost silicon-based solar photovoltaic devices has encouraged the quest for solar-grade silicon production through processes alternative to the currently used Czochralski process or other processes. Electrodeposition is one of the least expensive methods for fabricating films of metals and semiconductors. Electrodeposition of silicon has been studied for over 30 years, in various solution media such as molten salts (LiF-KF-K2SiF6 at 745 °C and BaO-SiO2-BaF2 at 1465 °C), organic solvents (acetonitrile, tetrahydrofuran), and room-temperature ionic liquids. Recently, the direct electrochemical reduction of bulk solid silicon dioxide in a CaCl2 melt was reported. [7] A key factor for silicon electrodeposition is the purity of the silicon deposit because Si for use in photovoltaic devices is solar-grade silicon (> 99.9999% or 6N) and its grade is even higher in electronic devices (electronic-grade silicon or 11N). In most cases, the electrodeposited silicon does not meet these requirements without further purification and, to our knowledge, none have been shown to exhibit a photoresponse. In fact, silicon electrodeposition is not as straightforward as metal deposition, since the deposited semiconductor layer is resistive at room temperature, which complicates electron transfer through the deposit. In many cases, for example in room-temperature aprotic solvents, the deposited silicon acts as an insulating layer and prevents a continuous deposition reaction. In some cases, the silicon deposit contains a high level of impurities (> 2%). Moreover, the nucleation and growth of silicon requires a large amount of energy. The deposition is made even more challenging if the Si precursor is SiO2, which is a very resistive material. We reported previously the electrochemical formation of silicon on molybdenum from a CaCl2 molten salt (850 °C) containing a SiO2 nanoparticle (NP with a diameter of 5–15 nm) suspension by applying a constant reduction current. However, this Si film did not show photoactivity. Here we show the electrodeposition of photoactive crystalline silicon directly from SiO2 NPs from CaCl2 molten salt on a silver electrode that shows a clear photoresponse. To the best of our knowledge, this is a first report of the direct electrodeposition of photoactive silicon. The electrochemical reduction and the cyclic voltammetry (CV) of SiO2 were investigated as described previously. [8] In this study, we found that the replacement of the Mo substrate by silver leads to a dramatic change in the properties of the silicon deposit. The silver substrate exhibited essentially the same electrochemical and CV behavior as other metal substrates, that is, a high reduction current for SiO2 at negative potentials of −1.0 V with the development of a new redox couple near −0.65 V vs. a graphite quasireference electrode (QRE) (Figure 1a). Figure 1b shows a change in the reduction current as a function of the reduction potential, and the optical images of silver electrodes before and after the electrolysis, which displays a dark gray-colored deposit after the reduction. Figure 2 shows SEM images of silicon deposits grown potentiostatically (−1.25 V vs. graphite QRE) on silver. The amount of silicon deposit increased with the deposition time, and the deposit finally covered the whole silver surface (Figure 2). High-magnification images show that the silicon deposit is not a film but rather platelets or clusters of silicon crystals of domain sizes in the range of tens of micrometers. The average height of the platelets was around 25 μm after a 10,000 s deposition (Figure 2b), and 45 μm after a 20,000 s deposition (Figure 2c), respectively. The edges of the silicon crystals were clearly observed. Contrary to other substrates, silver enhanced the crystallization of silicon produced from silicon dioxide reduction and it is known that silver induces the crystallization of amorphous silicon. Energy-dispersive spectrometry (EDS) elemental mapping (images shown in the bottom row of Figure 2) revealed that small silver islands exist on the top of the silicon deposits, which we think is closely related to the growth mechanism of silicon on silver. The EDS spectrum of the silicon deposit (Figure 3a) suggested that the deposited silicon was quite pure and the amounts of other elements such as C, Ca, and Cl were below the detection limit (about 0.1 atom%). Since the oxygen signal was probably from the native oxide formed on exposure of the deposit to air and silicon does not form an alloy with silver, the purity of silicon was estimated to be at least 99.9 atom%. The successful reduction of Si(4+) in silicon dioxide to elemental silicon (Si) was confirmed by X-ray photoelectron spectroscopy (XPS) of the silicon deposit.",TRUE,acronym
R126,Materials Chemistry,R41144,Up-scalable and controllable electrolytic production of photo-responsive nanostructured silicon,S130581,R41145,electrolyte,L79356,CaCl2,"The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 °C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between −0.60 and −0.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties.",TRUE,acronym
R126,Materials Chemistry,R148663,Dithienopicenocarbazole-Based Acceptors for Efficient Organic Solar Cells with Optoelectronic Response Over 1000 nm and an Extremely Low Energy Loss,S595982,R148666,Acceptor,R148675,DTPC-DFIC,"Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular π-π stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.",TRUE,acronym
R126,Materials Chemistry,R146907,Non-fullerene polymer solar cells based on a selenophene-containing fused-ring acceptor with photovoltaic performance of 8.6%,S593134,R146909,Acceptor,R147867,IDSe-T-IC,"In this work, we present a non-fullerene electron acceptor bearing a fused five-heterocyclic ring containing selenium atoms, denoted as IDSe-T-IC, for fullerene-free polymer solar cells (PSCs).",TRUE,acronym
R126,Materials Chemistry,R148630,Naphthodithiophene‐Based Nonfullerene Acceptor for High‐Performance Organic Photovoltaics: Effect of Extended Conjugation,S595865,R148632,Acceptor,R148639,IOIC2,"Naphtho[1,2‐b:5,6‐b′]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron‐withdrawing 2‐(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐inden‐1‐ylidene)malononitrile to yield a fused‐ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene‐based IHIC2, naphthodithiophene‐based IOIC2 with a larger π‐conjugation and a stronger electron‐donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: −3.78 eV vs IHIC2: −3.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 × 10−3 cm2 V−1 s−1 vs IHIC2: 5.0 × 10−4 cm2 V−1 s−1). Thus, IOIC2‐based OSCs show higher values in open‐circuit voltage, short‐circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2‐based counterpart. In particular, as‐cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8‐diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2‐based devices, higher than that of the FTAZ:IHIC2‐based devices (7.31%). These results indicate that incorporating extended conjugation into the electron‐donating fused‐ring units in nonfullerene acceptors is a promising strategy for designing high‐performance electron acceptors.",TRUE,acronym
R126,Materials Chemistry,R146779,A Solution-Processable Electron Acceptor Based on Dibenzosilole and Diketopyrrolopyrrole for Organic Solar Cells,S587673,R146781,Donor,R146786,P3HT,"Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and flexibility advantages. [1–7] Much attention has been focused on the development of OSCs which have seen a dramatic rise in efficiency over the last decade, and the encouraging power conversion efficiency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [8] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C61 butyric acid methyl ester (PC61BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affinity and isotropy of charge transport. [9] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC61BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages (VOC). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. Recently, non-fullerene small molecule acceptors have been developed. [10,11] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [12–19] and only one paper reported PCEs over 2%. [16]",TRUE,acronym
R126,Materials Chemistry,R146794,A Rhodanine Flanked Nonfullerene Acceptor for Solution-Processed Organic Photovoltaics,S587740,R146795,Donor,R146786,P3HT,"A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.",TRUE,acronym
R126,Materials Chemistry,R146997,Enhancing the Performance of Organic Solar Cells by Hierarchically Supramolecular Self-Assembly of Fused-Ring Electron Acceptors,S593037,R146999,Donor,R147830,PBDB-T,"Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the π–π stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550711,R138609,keywords,L387536,GMO,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505289,R110815,has cell line,R110967,MDA-MB-231,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550733,R138609,has cell line,R110967,MDA-MB-231,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550712,R138609,keywords,R110967,MDA-MB-231,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141393,Chaperna-mediated assembly of ferritin-based Middle East respiratory syndrome-coronavirus nanoparticles,S565807,R141394,Virus,L397095,MERS-CoV,"The folding of monomeric antigens and their subsequent assembly into higher ordered structures are crucial for robust and effective production of nanoparticle (NP) vaccines in a timely and reproducible manner. Despite significant advances in in silico design and structure-based assembly, most engineered NPs are refractory to soluble expression and fail to assemble as designed, presenting major challenges in the manufacturing process. The failure is due to a lack of understanding of the kinetic pathways and enabling technical platforms to ensure successful folding of the monomer antigens into regular assemblages. Capitalizing on a novel function of RNA as a molecular chaperone (chaperna: chaperone + RNA), we provide a robust protein-folding vehicle that may be implemented to NP assembly in bacterial hosts. The receptor-binding domain (RBD) of Middle East respiratory syndrome-coronavirus (MERS-CoV) was fused with the RNA-interaction domain (RID) and bacterioferritin, and expressed in Escherichia coli in a soluble form. Site-specific proteolytic removal of the RID prompted the assemblage of monomers into NPs, which was confirmed by electron microscopy and dynamic light scattering. The mutations that affected the RNA binding to RBD significantly increased the soluble aggregation into amorphous structures, reducing the overall yield of NPs of a defined size. This underscored the RNA-antigen interactions during NP assembly. The sera after mouse immunization effectively interfered with the binding of MERS-CoV RBD to the cellular receptor hDPP4. The results suggest that RNA-binding controls the overall kinetic network of the antigen folding pathway in favor of enhanced assemblage of NPs into highly regular and immunologically relevant conformations. The concentration of the ion Fe2+, salt, and fusion linker also contributed to the assembly in vitro, and the stability of the NPs. The kinetic “pace-keeping” role of chaperna in the super molecular assembly of antigen monomers holds promise for the development and delivery of NPs and virus-like particles as recombinant vaccines and for serological detection of viral infections.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141395,Enhanced Ability of Oligomeric Nanobodies Targeting MERS Coronavirus Receptor-Binding Domain,S565829,R141396,Virus,L397114,MERS-CoV,"Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV), an infectious coronavirus first reported in 2012, has a mortality rate greater than 35%. Therapeutic antibodies are key tools for preventing and treating MERS-CoV infection, but to date no such agents have been approved for treatment of this virus. Nanobodies (Nbs) are camelid heavy chain variable domains with properties distinct from those of conventional antibodies and antibody fragments. We generated two oligomeric Nbs by linking two or three monomeric Nbs (Mono-Nbs) targeting the MERS-CoV receptor-binding domain (RBD), and compared their RBD-binding affinity, RBD–receptor binding inhibition, stability, and neutralizing and cross-neutralizing activity against MERS-CoV. Relative to Mono-Nb, dimeric Nb (Di-Nb) and trimeric Nb (Tri-Nb) had significantly greater ability to bind MERS-CoV RBD proteins with or without mutations in the RBD, thereby potently blocking RBD–MERS-CoV receptor binding. The engineered oligomeric Nbs were very stable under extreme conditions, including low or high pH, protease (pepsin), chaotropic denaturant (urea), and high temperature. Importantly, Di-Nb and Tri-Nb exerted significantly elevated broad-spectrum neutralizing activity against at least 19 human and camel MERS-CoV strains isolated in different countries and years. Overall, the engineered Nbs could be developed into effective therapeutic agents for prevention and treatment of MERS-CoV infection.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141415,Development of Label-Free Colorimetric Assay for MERS-CoV Using Gold Nanoparticles,S566043,R141416,Virus,L397298,MERS-CoV,"Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV–vis wavelength range. We designed a pair of thiol-modified probes at either the 5′ end or 3′ end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. This colorimetric assay could discriminate down to 1 pmol/μL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141417,"Multiplex Paper-Based Colorimetric DNA Sensor Using Pyrrolidinyl Peptide Nucleic Acid-Induced AgNPs Aggregation for Detecting MERS-CoV, MTB, and HPV Oligonucleotides",S566061,R141418,Virus,L397313,MERS-CoV,"The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141419,Identification of sialic acid-binding function for the Middle East respiratory syndrome coronavirus spike glycoprotein,S566090,R141420,Virus,L397339,MERS-CoV,"Significance Middle East respiratory syndrome coronavirus (MERS-CoV) recurrently infects humans from its dromedary camel reservoir, causing severe respiratory disease with an ∼35% fatality rate. The virus binds to the dipeptidyl peptidase 4 (DPP4) entry receptor on respiratory epithelial cells via its spike protein. We here report that the MERS-CoV spike protein selectively binds to sialic acid (Sia) and demonstrate that cell-surface sialoglycoconjugates can serve as an attachment factor. Our observations warrant further research into the role of Sia binding in the virus’s host and tissue tropism and transmission, which may be influenced by the observed Sia-binding fine specificity and by differences in sialoglycomes among host species. Middle East respiratory syndrome coronavirus (MERS-CoV) targets the epithelial cells of the respiratory tract both in humans and in its natural host, the dromedary camel. Virion attachment to host cells is mediated by 20-nm-long homotrimers of spike envelope protein S. The N-terminal subunit of each S protomer, called S1, folds into four distinct domains designated S1A through S1D. Binding of MERS-CoV to the cell surface entry receptor dipeptidyl peptidase 4 (DPP4) occurs via S1B. We now demonstrate that in addition to DPP4, MERS-CoV binds to sialic acid (Sia). Initially demonstrated by hemagglutination assay with human erythrocytes and intact virus, MERS-CoV Sia-binding activity was assigned to S subdomain S1A. When multivalently displayed on nanoparticles, S1 or S1A bound to human erythrocytes and to human mucin in a strictly Sia-dependent fashion. Glycan array analysis revealed a preference for α2,3-linked Sias over α2,6-linked Sias, which correlates with the differential distribution of α2,3-linked Sias and the predominant sites of MERS-CoV replication in the upper and lower respiratory tracts of camels and humans, respectively. Binding is hampered by Sia modifications such as 5-N-glycolylation and (7,)9-O-acetylation. Depletion of cell surface Sia by neuraminidase treatment inhibited MERS-CoV entry of Calu-3 human airway cells, thus providing direct evidence that virus–Sia interactions may aid in virion attachment. The combined observations lead us to propose that high-specificity, low-affinity attachment of MERS-CoV to sialoglycans during the preattachment or early attachment phase may form another determinant governing the host range and tissue tropism of this zoonotic pathogen.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141421,Species-Specific Colocalization of Middle East Respiratory Syndrome Coronavirus Attachment and Entry Receptors,S566111,R141422,Virus,L397357,MERS-CoV,"MERS-CoV uses the S1B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of α2,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates. ABSTRACT Middle East respiratory syndrome coronavirus (MERS-CoV) uses the S1B domain of its spike protein to bind to dipeptidyl peptidase 4 (DPP4), its functional receptor, and its S1A domain to bind to sialic acids. The tissue localization of DPP4 in humans, bats, camelids, pigs, and rabbits generally correlates with MERS-CoV tropism, highlighting the role of DPP4 in virus pathogenesis and transmission. However, MERS-CoV S1A does not indiscriminately bind to all α2,3-sialic acids, and the species-specific binding and tissue distribution of these sialic acids in different MERS-CoV-susceptible species have not been investigated. We established a novel method to detect these sialic acids on tissue sections of various organs of different susceptible species by using nanoparticles displaying multivalent MERS-CoV S1A. We found that the nanoparticles specifically bound to the nasal epithelial cells of dromedary camels, type II pneumocytes in human lungs, and the intestinal epithelial cells of common pipistrelle bats. Desialylation by neuraminidase abolished nanoparticle binding and significantly reduced MERS-CoV infection in primary susceptible cells. In contrast, S1A nanoparticles did not bind to the intestinal epithelium of serotine bats and frugivorous bat species, nor did they bind to the nasal epithelium of pigs and rabbits. Both pigs and rabbits have been shown to shed less infectious virus than dromedary camels and do not transmit the virus via either contact or airborne routes. Our results depict species-specific colocalization of MERS-CoV entry and attachment receptors, which may be relevant in the transmission and pathogenesis of MERS-CoV. IMPORTANCE MERS-CoV uses the S1B domain of its spike protein to attach to its host receptor, dipeptidyl peptidase 4 (DPP4). The tissue localization of DPP4 has been mapped in different susceptible species. On the other hand, the S1A domain, the N-terminal domain of this spike protein, preferentially binds to several glycotopes of α2,3-sialic acids, the attachment factor of MERS-CoV. Here we show, using a novel method, that the S1A domain specifically binds to the nasal epithelium of dromedary camels, alveolar epithelium of humans, and intestinal epithelium of common pipistrelle bats. In contrast, it does not bind to the nasal epithelium of pigs or rabbits, nor does it bind to the intestinal epithelium of serotine bats and frugivorous bat species. This finding supports the importance of the S1A domain in MERS-CoV infection and tropism, suggests its role in transmission, and highlights its potential use as a component of novel vaccine candidates.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R160791,Mechanochemical Synthesis of Pharmaceutical Cocrystal Suspensions via Hot Melt Extrusion: Feasibility Studies and Physicochemical Characterization,S641672,R160795,Carrier for hot melt extrusion,R160767,Xylitol,"Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R144478,Co-delivery of doxorubicin and siRNA for glioma therapy by a brain targeting system: angiopep-2-modified poly(lactic-co-glycolic acid) nanoparticles,S578673,R144480,Surface functionalized with,R144481,Angiopep-2,"Abstract It is very challenging to treat brain cancer because of the blood–brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141399,A novel nanobody targeting Middle East respiratory syndrome coronavirus (MERS-CoV) receptor-binding domain has potent cross-neutralizing activity and protective efficacy against MERS-CoV,S565888,R141400,Virus,L397167,MERS-CoV,"Therapeutic development is critical for preventing and treating continual MERS-CoV infections in humans and camels. Because of their small size, nanobodies (Nbs) have advantages as antiviral therapeutics (e.g., high expression yield and robustness for storage and transportation) and also potential limitations (e.g., low antigen-binding affinity and fast renal clearance). Here, we have developed novel Nbs that specifically target the receptor-binding domain (RBD) of MERS-CoV spike protein. They bind to a conserved site on MERS-CoV RBD with high affinity, blocking RBD's binding to MERS-CoV receptor. Through engineering a C-terminal human Fc tag, the in vivo half-life of the Nbs is significantly extended. Moreover, the Nbs can potently cross-neutralize the infections of diverse MERS-CoV strains isolated from humans and camels. The Fc-tagged Nb also completely protects humanized mice from lethal MERS-CoV challenge. Taken together, our study has discovered novel Nbs that hold promise as potent, cost-effective, and broad-spectrum anti-MERS-CoV therapeutic agents.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R147006,Exendin-4-Loaded PLGA Microspheres Relieve Cerebral Ischemia/Reperfusion Injury and Neurologic Deficits through Long-Lasting Bioactivity-Mediated Phosphorylated Akt/eNOS Signaling in Rats,S588757,R147008,Uses drug,L409901,exendin-4,"Glucagon-like peptide-1 (GLP-1) receptor activation in the brain provides neuroprotection. Exendin-4 (Ex-4), a GLP-1 analog, has seen limited clinical usage because of its short half-life. We developed long-lasting Ex-4-loaded poly(D,L-lactide-co-glycolide) microspheres (PEx-4) and explored its neuroprotective potential against cerebral ischemia in diabetic rats. Compared with Ex-4, PEx-4 in the gradually degraded microspheres sustained higher Ex-4 levels in the plasma and cerebrospinal fluid for at least 2 weeks and improved diabetes-induced glycemia after a single subcutaneous administration (20 μg/day). Ten minutes of bilateral carotid artery occlusion (CAO) combined with hemorrhage-induced hypotension (around 30 mm Hg) significantly decreased cerebral blood flow and microcirculation in male Wistar rats subjected to streptozotocin-induced diabetes. CAO increased cortical O 2 – levels by chemiluminescence amplification and prefrontal cortex edema by T2-weighted magnetic resonance imaging analysis. CAO significantly increased aquaporin 4 and glial fibrillary acidic protein expression and led to cognition deficits. CAO downregulated phosphorylated Akt/endothelial nitric oxide synthase (p-Akt/p-eNOS) signaling and enhanced nuclear factor (NF)-κBp65/ intercellular adhesion molecule-1 (ICAM-1) expression, endoplasmic reticulum (ER) stress, and apoptosis in the cerebral cortex. PEx-4 was more effective than Ex-4 to improve CAO-induced oxidative injury and cognitive deficits. The neuroprotection provided by PEx-4 was through p-Akt/p-eNOS pathways, which suppressed CAO-enhanced NF- κB/ICAM-1 signaling, ER stress, and apoptosis.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505294,R110815,has cell line,R110968,MCF-10A,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R147246,PEG-g-chitosan nanoparticles functionalized with the monoclonal antibody OX26 for brain drug targeting,S590244,R147248,Surface functionalized with,R147251,OX26,"AIM Drug targeting to the CNS is challenging due to the presence of blood-brain barrier. We investigated chitosan (Cs) nanoparticles (NPs) as drug transporter system across the blood-brain barrier, based on mAb OX26 modified Cs. MATERIALS & METHODS Cs NPs functionalized with PEG, modified and unmodified with OX26 (Cs-PEG-OX26) were prepared and chemico-physically characterized. These NPs were administered (intraperitoneal) in mice to define their ability to reach the brain. RESULTS Brain uptake of OX26-conjugated NPs is much higher than of unmodified NPs, because: long-circulating abilities (conferred by PEG), interaction between cationic Cs and brain endothelium negative charges and OX26 TfR receptor affinity. CONCLUSION Cs-PEG-OX26 NPs are promising drug delivery system to the CNS.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R144137,Low active loading of cargo into engineered extracellular vesicles results in inefficient miRNA mimic delivery,S576977,R144142,Fusion protein,R144146,TAT-TAR,"ABSTRACT Extracellular vesicles (EVs) hold great potential as novel systems for nucleic acid delivery due to their natural composition. Our goal was to load EVs with microRNA that are synthesized by the cells that produce the EVs. HEK293T cells were engineered to produce EVs expressing a lysosomal associated membrane, Lamp2a fusion protein. The gene encoding pre-miR-199a was inserted into an artificial intron of the Lamp2a fusion protein. The TAT peptide/HIV-1 transactivation response (TAR) RNA interacting peptide was exploited to enhance the EV loading of the pre-miR-199a containing a modified TAR RNA loop. Computational modeling demonstrated a stable interaction between the modified pre-miR-199a loop and TAT peptide. EMSA gel shift, recombinant Dicer processing and luciferase binding assays confirmed the binding, processing and functionality of the modified pre-miR-199a. The TAT-TAR interaction enhanced the loading of the miR-199a into EVs by 65-fold. Endogenously loaded EVs were ineffective at delivering active miR-199a-3p therapeutic to recipient SK-Hep1 cells. While the low degree of miRNA loading into EVs through this approach resulted in inefficient distribution of RNA cargo into recipient cells, the TAT TAR strategy to load miRNA into EVs may be valuable in other drug delivery approaches involving miRNA mimics or other hairpin containing RNAs.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141401,Application of camelid heavy-chain variable domains (VHHs) in prevention and treatment of bacterial and viral infections,S565905,R141402,Type of nanoparticles,L397181,VHHs,"ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity – characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host–pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. 
The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.",TRUE,acronym
R67,Medicinal Chemistry and Pharmaceutics,R141413,Novel coronavirus-like particles targeting cells lining the respiratory tract,S566027,R141414,Type of nanoparticles,L397285,VLPs,"Virus like particles (VLPs) produced by the expression of viral structural proteins can serve as versatile nanovectors or potential vaccine candidates. In this study we describe for the first time the generation of HCoV-NL63 VLPs using baculovirus system. Major structural proteins of HCoV-NL63 have been expressed in tagged or native form, and their assembly to form VLPs was evaluated. Additionally, a novel procedure for chromatography purification of HCoV-NL63 VLPs was developed. Interestingly, we show that these nanoparticles may deliver cargo and selectively transduce cells expressing the ACE2 protein such as ciliated cells of the respiratory tract. Production of a specific delivery vector is a major challenge for research concerning targeting molecules. The obtained results show that HCoV-NL63 VLPs may be efficiently produced, purified, modified and serve as a delivery platform. This study constitutes an important basis for further development of a promising viral vector displaying narrow tissue tropism.",TRUE,acronym
R359,Medicine and Health,R109777,Staff Shortage in German Intensive Care Units During the COVID-19 Pandemic - Not only a Sensed Dilemma: Results from a Nationwide Survey,S500992,R109780,has research problem,R109783,COVID-19,"Background: The surge in patients during the COVID-19 pandemic has exacerbated the looming problem of staff shortage in German ICUs possibly leading to worse outcomes for patients. Methods: Within the German Evidence Ecosystem CEOsys network, we conducted an online national mixed-methods survey assessing the standard of care in German ICUs treating patients with COVID-19. Results: A total of 171 German ICUs reported a median ideal number of patients per intensivist of 8 (interquartile range, IQR = 3rd quartile - 1st quartile = 4.0) and per nurse of 2.0 (IQR = 1.0). For COVID-19 patients, the median target was a maximum of 6.0 (IQR = 2.0) patients per intensivist or 2.0 (IQR = 0.0) patients per nurse. Targets for intensivists were rarely met by 15.2% and never met by 3.5% of responding institutions. Targets for nursing staffing could rarely be met in 32.2% and never in 5.3% of responding institutions. Conclusions: Shortages of staffing in the critical care setting are imminent during the COVID-19 pandemic and might not only negatively affect patient outcomes, but also staff wellbeing and healthcare costs. A joint effort that scrutinizes the demands and structures of our health care system seems fundamental to be prepared for the future.",TRUE,acronym
R63,Molecular and Cellular Neuroscience,R110387,Aldehyde dehydrogenase 2 activity and aldehydic load contribute to neuroinflammation and Alzheimer’s disease related pathology,S505180,R110390,keywords,R110933,Alda-1,"Abstract Aldehyde dehydrogenase 2 deficiency (ALDH2*2) causes facial flushing in response to alcohol consumption in approximately 560 million East Asians. Recent meta-analysis demonstrated the potential link between ALDH2*2 mutation and Alzheimer’s Disease (AD). Other studies have linked chronic alcohol consumption as a risk factor for AD. In the present study, we show that fibroblasts of an AD patient that also has an ALDH2*2 mutation or overexpression of ALDH2*2 in fibroblasts derived from AD patients harboring ApoE ε4 allele exhibited increased aldehydic load, oxidative stress, and increased mitochondrial dysfunction relative to healthy subjects and exposure to ethanol exacerbated these dysfunctions. In an in vivo model, daily exposure of WT mice to ethanol for 11 weeks resulted in mitochondrial dysfunction, oxidative stress and increased aldehyde levels in their brains and these pathologies were greater in ALDH2*2/*2 (homozygous) mice. Following chronic ethanol exposure, the levels of the AD-associated protein, amyloid-β, and neuroinflammation were higher in the brains of the ALDH2*2/*2 mice relative to WT. Cultured primary cortical neurons of ALDH2*2/*2 mice showed increased sensitivity to ethanol and there was a greater activation of their primary astrocytes relative to the responses of neurons or astrocytes from the WT mice. Importantly, an activator of ALDH2 and ALDH2*2, Alda-1, blunted the ethanol-induced increases in Aβ, and the neuroinflammation in vitro and in vivo. 
These data indicate that impairment in the metabolism of aldehydes, and specifically ethanol-derived acetaldehyde, is a contributor to AD associated pathology and highlights the likely risk of alcohol consumption in the general population and especially in East Asians that carry ALDH2*2 mutation.",TRUE,acronym
R63,Molecular and Cellular Neuroscience,R110387,Aldehyde dehydrogenase 2 activity and aldehydic load contribute to neuroinflammation and Alzheimer’s disease related pathology,S505389,R110390,proteins detected by western blot ,R110992,ALDH2,"Abstract Aldehyde dehydrogenase 2 deficiency (ALDH2*2) causes facial flushing in response to alcohol consumption in approximately 560 million East Asians. Recent meta-analysis demonstrated the potential link between ALDH2*2 mutation and Alzheimer’s Disease (AD). Other studies have linked chronic alcohol consumption as a risk factor for AD. In the present study, we show that fibroblasts of an AD patient that also has an ALDH2*2 mutation or overexpression of ALDH2*2 in fibroblasts derived from AD patients harboring ApoE ε4 allele exhibited increased aldehydic load, oxidative stress, and increased mitochondrial dysfunction relative to healthy subjects and exposure to ethanol exacerbated these dysfunctions. In an in vivo model, daily exposure of WT mice to ethanol for 11 weeks resulted in mitochondrial dysfunction, oxidative stress and increased aldehyde levels in their brains and these pathologies were greater in ALDH2*2/*2 (homozygous) mice. Following chronic ethanol exposure, the levels of the AD-associated protein, amyloid-β, and neuroinflammation were higher in the brains of the ALDH2*2/*2 mice relative to WT. Cultured primary cortical neurons of ALDH2*2/*2 mice showed increased sensitivity to ethanol and there was a greater activation of their primary astrocytes relative to the responses of neurons or astrocytes from the WT mice. Importantly, an activator of ALDH2 and ALDH2*2, Alda-1, blunted the ethanol-induced increases in Aβ, and the neuroinflammation in vitro and in vivo. 
These data indicate that impairment in the metabolism of aldehydes, and specifically ethanol-derived acetaldehyde, is a contributor to AD associated pathology and highlights the likely risk of alcohol consumption in the general population and especially in East Asians that carry ALDH2*2 mutation.",TRUE,acronym
R279,Nanoscience and Nanotechnology,R151360,ZnO Nanotube Arrays as Biosensors for Glucose,S607200,R151362,Reference Electrode,L419852,ITO,"Highly oriented single-crystal ZnO nanotube (ZNT) arrays were prepared by a two-step electrochemical/chemical process on indium-doped tin oxide (ITO) coated glass in an aqueous solution. The prepared ZNT arrays were further used as a working electrode to fabricate an enzyme-based glucose biosensor through immobilizing glucose oxidase in conjunction with a Nafion coating. The present ZNT arrays-based biosensor exhibits high sensitivity of 30.85 μA cm−2 mM−1 at an applied potential of +0.8 V vs. SCE, wide linear calibration ranges from 10 μM to 4.2 mM, and a low limit of detection (LOD) at 10 μM (measured) for sensing of glucose. The apparent Michaelis−Menten constant KMapp was calculated to be 2.59 mM, indicating a higher bioactivity for the biosensor.",TRUE,acronym
R279,Nanoscience and Nanotechnology,R155382,Highly Sensitive Electromechanical Piezoresistive Pressure Sensors Based on Large-Area Layered PtSe2 Films,S624062,R155387,Material,L429572,PtSe2,"Two-dimensional (2D) layered materials are ideal for micro- and nanoelectromechanical systems (MEMS/NEMS) due to their ultimate thinness. Platinum diselenide (PtSe2), an exciting and unexplored 2D transition metal dichalcogenide material, is particularly interesting because its low temperature growth process is scalable and compatible with silicon technology. Here, we report the potential of thin PtSe2 films as electromechanical piezoresistive sensors. All experiments have been conducted with semimetallic PtSe2 films grown by thermally assisted conversion of platinum at a complementary metal–oxide–semiconductor (CMOS)-compatible temperature of 400 °C. We report high negative gauge factors of up to −85 obtained experimentally from PtSe2 strain gauges in a bending cantilever beam setup. Integrated NEMS piezoresistive pressure sensors with freestanding PMMA/PtSe2 membranes confirm the negative gauge factor and exhibit very high sensitivity, outperforming previously reported values by orders of magnitude. We employ density functional theory calculations to understand the origin of the measured negative gauge factor. Our results suggest PtSe2 as a very promising candidate for future NEMS applications, including integration into CMOS production lines.",TRUE,acronym
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S595005,R148380,Target gas,L413622,H2S,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response (Rair/Rgas = 203.5), unparalleled selectivity (Rair/Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,acronym
R279,Nanoscience and Nanotechnology,R155375,Sensitive Electronic-Skin Strain Sensor Array Based on the Patterned Two-Dimensional α-In2Se3,S624067,R155377,Material,L429577,In2Se3,"Two-dimensional (2D) layered semiconductors have emerged as a highly attractive class of materials for flexible and wearable strain sensor-centric devices such as electronic-skin (e-skin). This is primarily due to their dimensionality, excellent mechanical flexibility, and unique electronic properties. However, the lack of effective and low-cost methods for wafer-scale fabrication of these materials for strain sensor arrays limits their potential for such applications. Here, we report growth of large-scale 2D In2Se3 nanosheets by templated chemical vapor deposition (CVD) method, using In2O3 and Se powders as precursors. The strain sensors fabricated from the as-grown 2D In2Se3 films show 2 orders of magnitude higher sensitivity (gauge factor ∼237 in −0.39% to 0.39% uniaxial strain range along the device channel length) than what has been demonstrated from conventional metal-based (gauge factor: ∼1–5) and graphene-based strain sensors (gauge factor: ∼2–4) in a similar uniaxial strain range. The integrated ...",TRUE,acronym
R279,Nanoscience and Nanotechnology,R155372,"MoS2-Based Tactile Sensor for Electronic Skin Applications",S624042,R155373,Material,L429555,MoS2,"A conformal tactile sensor based on MoS2 and graphene is demonstrated. The MoS2 tactile sensor exhibits excellent sensitivity, high uniformity, and good repeatability in terms of various strains. In addition, the outstanding flexibility enables the MoS2 strain tactile sensor to be realized conformally on a finger tip. The MoS2-based tactile sensor can be utilized for wearable electronics, such as electronic skin.",TRUE,acronym
R279,Nanoscience and Nanotechnology,R155388,Kirigami-Inspired Highly Stretchable Nanoscale Devices Using Multidimensional Deformation of Monolayer MoS2,S624048,R155395,Material,L429558,MoS2,"Two-dimensional (2D) layered materials, such as MoS2, are greatly attractive for flexible devices due to their unique layered structures, novel physical and electronic properties, and high mechanical strength. However, their limited mechanical strains (<2%) can hardly meet the demands of loading conditions for most flexible and stretchable device applications. In this Article, inspired from Kirigami, the ancient Japanese art of paper cutting, we design and fabricate nanoscale Kirigami architectures of 2D layered MoS2 on a soft substrate of polydimethylsiloxane (PDMS) using a top-down fabrication process. Results show that the Kirigami structures significantly improve the reversible stretchability of flexible 2D MoS2 electronic devices, which is increased from 0.75% to ∼15%. This increase in flexibility is originated from a combination of multidimensional deformation capabilities from the nanoscale Kirigami architectures consisting of in-plane stretching and out-of-plane deformation. We further discover a ...",TRUE,acronym
R279,Nanoscience and Nanotechnology,R155396,Piezoresistive strain sensor based on monolayer molybdenum disulfide continuous film deposited by chemical vapor deposition,S624044,R155401,Material,L429556,MoS2,"In this paper, a centimeter-scale monolayer molybdenum disulfide (MoS2) film deposition method has been developed through a simple low-pressure chemical vapor deposition (LPCVD) growth system. The growth pressure dependence on film quality is investigated in this LPCVD system. The layer nature, electrical characteristic of the as-grown MoS2 films indicate that high quality films have been achieved. In addition, a hydrofluoric acid treated SiO2/Si substrate is used to improve the quality of the MoS2 films. Piezoresistive strain sensor based on the monolayer MoS2 film elements is fabricated by directly patterning metal contact pads on MoS2 films through a silicon stencil mask. A gauge factor of 104 ± 26 under compressive strain is obtained by using a four-point bending method, which may inspire new possibilities for two-dimensional (2D) material-based microsystems and electronics.",TRUE,acronym
R279,Nanoscience and Nanotechnology,R161623,"Stretchable, Transparent, Ultrasensitive, and Patchable Strain Sensor for Human–Machine Interfaces Comprising a Nanohybrid of Carbon Nanotubes and Conductive Elastomers",S645391,R161625,Sensing material,L440872,SWCNT/PU-PEDOT:PSS,"UNLABELLED Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.",TRUE,acronym
R145261,Natural Language Processing,R172664,End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF,S689058,R172666,model,R172686,CRF,"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both data sets --- 97.55% accuracy for POS tagging and 91.21% F1 for NER.",TRUE,acronym
R145261,Natural Language Processing,R162920,GATE: an architecture for development of robust HLT applications,S649800,R162922,Tool name,R162923,GATE,"In this paper we present GATE, a framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated. The framework can be used to develop applications and resources in multiple languages, based on its thorough Unicode support.",TRUE,acronym
R145261,Natural Language Processing,R162526,Overview of the BioCreative VI text-mining services for Kinome Curation Track,S686816,R172039,data source,R148046,MEDLINE,"Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants’ systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.",TRUE,acronym
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655534,R164172,data source,R148046,MEDLINE,"Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,acronym
R145261,Natural Language Processing,R172664,End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF,S689029,R172666,Result,R172671,NER,"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both data sets --- 97.55% accuracy for POS tagging and 91.21% F1 for NER.",TRUE,acronym
R145261,Natural Language Processing,R166235,WEXEA: Wikipedia EXhaustive Entity Annotation,S662004,R166237,Dataset name,R166238,WEXEA,"Building predictive models for information extraction from text, such as named entity recognition or the extraction of semantic relationships between named entities in text, requires a large corpus of annotated text. Wikipedia is often used as a corpus for these tasks where the annotation is a named entity linked by a hyperlink to its article. However, editors on Wikipedia are only expected to link these mentions in order to help the reader to understand the content, but are discouraged from adding links that do not add any benefit for understanding an article. Therefore, many mentions of popular entities (such as countries or popular events in history), or previously linked articles, as well as the article’s entity itself, are not linked. In this paper, we discuss WEXEA, a Wikipedia EXhaustive Entity Annotation system, to create a text corpus based on Wikipedia with exhaustive annotations of entity mentions, i.e. linking all mentions of entities to their corresponding articles. This results in a huge potential for additional annotations that can be used for downstream NLP tasks, such as Relation Extraction. We show that our annotations are useful for creating distantly supervised datasets for this task. Furthermore, we publish all code necessary to derive a corpus from a raw Wikipedia dump, so that it can be reproduced by everyone.",TRUE,acronym
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655535,R164172,Dataset name,R164173,MedCo,"Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,acronym
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S687048,R172126,Ontology used,R145007,MeSH,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,acronym
R145261,Natural Language Processing,R182418,SPECTER: Document-level Representation Learning using Citation-informed Transformers,S705882,R182420,Material,R182432,SciDocs,"Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.",TRUE,acronym
R145261,Natural Language Processing,R182418,SPECTER: Document-level Representation Learning using Citation-informed Transformers,S705881,R182420,On evaluation dataset,R182431,SciDocs,"Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.",TRUE,acronym
R145261,Natural Language Processing,R69288,"Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction",S583781,R69289,Dataset name,R145781,SciERC,"We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.",TRUE,acronym
R145261,Natural Language Processing,R69288,"Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction",S583808,R69289,model,R116717,SciIE,"We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.",TRUE,acronym
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S588008,R146855,Dataset name,R146863,SciREX,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,acronym
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S661221,R165882,Ontologies used,R165884,SNOMED-CT,"One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,acronym
R145261,Natural Language Processing,R141070,Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task,S579406,R141072,Dataset name,R144718,ENG-SPA,"In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.",TRUE,acronym
R145261,Natural Language Processing,R163224,An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature,S651014,R163226,Other resources,R142499,ICD-10,"The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html",TRUE,acronym
R145261,Natural Language Processing,R141070,Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task,S579407,R141072,Dataset name,R144719,MSA-EGY,"In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.",TRUE,acronym
R137,Numerical Analysis/Scientific Computing,R109703,Simulation of Severe Accident Progression Using ROSHNI: A New Integrated Simulation Code for PHWR Severe Accidents,S501367,R109705,Nuclear reactor type,R109916,CANDU,"As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents have largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publically seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. 
They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such un-necessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels, 4560 bundle, 37 elements in four concentric ring, Zircaloy clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different number of fuel channels and bundles per each channel. Each of horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders are modelled and in detail that reflects large across core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with other 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the miniscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. 
It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will be also be incomplete with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. It’s phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple to implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accidental management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. Case in point is the PSA supported design and installed number of Hydrogen recombiners that are neither for the right gas (designed mysteriously for H2 instead of D2) or its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds a light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.",TRUE,acronym
R137,Numerical Analysis/Scientific Computing,R109703,Simulation of Severe Accident Progression Using ROSHNI: A New Integrated Simulation Code for PHWR Severe Accidents,S501369,R109705,Nuclear reactor type,R109918,PHWR,"As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents have largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publically seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. 
They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such un-necessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels, 4560 bundle, 37 elements in four concentric ring, Zircaloy clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different number of fuel channels and bundles per each channel. Each of horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders are modelled and in detail that reflects large across core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with other 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the miniscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. 
It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will be also be incomplete with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. It’s phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple to implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accidental management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. Case in point is the PSA supported design and installed number of Hydrogen recombiners that are neither for the right gas (designed mysteriously for H2 instead of D2) or its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds a light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.",TRUE,acronym
R137,Numerical Analysis/Scientific Computing,R109703,Simulation of Severe Accident Progression Using ROSHNI: A New Integrated Simulation Code for PHWR Severe Accidents,S501366,R109705,Software Used,R109915,ROSHNI,"As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima that caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents have largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures that have been instituted at CANDU PHWRs, while meeting the important industry objective of publically seeming to be doing something about lessons learnt from say Fukushima and showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. Integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. They do, however to their credit, also excel in their computational speed, largely because they model and compute so little and with such un-necessary simplifications. 
ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels, 4560 bundle, 37 elements in four concentric ring, Zircaloy clad fuel geometry, materials and fluids more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different number of fuel channels and bundles per each channel. Each of horizontal PHWR reactor channels with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders are modelled and in detail that reflects large across core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously along with other 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the miniscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. It is able to thus demonstrate that PHWR core disassembly is not only gradual, it will be also be incomplete with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. 
The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. It’s phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple to implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accidental management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. Case in point is the PSA supported design and installed number of Hydrogen recombiners that are neither for the right gas (designed mysteriously for H2 instead of D2) or its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds a light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.",TRUE,acronym
R129,Organic Chemistry,R154468,"Atmospheric Hydrodeoxygenation of Guaiacol over Alumina-, Zirconia-, and Silica-Supported Nickel Phosphide Catalysts",S618411,R154470,catalyst,R154472,Ni2P/SiO2,"This study investigated atmospheric hydrodeoxygenation (HDO) of guaiacol over Ni2P-supported catalysts. Alumina, zirconia, and silica served as the supports of Ni2P catalysts. The physicochemical properties of these catalysts were surveyed by N2 physisorption, X-ray diffraction (XRD), CO chemisorption, H2 temperature-programmed reduction (H2-TPR), H2 temperature-programmed desorption (H2-TPD), and NH3 temperature-programmed desorption (NH3-TPD). The catalytic performance of these catalysts was tested in a continuous fixed-bed system. This paper proposes a plausible network of atmospheric guaiacol HDO, containing demethoxylation (DMO), demethylation (DME), direct deoxygenation (DDO), hydrogenation (HYD), transalkylation, and methylation. Pseudo-first-order kinetics analysis shows that the intrinsic activity declined in the following order: Ni2P/ZrO2 > Ni2P/Al2O3 > Ni2P/SiO2. Product selectivity at zero guaiacol conversion indicates that Ni2P/SiO2 promotes DMO and DDO routes, whereas Ni2P/ZrO2 and Ni2P/Al2O...",TRUE,acronym
R130,Physical Chemistry,R135710,Continuous Symmetry Breaking Induced by Ion Pairing Effect in Heptamethine Cyanine Dyes: Beyond the Cyanine Limit,S536902,R135714,Counterion,L378454,TRISPHAT,"The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.",TRUE,acronym
R130,Physical Chemistry,R135710,Continuous Symmetry Breaking Induced by Ion Pairing Effect in Heptamethine Cyanine Dyes: Beyond the Cyanine Limit,S536900,R135714,BLA evaluation method,L378452,X-Ray,"The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.",TRUE,acronym
R138056,Planetary Sciences,R155421,"Compositional stratigraphy of clay-bearing layered deposits at Mawrth Vallis, Mars: STRATIGRAPHY OF CLAY-BEARING DEPOSITS ON MARS",S622195,R155422,Supplementary Information,R155418,HRSC,"Phyllosilicates have previously been detected in layered outcrops in and around the Martian outflow channel Mawrth Vallis. CRISM spectra of these outcrops exhibit features diagnostic of kaolinite, montmorillonite, and Fe/Mg‐rich smectites, along with crystalline ferric oxide minerals such as hematite. These minerals occur in distinct stratigraphic horizons, implying changing environmental conditions and/or a variable sediment source for these layered deposits. Similar stratigraphic sequences occur on both sides of the outflow channel and on its floor, with Al‐clay‐bearing layers typically overlying Fe/Mg‐clay‐bearing layers. This pattern, combined with layer geometries measured using topographic data from HiRISE and HRSC, suggests that the Al‐clay‐bearing horizons at Mawrth Vallis postdate the outflow channel and may represent a later sedimentary or altered pyroclastic deposit that drapes the topography.",TRUE,acronym
R138056,Planetary Sciences,R138512,Raman spectroscopy for mineral identification and quantification for in situ planetary surface analysis: A point count method,S550058,R138514,Samples,L387071,"14161,7062","Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.",TRUE,number
R138056,Planetary Sciences,R138512,Raman spectroscopy for mineral identification and quantification for in situ planetary surface analysis: A point count method,S550059,R138514,Samples,L387072,"15273,7039","Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.",TRUE,number
R138056,Planetary Sciences,R138508,Raman spectroscopy as a method for mineral identification on lunar robotic exploration missions,S549975,R138510,Raman Stokes-shift range (cm-1),L386993,100-1400,"fiber bundle that carried the laser beam and returned the scattered radiation could be placed against surfaces at any desired angle by a deployment mechanism; otherwise, the instrument would need no moving parts. A modern micro-Raman spectrometer with its beam broadened (to expand the spot to 50-μm diameter) and set for low resolution (7 cm-1 in the 100-1400 cm-1 region relative to 514.5-nm excitation), was used to simulate the spectra anticipated from a rover instrument. We present spectra for lunar mineral grains, <1 mm soil fines, breccia fragments, and glasses. From frequencies of olivine peaks, we derived sufficiently precise forsterite contents to correlate the analyzed grains to known rock types and we obtained appropriate forsterite contents from weak signals above background in soil fines and breccias. Peak positions of pyroxenes were sufficiently well determined to distinguish among orthorhombic, monoclinic, and triclinic (pyroxenoid) structures; additional information can be obtained from pyroxene spectra, but requires further laboratory calibration. Plagioclase provided sharp peaks in soil fines and most breccias even when the glass content was high.",TRUE,acronym
R185,Plasma and Beam Physics,R139109,Cold Atmospheric Pressure Plasma VUV Interactions With Surfaces: Effect of Local Gas Environment and Source Design,S554333,R139180,VUV,L390098,FILTERS,"This study uses photoresist materials in combination with several optical filters as a diagnostic to examine the relative importance of VUV-induced surface modifications for different cold atmospheric pressure plasma (CAPP) sources. The argon fed kHz-driven ring-APPJ showed the largest ratio of VUV surface modification relative to the total modification introduced, whereas the MHz APPJ showed the largest overall surface modification. The MHz APPJ shows increased total thickness reduction and reduced VUV effect as oxygen is added to the feed gas, a condition that is often used for practical applications. We examine the influence of noble gas flow from the APPJ on the local environment. The local environment has a decisive impact on polymer modification from VUV emission as O2 readily absorbs VUV photons.",TRUE,acronym
R185,Plasma and Beam Physics,R139065,Etching materials with an atmospheric-pressure plasma jet,S554042,R139165,Unit_frequency,L389837,MHz,"A plasma jet has been developed for etching materials at atmospheric pressure and between 100 and C. Gas mixtures containing helium, oxygen and carbon tetrafluoride were passed between an outer, grounded electrode and a centre electrode, which was driven by 13.56 MHz radio frequency power at 50 to 500 W. At a flow rate of , a stable, arc-free discharge was produced. This discharge extended out through a nozzle at the end of the electrodes, forming a plasma jet. Materials placed 0.5 cm downstream from the nozzle were etched at the following maximum rates: for Kapton ( and He only), for silicon dioxide, for tantalum and for tungsten. Optical emission spectroscopy was used to identify the electronically excited species inside the plasma and outside in the jet effluent.",TRUE,acronym
R185,Plasma and Beam Physics,R139074,RF Capillary Jet - a Tool for Localized Surface Treatment,S554106,R139168,Unit_frequency,L389895,MHz,"The UV/VUV spectrum of a non‐thermal capillary plasma jet operating with Ar at ambient atmosphere and the temperature load of a substrate exposed to the jet have been measured. The VUV radiation is assigned to N, H, and O atomic lines along with an Ar2* excimer continuum. The absolute radiance (115‐200 nm) of the source has been determined. Maximum values of 880 μW·mm-2·sr-1 are obtained. Substrate temperatures range between 35 °C for low powers and high gas flow conditions and 95 °C for high powers and reduced gas flow. The plasma source (13.56, 27.12 or 40.78 MHz) can be operated in Ar and in N2. The further addition of a low percentage of silicon containing reactive admixtures has been demonstrated for thin film deposition. Several further applications related to surface modification have been successfully applied. (© 2007 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)",TRUE,acronym
R185,Plasma and Beam Physics,R139083,Vacuum UV Radiation of a Plasma Jet Operated With Rare Gases at Atmospheric Pressure,S554168,R139171,Unit_frequency,L389951,MHz,"The vacuum ultraviolet (VUV) emissions from 115 to 200 nm from the effluent of an RF (1.2 MHz) capillary jet fed with pure argon and binary mixtures of argon and xenon or krypton (up to 20%) are analyzed. The feed gas mixture is emanating into air at normal pressure. The Ar2 excimer second continuum, observed in the region of 120-135 nm, prevails in the pure Ar discharge. It decreases when small amounts (as low as 0.5%) of Xe or Kr are added. In that case, the resonant emission of Xe at 147 nm (or 124 nm for Kr, respectively) becomes dominant. The Xe2 second continuum at 172 nm appears for higher admixtures of Xe (10%). Furthermore, several N I emission lines, the O I resonance line, and H I line appear due to ambient air. Two absorption bands (120.6 and 124.6 nm) are present in the spectra. Their origin could be unequivocally associated to O2 and O3. The radiance is determined end-on at varying axial distance in absolute units for various mixtures of Ar/Xe and Ar/Kr and compared to pure Ar. Integration over the entire VUV wavelength region provides the integrated spectral distribution. Maximum values of 2.2 mW·mm-2·sr-1 are attained in pure Ar and at a distance of 4 mm from the outlet nozzle of the discharge. By adding diminutive admixtures of Kr or Xe, the intensity and spectral distribution is effectively changed.",TRUE,acronym
R185,Plasma and Beam Physics,R139086,Generation of atomic oxygen in the effluent of an atmospheric pressure plasma jet,S554186,R139172,Unit_frequency,L389967,MHz,"The planar 13.56 MHz RF-excited low temperature atmospheric pressure plasma jet (APPJ) investigated in this study is operated with helium feed gas and a small molecular oxygen admixture. The effluent leaving the discharge through the jet's nozzle contains very few charged particles and a high reactive oxygen species' density. As its main reactive radical, essential for numerous applications, the ground state atomic oxygen density in the APPJ's effluent is measured spatially resolved with two-photon absorption laser induced fluorescence spectroscopy. The atomic oxygen density at the nozzle reaches a value of ~10^16 cm−3. Even at several centimetres distance still 1% of this initial atomic oxygen density can be detected. Optical emission spectroscopy (OES) reveals the presence of short living excited oxygen atoms up to 10 cm distance from the jet's nozzle. The measured high ground state atomic oxygen density and the unaccounted for presence of excited atomic oxygen require further investigations on a possible energy transfer from the APPJ's discharge region into the effluent: energetic vacuum ultraviolet radiation, measured by OES down to 110 nm, reaches far into the effluent where it is presumed to be responsible for the generation of atomic oxygen.",TRUE,acronym
R185,Plasma and Beam Physics,R139109,Cold Atmospheric Pressure Plasma VUV Interactions With Surfaces: Effect of Local Gas Environment and Source Design,S554339,R139180,Unit_frequency,L390104,MHz,"This study uses photoresist materials in combination with several optical filters as a diagnostic to examine the relative importance of VUV-induced surface modifications for different cold atmospheric pressure plasma (CAPP) sources. The argon fed kHz-driven ring-APPJ showed the largest ratio of VUV surface modification relative to the total modification introduced, whereas the MHz APPJ showed the largest overall surface modification. The MHz APPJ shows increased total thickness reduction and reduced VUV effect as oxygen is added to the feed gas, a condition that is often used for practical applications. We examine the influence of noble gas flow from the APPJ on the local environment. The local environment has a decisive impact on polymer modification from VUV emission as O2 readily absorbs VUV photons.",TRUE,acronym
R185,Plasma and Beam Physics,R139112,Absolute ozone densities in a radio-frequency driven atmospheric pressure plasma using two-beam UV-LED absorption spectroscopy and numerical simulations,S554360,R139181,Unit_frequency,L390123,MHz,"The efficient generation of reactive oxygen species (ROS) in cold atmospheric pressure plasma jets (APPJs) is an increasingly important topic, e.g. for the treatment of temperature sensitive biological samples in the field of plasma medicine. A 13.56 MHz radio-frequency (rf) driven APPJ device operated with helium feed gas and small admixtures of oxygen (up to 1%), generating a homogeneous glow-mode plasma at low gas temperatures, was investigated. Absolute densities of ozone, one of the most prominent ROS, were measured across the 11 mm wide discharge channel by means of broadband absorption spectroscopy using the Hartley band centered at λ = 255 nm. A two-beam setup with a reference beam in Mach-Zehnder configuration is employed for improved signal-to-noise ratio allowing high-sensitivity measurements in the investigated single-pass weak-absorbance regime. The results are correlated to gas temperature measurements, deduced from the rotational temperature of the N2 (C 3Πu → B 3Πg, υ = 0 → 2) optical emission from introduced air impurities. The observed opposing trends of both quantities as a function of rf power input and oxygen admixture are analysed and explained in terms of a zero-dimensional plasma-chemical kinetics simulation. It is found that the gas temperature as well as the densities of O and O2(b 1Σg+) influence the absolute O3 densities when the rf power is varied.",TRUE,acronym
R185,Plasma and Beam Physics,R139135,2D spatially resolved O atom density profiles in an atmospheric pressure plasma jet: from the active plasma volume to the effluent,S554538,R139189,Unit_frequency,L390285,MHz,"Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. The results are interpreted based on measurements of the electron dynamics by phase resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 µs along the gas flow and reaches a plateau of 8 × 10^15 cm−3. In the effluent, the density decays exponentially with a decay time of 180 µs (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both, the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow.",TRUE,acronym
R185,Plasma and Beam Physics,R139124,Comparison of electron heating and energy loss mechanisms in an RF plasma jet operated in argon and helium,S554437,R139185,Plasma_discharge,L390192,COST-Jet,"The µ-APPJ is a well-investigated atmospheric pressure RF plasma jet. Up to now, it has mainly been operated using helium as feed gas due to stability restrictions. However, the COST-Jet design including precise electrical probes now offers the stability and reproducibility to create equi-operational plasmas in helium as well as in argon. In this publication, we compare fundamental plasma parameters and physical processes inside the COST reference microplasma jet, a capacitively coupled RF atmospheric pressure plasma jet, under operation in argon and in helium. Differences already observable by the naked eye are reflected in differences in the power-voltage characteristic for both gases. Using an electrical model and a power balance, we calculated the electron density and temperature at 0.6 W to be 9e17 m-3, 1.2 eV and 7.8e16 m-3, 1.7 eV for argon and helium, respectively. In case of helium, a considerable part of the discharge power is dissipated in elastic electron-atom collisions, while for argon most of the input power is used for ionization. Phase-resolved emission spectroscopy reveals differently pronounced heating mechanisms. Whereas bulk heating is more prominent in argon compared to helium, the opposite trend is observed for sheath heating. This also explains the different behavior observed in the power-voltage characteristics.",TRUE,acronym
R131,Polymer Chemistry,R161372,Biocatalytic Degradation Efficiency of Postconsumer Polyethylene Terephthalate Packaging Determined by Their Polymer Microstructures,S644443,R161375,Enzyme,R161380,TfCut2,"Polyethylene terephthalate (PET) is the most important mass‐produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco‐friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 °C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 °C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain‐length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo‐ and endo‐type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo‐type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.",TRUE,acronym
R11,Science,R28998,"Annotated facial landmarks in the wild: A large-scale, real- world database for facial landmark localization",S95857,R28999,Databases,L58719,AFLW ,"Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.",TRUE,acronym
R11,Science,R32984,"The importance of diagnostic cytogenetics on outcome in AML: analysis of 1,612 patients entered into the MRC AML 10 trial",S114226,R32985,Disease,R32983,AML,"Cytogenetics is considered one of the most valuable prognostic determinants in acute myeloid leukemia (AML). However, many studies on which this assertion is based were limited by relatively small sample sizes or varying treatment approach, leading to conflicting data regarding the prognostic implications of specific cytogenetic abnormalities. The Medical Research Council (MRC) AML 10 trial, which included children and adults up to 55 years of age, not only affords the opportunity to determine the independent prognostic significance of pretreatment cytogenetics in the context of large patient groups receiving comparable therapy, but also to address their impact on the outcome of subsequent transplantation procedures performed in first complete remission (CR). On the basis of response to induction treatment, relapse risk, and overall survival, three prognostic groups could be defined by cytogenetic abnormalities detected at presentation in comparison with the outcome of patients with normal karyotype. AML associated with t(8;21), t(15;17) or inv(16) predicted a relatively favorable outcome. Whereas in patients lacking these favorable changes, the presence of a complex karyotype, −5, del(5q), −7, or abnormalities of 3q defined a group with relatively poor prognosis. The remaining group of patients including those with 11q23 abnormalities, +8, +21, +22, del(9q), del(7q) or other miscellaneous structural or numerical defects not encompassed by the favorable or adverse risk groups were found to have an intermediate prognosis. The presence of additional cytogenetic abnormalities did not modify the outcome of patients with favorable cytogenetics. Subgroup analysis demonstrated that the three cytogenetically defined prognostic groups retained their predictive value in the context of secondary as well as de novo AML, within the pediatric age group and furthermore were found to be a key determinant of outcome from autologous or allogeneic bone marrow transplantation (BMT) in first CR. This study highlights the importance of diagnostic cytogenetics as an independent prognostic factor in AML, providing the framework for a stratified treatment approach of this disease, which has been adopted in the current MRC AML 12 trial.",TRUE,acronym
R11,Science,R31456,Applications of Artificial Neural Network for the Prediction of Flow Boiling Curves,S105461,R31457,Types,R31306,ANN,"An artificial neural network (ANN) was applied successfully to predict flow boiling curves. The databases used in the analysis are from the 1960's, including 1,305 data points which cover these parameter ranges: pressure P=100–1,000 kPa, mass flow rate G=40–500 kg/m2-s, inlet subcooling ΔTsub =0–35°C, wall superheat ΔTw = 10–300°C and heat flux Q=20–8,000kW/m2. The proposed methodology allows us to achieve accurate results, thus it is suitable for the processing of the boiling curve data. The effects of the main parameters on flow boiling curves were analyzed using the ANN. The heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effects on nucleate boiling curves. The transition boiling and film boiling heat fluxes will increase with an increase in the mass flow rate. Pressure plays a predominant role and improves heat transfer in all boiling regions except the film boiling region. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate region. The transient boiling curve lies below the corresponding steady boiling curve.",TRUE,acronym
R11,Science,R31460,Using artificial neural network to predict the pressure drop in a rotating packed bed,S105472,R31461,Types,R31306,ANN,"Although rotating beds are good equipments for intensified separations and multiphase reactions, but the fundamentals of its hydrodynamics are still unknown. In the wide range of operating conditions, the pressure drop across an irrigated bed is significantly lower than dry bed. In this regard, an approach based on artificial intelligence, that is, artificial neural network (ANN) has been proposed for prediction of the pressure drop across the rotating packed beds (RPB). The experimental data sets used as input data (280 data points) were divided into training and testing subsets. The training data set has been used to develop the ANN model while the testing data set was used to validate the performance of the trained ANN model. The results of the predicted pressure drop values with the experimental values show a good agreement between the prediction and experimental results regarding to some statistical parameters, for example (AARD% = 4.70, MSE = 2.0 × 10−5 and R2 = 0.9994). The designed ANN model can estimate the pressure drop in the countercurrent flow rotating packed bed with unexpected phenomena for higher pressure drop in dry bed than in wet bed. Also, the designed ANN model has been able to predict the pressure drop in a wet bed with the good accuracy with experimental.",TRUE,acronym
R11,Science,R26624,APTEEN: a hybrid protocol for efficient routing and comprehensive information retrieval in wireless,S83707,R26625,Protocol,R26622,APTEEN,"Wireless sensor networks with thousands of tiny sensor nodes, are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper, we propose a hybrid routing protocol (APTEEN) which allows for comprehensive information retrieval. The nodes in such a network not only react to time-critical situations, but also give an overall picture of the network at periodic intervals in a very energy efficient manner. Such a network enables the user to request past, present and future data from the network in the form of historical, one-time and persistent queries respectively. We evaluated the performance of these protocols and observe that these protocols are observed to outperform existing protocols in terms of energy consumption and longevity of the network.",TRUE,acronym
R11,Science,R28097,A fast trilateral filterbased adaptive support weight method for stereo matching,S91750,R28098,Taxonomy stage: Step,R27854,ASW,"Adaptive support weight (ASW) methods represent the state of the art in local stereo matching, while the bilateral filter-based ASW method achieves outstanding performance. However, this method fails to resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. In this paper, we introduce a novel trilateral filter (TF)-based ASW method that remedies such ambiguities by considering the possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model. We also present a recursive TF-based ASW method whose computational complexity is O(N) for the cost aggregation step, and O(N log2(N)) for boundary detection, where N denotes the input image size. This complexity is thus independent of the support window size. The recursive TF-based method is a nonlocal cost aggregation strategy. The experimental evaluation on the Middlebury benchmark shows that the proposed method, whose average error rate is 4.95%, outperforms other local methods in terms of accuracy. Equally, the average runtime of the proposed TF-based cost aggregation is roughly 260 ms on a 3.4-GHz Intel Core i7 CPU, which is comparable with state-of-the-art efficiency.",TRUE,acronym
R11,Science,R26704,A centralized energy-efficient routing protocol for wireless sensor networks,S85240,R26705,Protocol,R26702,BCDCP,"Wireless sensor networks consist of small battery powered devices with limited energy resources. Once deployed, the small sensor nodes are usually inaccessible to the user, and thus replacement of the energy source is not feasible. Hence, energy efficiency is a key design issue that needs to be enhanced in order to improve the life span of the network. Several network layer protocols have been proposed to improve the effective lifetime of a network with a limited energy supply. In this article we propose a centralized routing protocol called base-station controlled dynamic clustering protocol (BCDCP), which distributes the energy dissipation evenly among all sensor nodes to improve network lifetime and average energy savings. The performance of BCDCP is then compared to clustering-based schemes such as low-energy adaptive clustering hierarchy (LEACH), LEACH-centralized (LEACH-C), and power-efficient gathering in sensor information systems (PEGASIS). Simulation results show that BCDCP reduces overall energy consumption and improves network lifetime over its comparatives.",TRUE,acronym
R11,Science,R25185,Effect of Euler Number as a Feature in gender Recognition System from Offline HandwrittenSignature Using Neural Networks,S74833,R25186,Classifier,R25184,BPNN,"Recent growth of technology has also increased identification insecurity. Signature is a unique feature which is different for every other person, and each person can be identified using their own handwritten signature. Gender identification is one of key feature in case of human identification. In this paper, a feature based gender detection method has been proposed. The proposed framework takes handwritten signature as an input. Afterwards, several features are extracted from those images. The extracted features and their values are stored as data, which is further classified using Back Propagation Neural Network (BPNN). Gender classification is done using BPNN which is one of the most popular classifier. The proposed system is broken into two parts. In the first part, several features such as roundness, skewness, kurtosis, mean, standard deviation, area, Euler number, distribution density of black pixel, entropy, equi-diameter, connected component (cc) and perimeter were taken as feature. Then obtained features are divided into two categories. In the first category experimental feature set contains Euler number, whereas in the second category the obtained feature set excludes the same. BPNN is used to classify both types of feature sets to recognize the gender. Our study reports an improvement of 4.7% in gender classification system by the inclusion of Euler number as a feature.",TRUE,acronym
R11,Science,R25439,Reliability of Component Based systems- a Critical Survey,S76270,R25440,Area of use,R25407,CBS,"Software reliability is defined as the probability of the failure free operation of a software system for a specified period of time in a specified environment. Day by day software applications are growing more complex and with more emphasis on reuse. Component Based Software (CBS) applications have emerged. The focus of this paper is to provide an overview for the state of the art of Component Based Systems reliability estimation. In this paper, we discussed various approaches in terms of their scope, model, methods, technique and validation scheme. This comparison provides insight into determining the direction of future CBS reliability research.",TRUE,acronym
R11,Science,R25169,SVM-DSmT Combination for Off-Line Signature Verification,S74774,R25170,"Offline Database",L46513,CEDAR,"We propose in this work a signature verification system based on decision combination of off-line signatures for managing conflict provided by the SVM classifiers. The system is basically divided into three modules: i) Radon Transform-SVM, ii) Ridgelet Transform-SVM and iii) PCR5 combination rule based on the generalized belief functions of Dezert-Smarandache theory. The proposed framework allows combining the normalized SVM outputs and uses an estimation technique based on the dissonant model of Appriou to compute the belief assignments. Decision making is performed through likelihood ratio. Experiments are conducted on the well known CEDAR database using false rejection and false acceptance criteria. The obtained results show that the proposed combination framework improves the verification accuracy compared to individual SVM classifiers.",TRUE,acronym
R11,Science,R31182,Advanced models of cellular genetic algorithms evaluated on SAT,S104511,R31183,Name,L62485,CGA,"Cellular genetic algorithms (cGAs) are mainly characterized by their spatially decentralized population, in which individuals can only interact with their neighbors. In this work, we study the behavior of a large number of different cGAs when solving the well-known 3-SAT problem. These cellular algorithms differ in the policy of individuals update and the population shape, since these two features affect the balance between exploration and exploitation of the algorithm. We study in this work both synchronous and asynchronous cGAs, having static and dynamically adaptive shapes for the population. Our main conclusion is that the proposed adaptive cGAs outperform other more traditional genetic algorithms for a well known benchmark of 3-SAT.",TRUE,acronym
R11,Science,R32990,Chromosomal abnormalities in Philadelphia chromosome negative metaphases appearing during imatinib mesylate therapy in patients with newly diagnosed chronic myeloid leukemia in chronic phase,S114254,R32991,Disease,R32989,CML,"The development of chromosomal abnormalities (CAs) in the Philadelphia chromosome (Ph)-negative metaphases during imatinib (IM) therapy in patients with newly diagnosed chronic myeloid leukemia (CML) has been reported only anecdotally. We assessed the frequency and significance of this phenomenon among 258 patients with newly diagnosed CML in chronic phase receiving IM. After a median follow-up of 37 months, 21 (9%) patients developed 23 CAs in Ph-negative cells; excluding -Y, this incidence was 5%. Sixteen (70%) of all CAs were observed in 2 or more metaphases. The median time from start of IM to the appearance of CAs was 18 months. The most common CAs were -Y and + 8 in 9 and 3 patients, respectively. CAs were less frequent in young patients (P = .02) and those treated with high-dose IM (P = .03). In all but 3 patients, CAs were transient and disappeared after a median of 5 months. One patient developed acute myeloid leukemia (associated with - 7). At last follow-up, 3 patients died from transplantation-related complications, myocardial infarction, and progressive disease and 2 lost cytogenetic response. CAs occur in Ph-negative cells in a small percentage of patients with newly diagnosed CML treated with IM. In rare instances, these could reflect the emergence of a new malignant clone.",TRUE,acronym
R11,Science,R30159,"Investigating the impacts of energy consumption, real GDP, tourism and trade on CO2 emissions by accounting for cross-sectional dependence: a panel study of OECD countries",S99986,R30160,Methodology,R29644,DOLS,"The objective of this study is to analyse the long-run dynamic relationship of carbon dioxide emissions, real gross domestic product (GDP), the square of real GDP, energy consumption, trade and tourism under an Environmental Kuznets Curve (EKC) model for the Organization for Economic Co-operation and Development (OECD) member countries. Since we find the presence of cross-sectional dependence within the panel time-series data, we apply second-generation unit root tests, cointegration test and causality test which can deal with cross-sectional dependence problems. The cross-sectionally augmented Dickey-Fuller (CADF) and the cross-sectionally augmented Im-Pesaran-Shin (CIPS) unit root tests indicate that the analysed variables become stationary at their first differences. The Lagrange multiplier bootstrap panel cointegration test shows the existence of a long-run relationship between the analysed variables. The dynamic ordinary least squares (DOLS) estimation technique indicates that energy consumption and tourism contribute to the levels of gas emissions, while increases in trade lead to environmental improvements. In addition, the EKC hypothesis cannot be supported as the sign of coefficients on GDP and GDP2 is negative and positive, respectively. Moreover, the Dumitrescu–Hurlin causality tests exploit a variety of causal relationship between the analysed variables. The OECD countries are suggested to invest in improving energy efficiency, regulate necessary environmental protection policies for tourism sector in specific and promote trading activities through several types of encouragement act.",TRUE,acronym
R11,Science,R29056,Robust Discriminative Response Map Fitting with Constrained Local Models,S96189,R29057,Methods,R29055,DRMF,"We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameters updates. The experiments, conducted on Multi-PIE, XM2VTS and LFPW database, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.",TRUE,acronym
R11,Science,R26754,An Energy-Aware Distributed Unequal Clustering Protocol for Wireless Sensor Networks,S85595,R26755,Protocol,R26753,EADUC,"Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called “hot spot” or “energy hole” problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.",TRUE,acronym
R11,Science,R26773,An energy aware fuzzy unequal clustering algorithm for wireless sensor networks,S85742,R26774,Protocol,R26772,EAUCF,"In order to gather information more efficiently, wireless sensor networks (WSNs) are partitioned into clusters. The most of the proposed clustering algorithms do not consider the location of the base station. This situation causes hot spots problem in multi-hop WSNs. Unequal clustering mechanisms, which are designed by considering the base station location, solve this problem. In this paper, we introduce a fuzzy unequal clustering algorithm (EAUCF) which aims to prolong the lifetime of WSNs. EAUCF adjusts the cluster-head radius considering the residual energy and the distance to the base station parameters of the sensor nodes. This helps decreasing the intra-cluster work of the sensor nodes which are closer to the base station or have lower battery level. We utilize fuzzy logic for handling the uncertainties in cluster-head radius estimation. We compare our algorithm with some popular algorithms in literature, namely LEACH, CHEF and EEUC, according to First Node Dies (FND), Half of the Nodes Alive (HNA) and energy-efficiency metrics. Our simulation results show that EAUCF performs better than the other algorithms in most of the cases. Therefore, EAUCF is a stable and energy-efficient clustering algorithm to be utilized in any real time WSN application.",TRUE,acronym
R11,Science,R34187,An evaluation of the viability of a single monetary zone in ECOWAS,S118832,R34188,Countries,R34184,ECOWAS,"Currency convertibility and monetary integration activities of the Economic Community of West African States (ECOWAS) are directed at addressing the problems of multiple currencies and exchange rate changes that are perceived as stumbling blocks to regional integration. A real exchange rate (RER) variability model shows that ECOWAS is closer to a monetary union now than before. As expected, the implementation of structural adjustment programmes (SAPs) by various governments in the subregion has brought about a reasonable level of convergence. However, wide differences still exist between RER shocks facing CFA zone and non-CFA zone West African countries. Further convergence in economic policy and alternatives to dependence on revenues from taxes on international transactions are required for a stable region-wide monetary union in West Africa.",TRUE,acronym
R11,Science,R34190,"Monetary union in West Africa: who might gain, who might lose, and why?",S118846,R34191,Countries,R34184,ECOWAS,"We develop a model in which governments' financing needs exceed the socially optimal level because public resources are diverted to serve the narrow interests of the group in power. From a social welfare perspective, this results in undue pressure on the central bank to extract seigniorage. Monetary policy also suffers from an expansive bias, owing to the authorities' inability to precommit to price stability. Such a conjecture about the fiscal-monetary policy mix appears quite relevant in Africa, with deep implications for the incentives of fiscally heterogeneous countries to form a currency union. We calibrate the model to data for West Africa and use it to assess proposed ECOWAS monetary unions. Fiscal heterogeneity indeed appears critical in shaping regional currency blocs that would be mutually beneficial for all their members. In particular, Nigeria's membership in the configurations currently envisaged would not be in the interests of other ECOWAS countries unless it were accompanied by effective containment on Nigeria's financing needs.",TRUE,acronym
R11,Science,R34231,West African Single Currency and Competitiveness,S118996,R34232,Countries,R34184,ECOWAS,"This paper compares different nominal anchors to promote internal and external competitiveness in the case of a fixed exchange rate regime for the future single regional currency of the Economic Community of the West African States (ECOWAS). We use counterfactual analyses and estimate a model of dependent economy for small commodity exporting countries. We consider four foreign anchor currencies: the US dollar, the euro, the yen and the yuan. Our simulations show little support for a dominant peg in the ECOWAS area if they pursue several goals: maximizing the export revenues, minimizing their variability, stabilizing them and minimizing the real exchange rate misalignments from the fundamental value.",TRUE,acronym
R11,Science,R34245,Analysis of convergence criteria in a proposed monetary union: a study of the economic community of West African States,S119065,R34246,Countries,R34184,ECOWAS,"This study examines the processes of the monetary union of the Economic Community of West African States (ECOWAS). It takes a critical look at the convergence criteria and the various conditions under which they are to be met. Using the panel least square technique an estimate of the beta convergence was made for the period 2000-2008. The findings show that nearly all the explanatory variables have indirect effects on the income growth rate and that there tends to be convergence in income over time. The speed of adjustment estimated is 0.2% per year and the half-life is -346.92. Thus the economies can make up for half of the distance that separates them from their stationary state. From the findings, it was concluded that a well integrated economy could further the achievement of steady growth in these countries in the long run.",TRUE,acronym
R11,Science,R26640,An energy-efficient unequal clustering mechanism for wireless sensor networks,S85440,R26736,Protocol,R26735,EEUC,"Clustering provides an effective way for prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves an obvious improvement on the network lifetime",TRUE,acronym
R11,Science,R70595,"A Generalizable, Data-Driven Approach to Predict Daily Risk of Clostridium difficile Infection at Two Large Academic Health Centers",S335980,R70596,Features,L242756,EHR,"OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80–0.84) and 0.75 (95% CI, 0.73–0.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425–433",TRUE,acronym
R11,Science,R70608,Automated Detection of Postoperative Surgical Site Infections Using Supervised Methods with Electronic Health Record Data,S336069,R70609,Features,L242823,EHR,"The National Surgical Quality Improvement Project (NSQIP) is widely recognized as “the best in the nation” surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual yet expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP’s wider dissemination. In this work, we propose an automated surgical adverse events detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors’ burden.",TRUE,acronym
R11,Science,R151127,"Distributed Group Support Systems",S626077,R156012,Focus Group,L430840,EMO,"Distributed group support systems are likely to be widely used in the future as a means for dispersed groups of people to work together through computer networks. They combine characteristics of computer-mediated communication systems with the specialized tools and processes developed in the context of group decision support systems, to provide communications, a group memory, and tools and structures to coordinate the group process and analyze data. These tools and structures can take a wide variety of forms in order to best support computer-mediated interaction for different types of tasks and groups. This article summarizes five case studies of different distributed group support systems developed by the authors and their colleagues over the last decade to support different types of tasks and to accommodate fairly large numbers of participants (tens to hundreds). The case studies are placed within conceptual frameworks that aid in classifying and comparing such systems. The results of the case studies demonstrate that design requirements and the associated research issues for group support systems an be very different in the distributed environment compared to the decision room approach.",TRUE,acronym
R11,Science,R151135,The design of a dynamic emergency response management information system,S626121,R156016,Focus Group,L430880,EMO,"ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a ""Dynamic Emergency Response Management Information System"" (DERMIS) by identifying design premises resulting from the use of the ""Emergency Management Information System and Reference Index"" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers. SECTIONS 1. Introduction 2. Historical Insights about EMISARI 3. The emergency Response Atmosphere of OEP 4. Resulting Requirements for Emergency Response and Conceptual Design Specifics 4.1 Metaphors 4.2 Roles 4.3 Notifications 4.4 Context Visibility 4.5 Hypertext 5. Generalized Design Principles 6. Supporting Design Considerations 6.1 Resource Databases and Community Collaboration 6.2 Collective Memory 6.3 Online Communities of Experts 7. Conclusions and Final Observations 8. References 1. INTRODUCTION There have been, since 9/11, considerable efforts to propose improvements in the ability to respond to emergencies. However, the vast majority of these efforts have concentrated on infrastructure improvements to aid in mitigation of the impacts of either a man-made or natural disaster. In the area of communication and information systems to support the actual ongoing reaction to a disaster situation, the vast majority of the efforts have focused on the underlying technology to reliably support survivability of the underlying networks and physical facilities (Kunreuther and LernerLam 2002; Mork 2002). The fact that there were major failures of the basic technology and loss of the command center for 48 hours in the 9/11 event has made this an understandable result. The very workable commercial paging and digital mail systems supplied immediately afterwards by commercial firms (Michaels 2001; Vatis 2002) to the emergency response workers demonstrated that the correction of underlying technology is largely a process of setting integration standards and deciding to spend the necessary funds to update antiquated systems. …",TRUE,acronym
R11,Science,R34072,Migratory environmental history of the grey mullet Mugil cephalus as revealed by otolith Sr:Ca ratios,S118199,R34073,Analytical method,R33974,EPMA,"We used an electron probe microanalyzer (EPMA) to determine the migratory environmental history of the catadromous grey mullet Mugil cephalus from the Sr:Ca ratios in otoliths of 10 newly recruited juveniles collected from estuaries and 30 adults collected from estuaries, nearshore (coastal waters and bay) and offshore, in the adjacent waters off Taiwan. Mean (±SD) Sr:Ca ratios at the edges of adult otoliths increased significantly from 6.5 ± 0.9 × 10^-3 in estuaries and nearshore waters to 8.9 ± 1.4 × 10^-3 in offshore waters (p < 0.01), corresponding to increasing ambient salinity from estuaries and nearshore to offshore waters. The mean Sr:Ca ratios decreased significantly from the core (11.2 ± 1.2 × 10^-3) to the otolith edge (6.2 ± 1.4 × 10^-3) in juvenile otoliths (p < 0.001). The mullet generally spawned offshore and recruited to the estuary at the juvenile stage; therefore, these data support the use of Sr:Ca ratios in otoliths to reconstruct the past salinity history of the mullet. A life-history scan of the otolith Sr:Ca ratios indicated that the migratory environmental history of the mullet beyond the juvenile stage consists of 2 types. In Type 1 mullet, Sr:Ca ratios range between 4.0 × 10^-3 and 13.9 × 10^-3, indicating that they migrated between estuary and offshore waters but rarely entered the freshwater habitat. In Type 2 mullet, the Sr:Ca ratios decreased to a minimum value of 0.4 × 10^-3, indicating that the mullet migrated to a freshwater habitat. Most mullet beyond the juvenile stage migrated from estuary to offshore waters, but a few mullet less than 2 yr old may have migrated into a freshwater habitat. Most mullet collected nearshore and offshore were of Type 1, while those collected from the estuaries were a mixture of Types 1 and 2. The mullet spawning stock consisted mainly of Type 1 fish. The growth rates of the mullet were similar for Types 1 and 2. The migratory patterns of the mullet were more divergent than indicated by previous reports of their catadromous behavior.",TRUE,acronym
R11,Science,R151135,The design of a dynamic emergency response management information system,S626120,R156016,Technology,L430879,ERMIS,"ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a ""Dynamic Emergency Response Management Information System"" (DERMIS) by identifying design premises resulting from the use of the ""Emergency Management Information System and Reference Index"" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers. SECTIONS 1. Introduction 2. Historical Insights about EMISARI 3. The emergency Response Atmosphere of OEP 4. Resulting Requirements for Emergency Response and Conceptual Design Specifics 4.1 Metaphors 4.2 Roles 4.3 Notifications 4.4 Context Visibility 4.5 Hypertext 5. Generalized Design Principles 6. Supporting Design Considerations 6.1 Resource Databases and Community Collaboration 6.2 Collective Memory 6.3 Online Communities of Experts 7. Conclusions and Final Observations 8. References 1. INTRODUCTION There have been, since 9/11, considerable efforts to propose improvements in the ability to respond to emergencies. However, the vast majority of these efforts have concentrated on infrastructure improvements to aid in mitigation of the impacts of either a man-made or natural disaster. In the area of communication and information systems to support the actual ongoing reaction to a disaster situation, the vast majority of the efforts have focused on the underlying technology to reliably support survivability of the underlying networks and physical facilities (Kunreuther and LernerLam 2002; Mork 2002). The fact that there were major failures of the basic technology and loss of the command center for 48 hours in the 9/11 event has made this an understandable result. The very workable commercial paging and digital mail systems supplied immediately afterwards by commercial firms (Michaels 2001; Vatis 2002) to the emergency response workers demonstrated that the correction of underlying technology is largely a process of setting integration standards and deciding to spend the necessary funds to update antiquated systems. …",TRUE,acronym
R11,Science,R137023,Increased attention for computer-tailored health communications: an event-related potential study,S541126,R137024,has_method,L381069,ERP,"The authors tested whether individually tailored health communications receive more attention from the reader than nontailored health communications in a randomized, controlled trial among student volunteers (N = 24). They used objective measures of attention allocation during the message exposure. In a between-subjects design, participants had to read tailored or nontailored nutrition education messages and at the same time had to pay attention to specific odd auditory stimuli in a sequence of frequent auditory stimuli (odd ball paradigm). The amount of attention allocation was measured by recording event-related potentials (ERPs; i.e., N100 and P300 ERPs) and reaction times. For the tailored as opposed to the nontailored group, results revealed larger amplitudes for the N100 effect, smaller amplitudes for the P300 effect, and slower reaction times. Resource allocation theory and these results suggest that those in the tailored group allocated more attention resources to the nutrition message than those in the nontailored group.",TRUE,acronym
R11,Science,R25752,An effective Fuzzy Healthy Association Rule Mining Algorithm (FHARM),S78272,R25753,Algorithm name,L49033,FHARM,"In this paper we propose an effective and efficient new Fuzzy Healthy Association Rule Mining Algorithm (FHARM) that produces more interesting and quality rules by introducing new quality measures. In this approach, edible attributes are filtered from transactional input data by projections and are then converted to Required Daily Allowance (RDA) numeric values. The averaged RDA database is then converted to a fuzzy database that contains normalized fuzzy attributes comprising different fuzzy sets. Analysis of nutritional information is then performed from the converted normalized fuzzy transactional database. The paper presents various performance tests and interestingness measures to demonstrate the effectiveness of the approach and proposes further work on evaluating our approach with other generic fuzzy association rule algorithms.",TRUE,acronym
R11,Science,R26582,Design and analysis of a fast local clustering service for wireless sensor networks,S83964,R26666,Protocol,R26580,FLOC,"We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sized clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m ≥ 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within at most m units. By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of wireless radio-model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering.",TRUE,acronym
R11,Science,R30284,"The Relationship between CO2 Emission, Energy Consumption, Urbanization and Trade Openness for Selected CEECs",S100368,R30285,Methodology,R29555,FMOLS,"This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 for selected Central and Eastern European Countries (CEECs), including, Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991–2011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a %1.0863 increase in CO2 emissions. Results for the existence and direction of panel Vector Error Correction Model (VECM) Granger causality method show that there is bidirectional causal relationship between CO2 emissions - real GDP and energy consumption-real GDP as well.",TRUE,acronym
R11,Science,R28074,Hardware implementation of a full HD real-time disparity estimation algorithm,S91663,R28075,Computational platform,L56532,FPGA,"Disparity estimation is a common task in stereo vision and usually requires a high computational effort. High resolution disparity maps are necessary to provide a good image quality on autostereoscopic displays which deliver stereo content without the need for 3D glasses. In this paper, an FPGA architecture for a disparity estimation algorithm is proposed, that is capable of processing high-definition content in real-time. The resulting architecture is efficient in terms of power consumption and can be easily scaled to support higher resolutions.",TRUE,acronym
R11,Science,R25191,GMM For Offline Signature Forgery Detection,S74853,R25192,Classifier,R25190,GMM,"As signature continues to play a crucial part in personal identification for number of applications including financial transaction, an efficient signature authentication system becomes more and more important. Various researches in the field of signature authentication has been dynamically pursued for many years and its extent is still being explored. Signature verification is the process which is carried out to determine whether a given signature is genuine or forged. It can be distinguished into two types such as the Online and the Offline. In this paper we presented the Offline signature verification system and extracted some new local and geometric features like QuadSurface feature, Area ratio, Distance ratio etc. For this we have taken some genuine signatures from 5 different persons and extracted the features from all of the samples after proper preprocessing steps. The training phase uses Gaussian Mixture Model (GMM) technique to obtain a reference model for each signature sample of a particular user. By computing Euclidian distance between reference signature and all the training sets of signatures, acceptance range is defined. If the Euclidian distance of a query signature is within the acceptance range then it is detected as an authenticated signature else, a forged signature.",TRUE,acronym
R11,Science,R30280,Estimating the relationship between economic growth and environmental quality for the brics economies - a dynamic panel data approach,S100352,R30281,Methodology,R25190,GMM,"It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience an unprecedented economic growth. This massive economic growth would definitely have a detrimental impact on the environment since these economies, like others, would extract their environmental and natural resource to a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of Environmental Kuznets Curve (EKC) Hypothesis - an inverted U shape relationship between income and emission per capita, suggest BRICS economies need not bother too much about environmental quality while growing because growth would eventually take care of the environment once a certain level of per capita income is achieved. In this backdrop, the present study makes an attempt to estimate EKC type relationship, if any, between income and emission in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts fixed effect (FE) panel data model to control time constant country specific effects, and then uses Generalized Method of Moments (GMM) approach for dynamic panel data to address endogeneity of income variable and dynamism in emission per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emission. The fixed effect model shows a significant EKC type relation between income and emission supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emission is actually U shaped with the turning point being out of sample. This out of sample turning point indicates that emission has been growing monotonically with growth in income. Factors like, net energy imports and share of industrial output in GDP are found to be significant and having detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in FE model. Capital account convertibility shows significant and negative impact on the environment irrespective of models used. The monotonically increasing relationship between income and emission suggests the BRICS economies must adopt some efficiency oriented action plan so that they can grow without putting much pressure on the environment. These findings can have important policy implications as BRICS countries are mainly depending on these factors for their growth but at the same time they can cause serious threat to the environment.",TRUE,acronym
R11,Science,R34240,"Are proposed African monetary unions optimal currency areas? Real, monetary and fiscal policy convergence analysis",S119243,R34280,Methodology,L72031,GMM,"Purpose – A spectre is hunting embryonic African monetary zones: the EMU crisis. This paper assesses real, monetary and fiscal policy convergence within the proposed WAM and EAM zones. The introduction of common currencies in West and East Africa is facing stiff challenges in the timing of monetary convergence, the imperative of central bankers to apply common modeling and forecasting methods of monetary policy transmission, as well as the requirements of common structural and institutional characteristics among candidate states. Design/methodology/approach – In the analysis: monetary policy targets inflation and financial dynamics of depth, efficiency, activity and size; real sector policy targets economic performance in terms of GDP growth at macro and micro levels; while, fiscal policy targets debt-to-GDP and deficit-to-GDP ratios. A dynamic panel GMM estimation with data from different non-overlapping intervals is employed. The implied rate of convergence and the time required to achieve full (100%) convergence are then computed from the estimations. Findings – Findings suggest overwhelming lack of convergence: (1) initial conditions for financial development are different across countries; (2) fundamental characteristics as common monetary policy initiatives and IMF backed financial reform programs are implemented differently across countries; (3) there is remarkable evidence of cross-country variations in structural characteristics of macroeconomic performance; (4) institutional cross-country differences could also be responsible for the deficiency in convergence within the potential monetary zones; (5) absence of fiscal policy convergence and no potential for eliminating idiosyncratic fiscal shocks due to business cycle incoherence. Practical implications – As a policy implication, heterogeneous structural and institutional characteristics across countries are giving rise to different levels and patterns of financial intermediary development. Thus, member states should work towards harmonizing cross-country differences in structural and institutional characteristics that hamper the effectiveness of convergence in monetary, real and fiscal policies. This could be done by stringently monitoring the implementation of existing common initiatives and/or the adoption of new reforms programs. Originality/value – It is one of the few attempts to investigate the issue of convergence within the proposed WAM and EAM unions.",TRUE,acronym
R11,Science,R26586,"Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach",S85001,R26670,Protocol,R26584,HEED,"Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.",TRUE,acronym
R11,Science,R31626,Control of a batch polymerization system using hybrid neural network - First principle model,S105956,R31627,Types,R31612,HNN,"In this work, the utilization of neural network in hybrid with first principle models for modelling and control of a batch polymerization process was investigated. Following the steps of the methodology, hybrid neural network (HNN) forward models and HNN inverse model of the process were first developed and then the performance of the model in direct inverse control strategy and internal model control (IMC) strategy was investigated. For comparison purposes, the performance of conventional neural network and PID controller in control was compared with the proposed HNN. The results show that HNN is able to control perfectly for both set points tracking and disturbance rejection studies.",TRUE,acronym
R11,Science,R151214,"Organizational Resilience and Using Information and Communication Technologies to Rebuild Communication Structures",S626376,R156056,Technology,L431095,ICT,"This study employs the perspective of organizational resilience to examine how information and communication technologies (ICTs) were used by organizations to aid in their recovery after Hurricane Katrina. In-depth interviews enabled longitudinal analysis of ICT use. Results showed that organizations enacted a variety of resilient behaviors through adaptive ICT use, including information sharing, (re)connection, and resource acquisition. Findings emphasize the transition of ICT use across different stages of recovery, including an anticipated stage. Key findings advance organizational resilience theory with an additional source of resilience, external availability. Implications and contributions to the literature of ICTs in disaster contexts and organizational resilience are discussed.",TRUE,acronym
R11,Science,R32573,An Enhanced Spatio-spectral Template for Automatic Small Recreational Vessel Detection,S110860,R32574,Satellite sensor,R32555,IKONOS,"This paper examines the performance of a spatiospectral template on Ikonos imagery to automatically detect small recreational boats. The spatiospectral template is utilized and then enhanced through the use of a weighted Euclidean distance metric adapted from the Mahalanobis distance metric. The aim is to assist the Canadian Coast Guard in gathering data on recreational boating for the modeling of search and rescue incidence risk. To test the detection accuracy of the enhanced spatiospectral template, a dataset was created by gathering position and attribute data for 53 recreational vessel targets purposely moored for this research within Cadboro Bay, British Columbia, Canada. The Cadboro Bay study site containing the targets was imaged using Ikonos. Overall detection accuracy was 77%. Targets were broken down into 2 categories: 1) Category A-less than 6 m in length, and Category B-more than 6 m long. The detection rate for Category B targets was 100%, while the detection rate for Category A targets was 61%. It is important to note that some Category A targets were intentionally selected for their small size to test the detection limits of the enhanced spatiospectral template. The smallest target detected was 2.2 m long and 1.1 m wide. The analysis also revealed that the ability to detect targets between 2.2 and 6 m long was diminished if the target was dark in color.",TRUE,acronym
R11,Science,R31413,Online prediction of polymer product quality in an industrial reactor using recurrent neural networks,S105335,R31414,Types,R31342,IRN,"In this paper, internally recurrent neural networks (IRNN) are used to predict a key polymer product quality variable from an industrial polymerization reactor. IRNN are selected as the modeling tools for two reasons: 1) over the wide range of operating regions required to make multiple polymer grades, the process is highly nonlinear; and 2) the finishing of the polymer product after it leaves the reactor imparts significant dynamics to the process by ""mixing"" effects. IRNN are shown to be very effective tools for predicting key polymer quality variables from secondary measurements taken around the reactor.",TRUE,acronym
R11,Science,R26727,LCM: A Link-Aware Clustering Mechanism for Energy-Efficient Routing in Wireless Sensor Networks,S85409,R26728,Protocol,R26726,LCM,"In wireless sensor networks, nodes in the area of interest must report sensing readings to the sink, and this report always satisfies the report frequency required by the sink. This paper proposes a link-aware clustering mechanism, called LCM, to determine an energy-efficient and reliable routing path. The LCM primarily considers node status and link condition, and uses a novel clustering metric called the predicted transmission count (PTX), to evaluate the qualification of nodes for clusterheads and gateways to construct clusters. Each clusterhead or gateway candidate depends on the PTX to derive its priority, and the candidate with the highest priority becomes the clusterhead or gateway. Simulation results validate that the proposed LCM significantly outperforms the clustering mechanisms using random selection and by considering only link quality and residual energy in the packet delivery ratio, energy consumption, and delivery latency.",TRUE,acronym
R11,Science,R26554,Energy-efficient communication protocol for wireless microsensor networks,S83891,R26657,Protocol,R26551,LEACH,"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.",TRUE,acronym
R11,Science,R26715,Mobility-based clustering protocol for wireless sensor networks with mobile nodes,S85324,R26716,Protocol,R26713,MBC,"In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster-head based on its residual energy and mobility. A non-cluster-head node aims at its link stability with a cluster head during clustering according to the estimated connection time. Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple access (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a joint request message to join in a new cluster and avoid more packet loss when it has lost or is going to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce the packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment.",TRUE,acronym
R11,Science,R32987,New insights into the prognostic impact of the karyotype in MDS and correlation with subtypes: evidence from a core dataset of 2124 patients,S114815,R33069,Disease,R32986,MDS,"We have generated a large, unique database that includes morphologic, clinical, cytogenetic, and follow-up data from 2124 patients with myelodysplastic syndromes (MDSs) at 4 institutions in Austria and 4 in Germany. Cytogenetic analyses were successfully performed in 2072 (97.6%) patients, revealing clonal abnormalities in 1084 (52.3%) patients. Numeric and structural chromosomal abnormalities were documented for each patient and subdivided further according to the number of additional abnormalities. Thus, 684 different cytogenetic categories were identified. The impact of the karyotype on the natural course of the disease was studied in 1286 patients treated with supportive care only. Median survival was 53.4 months for patients with normal karyotypes (n = 612) and 8.7 months for those with complex anomalies (n = 166). A total of 13 rare abnormalities were identified with good (+1/+1q, t(1q), t(7q), del(9q), del(12p), chromosome 15 anomalies, t(17q), monosomy 21, trisomy 21, and -X), intermediate (del(11q), chromosome 19 anomalies), or poor (t(5q)) prognostic impact, respectively. The prognostic relevance of additional abnormalities varied considerably depending on the chromosomes affected. For all World Health Organization (WHO) and French-American-British (FAB) classification system subtypes, the karyotype provided additional prognostic information. Our analyses offer new insights into the prognostic significance of rare chromosomal abnormalities and specific karyotypic combinations in MDS.",TRUE,acronym
R11,Science,R31212,MLGA: a multilevel cooperative genetic algorithm,S104675,R31213,Name,L62595,MLGA,"This paper incorporate the multilevel selection (MLS) theory into the genetic algorithm. Based on this theory, a Multilevel Cooperative Genetic Algorithm (MLGA) is presented. In MLGA, a species is subdivided in a set of populations, each population is subdivided in groups, and evolution occurs at two levels so called individual and group level. A fast population dynamics occurs at individual level. At this level, selection occurs between individuals of the same group. The popular genetic operators such as mutation and crossover are applied within groups. A slow population dynamics occurs at group level. At this level, selection occurs between groups of a population. A group level operator so called colonization is applied between groups in which a group is selected as extinct, and replaced by offspring of a colonist group. We used a set of well known numerical functions in order to evaluate performance of the proposed algorithm. The results showed that the MLGA is robust, and provides an efficient way for numerical function optimization.",TRUE,acronym
R11,Science,R26602,WSN16-5: Distributed Formation of Overlapping Multi-hop Clusters in Wireless Sensor Networks,S83980,R26667,Protocol,R26600,MOCA,"Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA; a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters.",TRUE,acronym
R11,Science,R28826,A Multi-objective Approach to Testing Resource Allocation in Modular Software Systems,S95116,R28827,Algorithm(s),R28825,MODE,"Nowadays, as the software systems become increasingly large and complex, the problem of allocating the limited testing-resource during the testing phase has become more and more difficult. In this paper, we propose to solve the testing-resource allocation problem (TRAP) using multi-objective evolutionary algorithms. Specifically, we formulate TRAP as two multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. In the second formulation, the total testing-resource consumed is also taken into account as the third goal. Two multi-objective evolutionary algorithms, non-dominated sorting genetic algorithm II (NSGA2) and multi-objective differential evolution algorithms (MODE), are applied to solve the TRAP in the two scenarios. This is the first time that the TRAP is explicitly formulated and solved by multi-objective evolutionary approaches. Advantages of our approaches over the state-of-the-art single-objective approaches are demonstrated on two parallel-series modular software models.",TRUE,acronym
R11,Science,R32625,Ship detection in MODIS imagery,S111172,R32626,Satellite sensor,R32623,MODIS,Understanding the capabilities of satellite sensors with spatial and spectral characteristics similar to those of MODIS for Maritime Domain Awareness (MDA) is of importance because of the upcoming NPOES with 100 minutes revisit time carrying the MODIS-like VIIRS multispectral imaging sensor. This paper presents an experimental study of ship detection using MODIS imagery. We study the use of ship signatures such as contaminant plumes in clouds and the spectral contrast between the ship and the sea background for detection. Results show the potential and challenges for such approach in MDA.,TRUE,acronym
R11,Science,R28880,Single and Multi Objective Genetic Programming for Software Development Effort Estimation,S95354,R28881,Algorithm(s),R28879,MOGP,"The idea of exploiting Genetic Programming (GP) to estimate software development effort is based on the observation that the effort estimation problem can be formulated as an optimization problem. Indeed, among the possible models, we have to identify the one providing the most accurate estimates. To this end a suitable measure to evaluate and compare different models is needed. However, in the context of effort estimation there does not exist a unique measure that allows us to compare different models but several different criteria (e.g., MMRE, Pred(25), MdMRE) have been proposed. Aiming at getting an insight on the effects of using different measures as fitness function, in this paper we analyzed the performance of GP using each of the five most used evaluation criteria. Moreover, we designed a Multi-Objective Genetic Programming (MOGP) based on Pareto optimality to simultaneously optimize the five evaluation measures and analyzed whether MOGP is able to build estimation models more accurate than those obtained using GP. The results of the empirical analysis, carried out using three publicly available datasets, showed that the choice of the fitness function significantly affects the estimation accuracy of the models built with GP and the use of some fitness functions allowed GP to get estimation accuracy comparable with the ones provided by MOGP.",TRUE,acronym
R11,Science,R26766,Multihop Routing Protocol with Unequal Clustering for Wireless Sensor Networks,S85695,R26767,Protocol,R26765,MRPUC,"In order to prolong the lifetime of wireless sensor networks, this paper presents a multihop routing protocol with unequal clustering (MRPUC). On the one hand, cluster heads deliver the data to the base station with relay to reduce energy consumption. On the other hand, MRPUC uses many measures to balance the energy of nodes. First, it selects the nodes with more residual energy as cluster heads, and clusters closer to the base station have smaller sizes to preserve some energy during intra-cluster communication for inter-cluster packets forwarding. Second, when regular nodes join clusters, they consider not only the distance to cluster heads but also the residual energy of cluster heads. Third, cluster heads choose those nodes as relay nodes, which have minimum energy consumption for forwarding and maximum residual energy to avoid dying earlier. Simulation results show that MRPUC performs much better than similar protocols.",TRUE,acronym
R11,Science,R32995,"Chromosomal abnormalities in untreated patients with non-Hodgkin’s lymphoma: associations with histology, clinical characteristics, and treatment outcome. The Nebraska Lymphoma Study Group",S114275,R32996,Disease,R32994,NHL,"We describe the chromosomal abnormalities found in 104 previously untreated patients with non-Hodgkin's lymphoma (NHL) and the correlations of these abnormalities with disease characteristics. The cytogenetic method used was a 24- to 48-hour culture, followed by G-banding. Several significant associations were discovered. A trisomy 3 was correlated with high-grade NHL. In the patients with an immunoblastic NHL, an abnormal chromosome no. 3 or 6 was found significantly more frequently. As previously described, a t(14;18) was significantly correlated with a follicular growth pattern. Abnormalities on chromosome no. 17 were correlated with a diffuse histology and a shorter survival. A shorter survival was also correlated with a +5, +6, +18, all abnormalities on chromosome no. 5, or involvement of breakpoint 14q11-12. In a multivariate analysis, these chromosomal abnormalities appeared to be independent prognostic factors and correlated with survival more strongly than any traditional prognostic variable. Patients with a t(11;14)(q13;q32) had an elevated lactate dehydrogenase (LDH). Skin infiltration was correlated with abnormalities on 2p. Abnormalities involving breakpoints 6q11-16 were correlated with B symptoms. Patients with abnormalities involving breakpoints 3q21-25 and 13q21-24 had more frequent bulky disease. The correlations of certain clinical findings with specific chromosomal abnormalities might help unveil the pathogenetic mechanisms of NHL and tailor treatment regimens.",TRUE,acronym
R11,Science,R29843,"An econometric study of carbon dioxide (CO2) emissions, energy consumption, and economic growth of Pakistan",S99036,R29844,Methodology,R27125,OLS,"Purpose – The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capital carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator.Design/methodology/approach – The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like ADF Unit Root Johansen Co‐integration VECM and Granger causality tests.Findings – The Granger causality test shows that there is a long term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, i...",TRUE,acronym
R11,Science,R32565,Object oriented ship detection from VHR satellite images,S110803,R32566,Band,R32556,PAN,"Within today's security environment and with increasing worldwide travel and transport of dangerous goods the need of vessel traffic services, ship routing and monitoring of ship movements on sea and along coastlines becomes more time consuming and an important responsibility for coastal authorities. This paper describes the architecture of a ship detection prototype based on an object-oriented methodology to support these monitoring tasks. The system’s architecture comprises a fully-automatic coastline detection tool, a tool for fully or semiautomatic ship detection in off-shore areas and a semi-automatic tool for ship detection within harbour-areas. Its core is based on the client-server environment of the first object-oriented image analysis software on the market named eCognition. The described ship detection system has been developed for panchromatic VHR satellite image data and has proven its capabilities on Ikonos and QuickBird imagery under different weather conditions and for various regions of the world. With the capability of eCognition to combine raster data with imported thematic data it is possible to work with available non-remote sensing based data e.g. detailed harbour GIS information in ESRI shape file format or weather information, which can be attached to the results. Finally the system’s ability of generating customized reports in HTML format and the possibility of exporting results in standard raster or vector format offers new opportunities in the direction of an interoperability of technology where a great number of heterogeneous networks and operators are involved in the surveillance process.",TRUE,acronym
R11,Science,R32568,Ship detection and classification from overhead imagery,S110822,R32569,Band,R32556,PAN,"This paper presents a sequence of image-processing algorithms suitable for detecting and classifying ships from nadir panchromatic electro-optical imagery. Results are shown of techniques for overcoming the presence of background sea clutter, sea wakes, and non-uniform illumination. Techniques are presented to measure vessel length, width, and direction-of-motion. Mention is made of the additional value of detecting identifying features such as unique superstructure, weaponry, fuel tanks, helicopter landing pads, cargo containers, etc. Various shipping databases are then described as well as a discussion of how measured features can be used as search parameters in these databases to pull out positive ship identification. These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis.",TRUE,acronym
R11,Science,R32575,Enhanced ship detection from overhead imagery,S110879,R32576,Band,R32556,PAN,"In the authors' previous work, a sequence of image-processing algorithms was developed that was suitable for detecting and classifying ships from panchromatic Quickbird electro-optical satellite imagery. Presented in this paper are several new algorithms, which improve the performance and enhance the capabilities of the ship detection software, as well as an overview on how land masking is performed. Specifically, this paper describes the new algorithms for enhanced detection including for the reduction of false detects such as glint and clouds. Improved cloud detection and filtering algorithms are described as well as several texture classification algorithms are used to characterize the background statistics of the ocean texture. These detection algorithms employ both cloud and glint removal techniques, which we describe. Results comparing ship detection with and without these false detect reduction algorithms are provided. These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis.",TRUE,acronym
R11,Science,R32578,Using SPOT-5 HRG Data in Panchromatic Mode for Operational Detection of Small Ships in Tropical Area,S110897,R32579,Band,R32556,PAN,"Nowadays, there is a growing interest in applications of space remote sensing systems for maritime surveillance which includes among others traffic surveillance, maritime security, illegal fisheries survey, oil discharge and sea pollution monitoring. Within the framework of several French and European projects, an algorithm for automatic ship detection from SPOT–5 HRG data was developed to complement existing fishery control measures, in particular the Vessel Monitoring System. The algorithm focused on feature–based analysis of satellite imagery. Genetic algorithms and Neural Networks were used to deal with the feature–borne information. Based on the described approach, a first prototype was designed to classify small targets such as shrimp boats and tested on panchromatic SPOT–5, 5–m resolution product taking into account the environmental and fishing context. The ability to detect shrimp boats with satisfactory detection rates is an indicator of the robustness of the algorithm. Still, the benchmark revealed problems related to increased false alarm rates on particular types of images with a high percentage of cloud cover and a sea cluttered background.",TRUE,acronym
R11,Science,R32580,Fully automated procedure for ship detection using optical satellite imagery,S110915,R32581,Band,R32556,PAN,"Ship detection from remote sensing imagery is a crucial application for maritime security which includes among others traffic surveillance, protection against illegal fisheries, oil discharge control and sea pollution monitoring. In the framework of a European integrated project GMES-Security/LIMES, we developed an operational ship detection algorithm using high spatial resolution optical imagery to complement existing regulations, in particular the fishing control system. The automatic detection model is based on statistical methods, mathematical morphology and other signal processing techniques such as the wavelet analysis and Radon transform. This paper presents current progress made on the detection model and describes the prototype designed to classify small targets. The prototype was tested on panchromatic SPOT 5 imagery taking into account the environmental and fishing context in French Guiana. In terms of automatic detection of small ship targets, the proposed algorithm performs well. Its advantages are manifold: it is simple and robust, but most of all, it is efficient and fast, which is a crucial point in performance evaluation of advanced ship detection strategies.",TRUE,acronym
R11,Science,R32608,A complete processing chain for ship detection using optical satellite imagery,S111058,R32609,Band,R32556,PAN,"Ship detection from remote sensing imagery is a crucial application for maritime security, which includes among others traffic surveillance, protection against illegal fisheries, oil discharge control and sea pollution monitoring. In the framework of a European integrated project Global Monitoring for Environment and Security (GMES) Security/Land and Sea Integrated Monitoring for European Security (LIMES), we developed an operational ship detection algorithm using high spatial resolution optical imagery to complement existing regulations, in particular the fishing control system. The automatic detection model is based on statistical methods, mathematical morphology and other signal-processing techniques such as the wavelet analysis and Radon transform. This article presents current progress made on the detection model and describes the prototype designed to classify small targets. The prototype was tested on panchromatic Satellite Pour l'Observation de la Terre (SPOT) 5 imagery taking into account the environmental and fishing context in French Guiana. In terms of automatic detection of small ship targets, the proposed algorithm performs well. Its advantages are manifold: it is simple and robust, but most of all, it is efficient and fast, which is a crucial point in performance evaluation of advanced ship detection strategies.",TRUE,acronym
R11,Science,R32610,Ship detection in satellite imagery using rank-order grayscale hit-or-miss transforms,S111075,R32611,Band,R32556,PAN,"Ship detection from satellite imagery is something that has great utility in various communities. Knowing where ships are and their types provides useful intelligence information. However, detecting and recognizing ships is a difficult problem. Existing techniques suffer from too many false-alarms. We describe approaches we have taken in trying to build ship detection algorithms that have reduced false alarms. Our approach uses a version of the grayscale morphological Hit-or-Miss transform. While this is well known and used in its standard form, we use a version in which we use a rank-order selection for the dilation and erosion parts of the transform, instead of the standard maximum and minimum operators. This provides some slack in the fitting that the algorithm employs and provides a method for tuning the algorithm's performance for particular detection problems. We describe our algorithms, show the effect of the rank-order parameter on the algorithm's performance and illustrate the use of this approach for real ship detection problems with panchromatic satellite imagery.",TRUE,acronym
R11,Science,R32614,Characterization of a Bayesian Ship Detection Method in Optical Satellite Images,S111114,R32615,Band,R32556,PAN,"This letter presents the experimental results obtained for an automatic predetection of small ships (about 5 × 5 pixels) in high-resolution optical satellite images. Our images are panchromatic SPOT 5 images, whose resolution is 5 m per pixel. Our detection method is based on the Bayesian decision theory and does not need any preprocessing. Here, we describe the method precisely and the tuning of its two parameters, namely, the size of the analysis window and the threshold used to make a decision. Both are fixed from the receiver operating-characteristic curves that we draw from different sets of tests. Finally, the overall results of the method are given for a set of images, as close as possible to the operational conditions.",TRUE,acronym
R11,Science,R32660,A visual search inspired computational model for ship detection in optical satellite images,S111399,R32661,Band,R32556,PAN,"In this letter, we propose a novel computational model for automatic ship detection in optical satellite images. The model first selects salient candidate regions across entire detection scene by using a bottom-up visual attention mechanism. Then, two complementary types of top-down cues are employed to discriminate the selected ship candidates. Specifically, in addition to the detailed appearance analysis of candidates, a neighborhood similarity-based method is further exploited to characterize their local context interactions. Furthermore, the framework of our model is designed in a multiscale and hierarchical manner which provides a plausible approximation to a visual search process and reasonably distributes the computational resources. Experiments over panchromatic SPOT5 data prove the effectiveness and computational efficiency of the proposed model.",TRUE,acronym
R11,Science,R32714,Ship detection in high-resolution optical imagery based on anomaly detector and local shape feature,S111771,R32715,Band,R32556,PAN,"Ship detection in high-resolution optical imagery is a challenging task due to the variable appearances of ships and background. This paper aims at further investigating this problem and presents an approach to detect ships in a “coarse-to-fine” manner. First, to increase the separability between ships and background, we concentrate on the pixels in the vicinities of ships. We rearrange the spatially adjacent pixels into a vector, transforming the panchromatic image into a “fake” hyperspectral form. Through this procedure, each produced vector is endowed with some contextual information, which amplifies the separability between ships and background. Afterward, for the “fake” hyperspectral image, a hyperspectral algorithm is applied to extract ship candidates preliminarily and quickly by regarding ships as anomalies. Finally, to validate real ships out of ship candidates, an extra feature is provided with histograms of oriented gradients (HOGs) to generate a hypothesis using AdaBoost algorithm. This extra feature focuses on the gray values rather than the gradients of an image and includes some information generated by very near but not closely adjacent pixels, which can reinforce HOG to some degree. Experimental results on real database indicate that the hyperspectral algorithm is robust, even for the ships with low contrast. In addition, in terms of the shape of ships, the extended HOG feature turns out to be better than HOG itself as well as some other features such as local binary pattern.",TRUE,acronym
R11,Science,R32723,Ship detection from optical satellite images based on sea surface analysis,S111836,R32724,Band,R32556,PAN,"Automatic ship detection in high-resolution optical satellite images with various sea surfaces is a challenging task. In this letter, we propose a novel detection method based on sea surface analysis to solve this problem. The proposed method first analyzes whether the sea surface is homogeneous or not by using two new features. Then, a novel linear function combining pixel and region characteristics is employed to select ship candidates. Finally, Compactness and Length-width ratio are adopted to remove false alarms. Specifically, based on the sea surface analysis, the proposed method cannot only efficiently block out no-candidate regions to reduce computational time, but also automatically assign weights for candidate selection function to optimize the detection performance. Experimental results on real panchromatic satellite images demonstrate the detection accuracy and computational efficiency of the proposed method.",TRUE,acronym
R11,Science,R32743,Ship detection from high-resolution imagery based on land masking and cloud filtering,S111969,R32744,Band,R32556,PAN,"High resolution satellite images play an important role in target detection application presently. This article focuses on the ship target detection from the high resolution panchromatic images. Taking advantage of geographic information such as the coastline vector data provided by NOAA Medium Resolution Coastline program, the land region is masked which is a main noise source in ship detection process. After that, the algorithm tries to deal with the cloud noise which appears frequently in the ocean satellite images, which is another reason for false alarm. Based on the analysis of cloud noise's feature in frequency domain, we introduce a windowed noise filter to get rid of the cloud noise. With the help of morphological processing algorithms adapted to target detection, we are able to acquire ship targets in fine shapes. In addition, we display the extracted information such as length and width of ship targets in a user-friendly way i.e. a KML file interpreted by Google Earth.",TRUE,acronym
R11,Science,R32752,Unsupervised ship detection based on saliency and S-HOG descriptor from optical satellite images,S112035,R32753,Band,R32556,PAN,"With the development of high-resolution imagery, ship detection in optical satellite images has attracted a lot of research interest because of the broad applications in fishery management, vessel salvage, etc. Major challenges for this task include cloud, wave, and wake clutters, and even the variability of ship sizes. In this letter, we propose an unsupervised ship detection method toward overcoming these existing issues. Visual saliency, which focuses on highlighting salient signals from scenes, is applied to extract candidate regions followed by a homogeneous filter presented to confirm suspected ship targets with complete profiles. Then, a novel descriptor, ship histogram of oriented gradient, which characterizes the gradient symmetry of ship sides, is provided to discriminate real ships. Experimental results on numerous panchromatic satellite images demonstrate the good performance of our method compared to state-of-the-art methods.",TRUE,acronym
R11,Science,R32847,Fast ship detection from optical satellite images based on ship distribution probability analysis,S112617,R32848,Band,R32556,PAN,"Automatic ship detection from optical satellite images remains a tough task. In this paper, a novel method of ship detection from optical satellites is proposed by analyzing the ship distribution probability. First, an anomaly detection model is constructed by the sea cluster histogram model; then, the ship distribution based on the ship safety navigational criterion is analyzed to obtain the ship candidates, and obvious non-ship objects are removed by the area properties from ship candidates; finally, a structural continuity descriptor is designed to remove false alarms from the ship candidates. Experiments on numerous satellite images from panchromatic and one band within multispectral sensors are conducted. The results verified that the proposed method outperforms existing methods in both effectiveness and efficiency.",TRUE,acronym
R11,Science,R32851,Ship detection in panchromatic images: a new method and its DSP implementation,S112656,R32852,Band,R32556,PAN,"In this paper, a new ship detection method is proposed after analyzing the characteristics of panchromatic remote sensing images and ship targets. Firstly, AdaBoost(Adaptive Boosting) classifiers trained by Haar features are utilized to make coarse detection of ship targets. Then LSD (Line Segment Detector) is adopted to extract the line features in target slices to make fine detection. Experimental results on a dataset of panchromatic remote sensing images with a spatial resolution of 2m show that the proposed algorithm can achieve high detection rate and low false alarm rate. Meanwhile, the algorithm can meet the needs of practical applications on DSP (Digital Signal Processor).",TRUE,acronym
R11,Science,R32869,Ship Detection From Optical Satellite Images Based on Saliency Segmentation and Structure-LBP Feature,S112799,R32870,Band,R32556,PAN,"Automatic ship detection from optical satellite imagery is a challenging task due to cluttered scenes and variability in ship sizes. This letter proposes a detection algorithm based on saliency segmentation and the local binary pattern (LBP) descriptor combined with ship structure. First, we present a novel saliency segmentation framework with flexible integration of multiple visual cues to extract candidate regions from different sea surfaces. Then, simple shape analysis is adopted to eliminate obviously false targets. Finally, a structure-LBP feature that characterizes the inherent topology structure of ships is applied to discriminate true ship targets. Experimental results on numerous panchromatic satellite images validate that our proposed scheme outperforms other state-of-the-art methods in terms of both detection time and detection accuracy.",TRUE,acronym
R11,Science,R26570,PEGASIS: power efficient gathering in sensor information systems,S83718,R26626,Protocol,R26568,PEGASIS,"Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operate the sensor network for a long period of time. In W. Heinzelman et al. (Proc. Hawaii Conf. on System Sci., 2000), a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. If each node transmits its sensed data directly to the base station then it will deplete its power quickly. The LEACH protocol presented by W. Heinzelman et al. is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. In this paper, we propose PEGASIS (power-efficient gathering in sensor information systems), a near optimal chain-based protocol that is an improvement over LEACH. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 300% when 1%, 20%, 50%, and 100% of nodes die for different network sizes and topologies.",TRUE,acronym
R11,Science,R33088,The role of cytogenetic abnormalities as a prognostic marker in primary myelofibrosis: applicability at the time of diagnosis and later during disease course,S115000,R33089,Disease,R32960,PMF,"Although cytogenetic abnormalities are important prognostic factors in myeloid malignancies, they are not included in current prognostic scores for primary myelofibrosis (PMF). To determine their relevance in PMF, we retrospectively examined the impact of cytogenetic abnormalities and karyotypic evolution on the outcome of 256 patients. Baseline cytogenetic status impacted significantly on survival: patients with favorable abnormalities (sole deletions in 13q or 20q, or trisomy 9 +/- one other abnormality) had survivals similar to those with normal diploid karyotypes (median, 63 and 46 months, respectively), whereas patients with unfavorable abnormalities (rearrangement of chromosome 5 or 7, or > or = 3 abnormalities) had a poor median survival of 15 months. Patients with abnormalities of chromosome 17 had a median survival of only 5 months. A model containing karyotypic abnormalities, hemoglobin, platelet count, and performance status effectively risk-stratified patients at initial evaluation. Among 73 patients assessable for clonal evolution during stable chronic phase, those who developed unfavorable or chromosome 17 abnormalities had median survivals of 18 and 9 months, respectively, suggesting the potential role of cytogenetics as a risk factor applicable at any time in the disease course. Dynamic prognostic significance of cytogenetic abnormalities in PMF should be further prospectively evaluated.",TRUE,acronym
R11,Science,R29010,Robust Face Landmark Estimation under Occlusion,S96211,R29063,Methods,R29062,RCPR,"Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.",TRUE,acronym
R11,Science,R29010,Robust Face Landmark Estimation under Occlusion,S96052,R29029,Methods,L58863,RCPR,"Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.",TRUE,acronym
R11,Science,R31638,"Dynamic modeling and optimal control of batch reactors, based on structure approaching hybrid neural networks",S105991,R31639,Types,R31636,SAHNN,"A novel Structure Approaching Hybrid Neural Network (SAHNN) approach to model batch reactors is presented. The Virtual Supervisor−Artificial Immune Algorithm method is utilized for the training of SAHNN, especially for the batch processes with partial unmeasurable state variables. SAHNN involves the use of approximate mechanistic equations to characterize unmeasured state variables. Since the main interest in batch process operation is on the end-of-batch product quality, an extended integral square error control index based on the SAHNN model is applied to track the desired temperature profile of a batch process. This approach introduces model mismatches and unmeasured disturbances into the optimal control strategy and provides a feedback channel for control. The performance of robustness and antidisturbances of the control system are then enhanced. The simulation result indicates that the SAHNN model and model-based optimal control strategy of the batch process are effective.",TRUE,acronym
R11,Science,R29023,Supervised descent method and its applications to face alignment,S96195,R29059,Methods,R29058,SDM,"Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface.",TRUE,acronym
R11,Science,R29023,Supervised descent method and its applications to face alignment,S96022,R29024,Methods,L58843,SDM,"Many computer vision problems (e.g., camera calibration, image alignment, structure from motion) are solved through a nonlinear optimization method. It is generally accepted that 2nd order descent methods are the most robust, fast and reliable approaches for nonlinear optimization of a general smooth function. However, in the context of computer vision, 2nd order descent methods have two main drawbacks: (1) The function might not be analytically differentiable and numerical approximations are impractical. (2) The Hessian might be large and not positive definite. To address these issues, this paper proposes a Supervised Descent Method (SDM) for minimizing a Non-linear Least Squares (NLS) function. During training, the SDM learns a sequence of descent directions that minimizes the mean of NLS functions sampled at different points. In testing, SDM minimizes the NLS objective using the learned descent directions without computing the Jacobian nor the Hessian. We illustrate the benefits of our approach in synthetic and real examples, and show how SDM achieves state-of-the-art performance in the problem of facial feature detection. The code is available at www.humansensing.cs.cmu.edu/intraface.",TRUE,acronym
R11,Science,R26634,SEP: A Stable Election Protocol for clustered heterogeneous wireless sensor networks,S83772,R26635,Protocol,R26633,SEP,"We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources—this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (we refer to as stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields longer stability region for higher values of extra energy brought by more powerful nodes.",TRUE,acronym
R11,Science,R25977,Page grammars and page parsing. A syntactic approach to document layout recognition,S80392,R26007,Output Representation,L50781,SGML,"Describes a syntactic approach to deducing the logical structure of printed documents from their physical layout. Page layout is described by a two-dimensional grammar, similar to a context-free string grammar, and a chart parser is used to parse segmented page images according to the grammar. This process is part of a system which reads scanned document images and produces computer-readable text in a logical mark-up format such as SGML. The system is briefly outlined, the grammar formalism and the parsing algorithm are described in detail, and some experimental results are reported.",TRUE,acronym
R11,Science,R27913,Vision based autonomous vehicle navigation with self-organizing map feature matching technique,S90982,R27914,Algorithm,R27912,SIFT,"Vision is becoming more and more common in applications such as localization, autonomous navigation, path finding and many other computer vision applications. This paper presents an improved technique for feature matching in the stereo images captured by the autonomous vehicle. The Scale Invariant Feature Transform (SIFT) algorithm is used to extract distinctive invariant features from images but this algorithm has a high complexity and a long computational time. In order to reduce the computation time, this paper proposes a SIFT improvement technique based on a Self-Organizing Map (SOM) to perform the matching procedure more efficiently for feature matching problems. Experimental results on real stereo images show that the proposed algorithm performs feature group matching with lower computation time than the original SIFT algorithm. The results showing improvement over the original SIFT are validated through matching examples between different pairs of stereo images. The proposed algorithm can be applied to stereo vision based autonomous vehicle navigation for obstacle avoidance, as well as many other feature matching and computer vision applications.",TRUE,acronym
R11,Science,R25247,SLSA: A Sentiment Lexicon for Standard Arabic,S75234,R25248,Lexicon,L46862,SLSA,"Sentiment analysis has been a major area of interest, for which the existence of high-quality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-the-art lexicon when tested for accuracy. It also outperforms it by an absolute 3.5% of F1-score when tested for sentiment analysis.",TRUE,acronym
R11,Science,R25423,Reliability Modeling for SOA Systems,S76197,R25424,Area of use,R25411,SOA,"Service-oriented architecture (SOA) is a popular paradigm for development of distributed systems by composing the functionality provided by the services exposed on the network. In effect, the services can use functionalities of other services to accomplish their own goals. Although such an architecture provides an elegant solution to simple construction of loosely coupled distributed systems, it also introduces additional concerns. One of the primary concerns in designing a SOA system is the overall system reliability. Since the building blocks are services provided by various third parties, it is often not possible to apply the well established fault removal techniques during the development phases. Therefore, in order to reach desirable system reliability for SOA systems, the focus shifts towards fault prediction and fault tolerance techniques. In this paper an overview of existing reliability modeling techniques for SOA-based systems is given. Furthermore, we present a model for reliability estimation of a service composition using directed acyclic graphs. The model is applied to the service composition based on the orchestration model. A case study for the proposed model is presented by analyzing a simple Web Service composition scenario.",TRUE,acronym
R11,Science,R25431,Automatic Reliability Management in SOA-based critical systems,S76231,R25432,Area of use,R25411,SOA,"A well-known concept for the design and development of distributed software systems is service-orientation. In SOA, an interacting group of autonomous services realize a dynamic adaptive heterogenous distributed system. Because of its flexibility, SOA allows an easy adaptation of new business requirements. This also makes the serviceorientation idea a suitable concept for development of critical software systems. Reliability is a central parameter for developing critical software systems. SOA brings some additional requirements to the usual reliability models currently being used for standard software solutions. In order to fullfil all requirements and guarantee a certain degree of reliability, a generic reliability management model is needed for SOA based software systems. This article defines research challenges in this area and gives an approach to solve this problem.",TRUE,acronym
R11,Science,R25445,Estimating Reliability Of Service-Oriented Systems: A Rule- Based Approach”,S76293,R25446,Area of use,R25411,SOA,"In service-oriented architecture (SOA), the entire software system consists of an interacting group of autonomous services. In order to make such a system reliable, it should inhibit guarantee for basic service, data flow, composition of services, and the complete workflow. This paper discusses the important factor of SOA and their role in the entire SOA system reliability. We focus on the factors that have the strongest effect of SOA system reliability. Based on these factors, we used a fuzzy-based approach to estimate the SOA reliability. The proposed approach is implemented on a database obtained for SOA application, and the results obtained validate and confirm the effectiveness of the proposed fuzzy approach. Furthermore, one can make trade-off analyses between different parameters for reliability.",TRUE,acronym
R11,Science,R34276,Macroeconomic Shock Synchronization in the East African Community,S119221,R34277,Methodology,L72018,SVAR," The East African Community’s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union. ",TRUE,acronym
R11,Science,R25180,Off-line English and Chinese Signature Identification Using Foreground and Background Features,S74811,R25181,Classifier,R25174,SVM,"In the field of information security, the usage of biometrics is growing for user authentication. Automatic signature recognition and verification is one of the biometric techniques, which is only one of several used to verify the identity of individuals. In this paper, a foreground and background based technique is proposed for identification of scripts from bi-lingual (English/Roman and Chinese) off-line signatures. This system will identify whether a claimed signature belongs to the group of English signatures or Chinese signatures. The identification of signatures based on its script is a major contribution for multi-script signature verification. Two background information extraction techniques are used to produce the background components of the signature images. Gradient-based method was used to extract the features of the foreground as well as background components. Zernike Moment feature was also employed on signature samples. Support Vector Machine (SVM) is used as the classifier for signature identification in the proposed system. A database of 1120 (640 English+480 Chinese) signature samples were used for training and 560 (320 English+240 Chinese) signature samples were used for testing the proposed system. An encouraging identification accuracy of 97.70% was obtained using gradient feature from the experiment.",TRUE,acronym
R11,Science,R25182,Off-line Signature Verification Based on Chain Code Histogram and Support Vector Machine,S74820,R25183,Classifier,R25174,SVM,"In this paper, we present an approach based on chain code histogram features enhanced through Laplacian of Gaussian filter for off-line signature verification. In the proposed approach, the four-directional chain code histogram of each grid on the contour of the signature image is extracted. The Laplacian of Gaussian filter is used to enhance the extracted features of each signature sample. Thus, the extracted and enhanced features of all signature samples of the off-line signature dataset constitute the knowledge base. Subsequently, the Support Vector Machine (SVM) classifier is used as the verification tool. The SVM is trained with the randomly selected training sample's features including genuine and random forgeries and tested with the remaining untrained genuine along with the skilled forge sample features to classify the tested/questioned sample as genuine or forge. Similar to the real time scenario, in the proposed approach we have not considered the skilled fore sample to train the classifier. Extensive experimentations have been conducted to exhibit the performance of the proposed approach on the publicly available datasets namely, CEDAR, GPDS-100 and MUKOS, a regional language dataset. The state-of-art off-line signature verification methods are considered for comparative study to justify the feasibility of the proposed approach for off-line signature verification and to reveal its accuracy over the existing approaches.",TRUE,acronym
R11,Science,R25193,Discriminative DCT: An Efficient and Accurate Approach for Off-line Signature Verification,S74862,R25194,Classifier,R25174,SVM,"In this paper, we proposed to combine the transform based approach with dimensionality reduction technique for off-line signature verification. The proposed approach has four major phases: Preprocessing, Feature extraction, Feature reduction and Classification. In the feature extraction phase, Discrete Cosine Transform (DCT) is employed on the signature image to obtain the upper-left corner block of size mX n as a representative feature vector. These features are subjected to Linear Discriminant Analysis (LDA) for further reduction and representing the signature with optimal set of features. Thus obtained features from all the samples in the dataset form the knowledge base. The Support Vector Machine (SVM), a bilinear classifier is used for classification and the performance is measured through FAR/FRR metric. Experiments have been conducted on standard signature datasets namely CEDAR and GPDS-160, and MUKOS, a regional language (Kannada) dataset. The comparative study is also provided with the well known approaches to exhibit the performance of the proposed approach.",TRUE,acronym
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335779,R70571,Computational intelligence technologie,L242591,SVM,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,acronym
R11,Science,R26687,TASC: topology adaptive spatial clustering for sensor networks,S85129,R26688,Protocol,R26686,TASC,"The ability to extract topological regularity out of large randomly deployed sensor networks holds the promise to maximally leverage correlation for data aggregation and also to assist with sensor localization and hierarchy creation. This paper focuses on extracting such regular structures from physical topology through the development of a distributed clustering scheme. The topology adaptive spatial clustering (TASC) algorithm presented here is a distributed algorithm that partitions the network into a set of locally isotropic, non-overlapping clusters without prior knowledge of the number of clusters, cluster size and node coordinates. This is achieved by deriving a set of weights that encode distance measurements, connectivity and density information within the locality of each node. The derived weights form the terrain for holding a coordinated leader election in which each node selects the node closer to the center of mass of its neighborhood to become its leader. The clustering algorithm also employs a dynamic density reachability criterion that groups nodes according to their neighborhood's density properties. Our simulation results show that the proposed algorithm can trace locally isotropic structures in non-isotropic network and cluster the network with respect to local density attributes. We also found out that TASC exhibits consistent behavior in the presence of moderate measurement noise levels",TRUE,acronym
R11,Science,R31185,Terrain-based genetic algorithm (TBGA): modeling parameter space as terrain,S104536,R31186,Name,L62504,TBGA,"The Terrain-Based Genetic Algorithm (TBGA) is a self-tuning version of the traditional Cellular Genetic Algorithm (CGA). In a TBGA, various combinations of parameter values appear in different physical locations of the population, forming a sort of terrain in which individual solutions evolve. We compare the performance of the TBGA against that of the CGA on a known suite of problems. Our results indicate that the TBGA performs better than the CGA on the test suite, with less parameter tuning, when the CGA is set to parameter values thought in prior studies to be good. While we had hoped that good solutions would cluster around the best parameter settings, this was not observed. However, we were able to use the TBGA to automatically determine better parameter settings for the CGA. The resulting CGA produced even better results than were achieved by the TBGA which found those parameter settings.",TRUE,acronym
R11,Science,R26649,An Adaptive Data Dissemination Strategy for Wireless Sensor Networks,S83861,R26650,Protocol,R26648,TCCA,"Future large-scale sensor networks may comprise thousands of wirelessly connected sensor nodes that could provide an unimaginable opportunity to interact with physical phenomena in real time. However, the nodes are typically highly resource-constrained. Since the communication task is a significant power consumer, various attempts have been made to introduce energy-awareness at different levels within the communication stack. Clustering is one such attempt to control energy dissipation for sensor data dissemination in a multihop fashion. The Time-Controlled Clustering Algorithm (TCCA) is proposed to realize a network-wide energy reduction. A realistic energy dissipation model is derived probabilistically to quantify the sensor network's energy consumption using the proposed clustering algorithm. A discrete-event simulator is developed to verify the mathematical model and to further investigate TCCA in other scenarios. The simulator is also extended to include the rest of the communication stack to allow a comprehensive evaluation of the proposed algorithm.",TRUE,acronym
R11,Science,R26562,TEEN: a routing protocol for enhanced efficiency in wireless sensor networks,S83689,R26621,Protocol,R26560,TEEN,"Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.",TRUE,acronym
R11,Science,R26763,UHEED - An Unequal Clustering Algorithm for Wireless Sensor Networks,S85671,R26764,Protocol,R26762,UHEED,"Prolonging the lifetime of wireless sensor networks has always been a determining factor when designing and deploying such networks. Clustering is one technique that can be used to extend the lifetime of sensor networks by grouping sensors together. However, there exists the hot spot problem which causes an unbalanced energy consumption in equally formed clusters. In this paper, we propose UHEED, an unequal clustering algorithm which mitigates this problem and which leads to a more uniform residual energy in the network and improves the network lifetime. Furthermore, from the simulation results presented, we were able to deduce the most appropriate unequal cluster size to be used.",TRUE,acronym
R11,Science,R27505,An investigation of cointegration and causality between energy consumption and economic growth,S89158,R27506,Countries,R27483,USA,"This paper reexamines the causality between energy consumption and economic growth with both bivariate and multivariate models by applying the recently developed methods of cointegration and Hsiao`s version of the Granger causality to transformed U.S. data for the period 1947-1990. The Phillips-Perron (PP) tests reveal that the original series are not stationary and, therefore, a first differencing is performed to secure stationarity. The study finds no causal linkages between energy consumption and economic growth. Energy and gross national product (GNP) each live a life of its own. The results of this article are consistent with some of the past studies that find no relationship between energy and GNP but are contrary to some other studies that find GNP unidirectionally causes energy consumption. Both the bivariate and trivariate models produce the similar results. We also find that there is no causal relationship between energy consumption and industrial production. The United States is basically a service-oriented economy and changes in energy consumption can cause little or no changes in GNP. In other words, an implementation of energy conservation policy may not impair economic growth. 27 refs., 5 tabs.",TRUE,acronym
R11,Science,R27708,Nuclear energy consumption and economic growth in the US: an empirical note,S90172,R27709,Country,R27483,USA,Abstract This empirical note examines the relationship between nuclear energy consumption growth and real gross domestic product (GDP) growth within a neoclassical production function framework for the US using annual data from 1957 to 2006. The Toda-Yamamoto (1995) test for long-run Granger-causality reveals the absence of Granger-causality between nuclear energy consumption growth and real GDP growth which supports the neutrality hypothesis within the energy consumption-economic growth literature.,TRUE,acronym
R11,Science,R27719,On biomass energy consumption and real Output in the U,S90250,R27720,Country,R27483,USA,Abstract This empirical note utilizes US annual data from 1949 to 2007 to examine the causal relationship between biomass energy consumption and real gross domestic product (GDP) within a multivariate framework. Toda-Yamamoto causality tests reveal unidirectional causality from biomass energy consumption to real GDP supportive of the growth hypothesis.,TRUE,acronym
R11,Science,R27149,Real Exchange Rate Volatility and U.S. Bilateral Trade: A VAR Approach,S87319,R27150,Countries and Estimation technique used,R27148,VAR,"This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press.",TRUE,acronym
R11,Science,R27190,Does Exchange Rate Volatility Depress Trade Flows? Evidence from Error- Correction Models,S87509,R27191,Countries and Estimation technique used,R27148,VAR,"This paper examines the impact of exchange rate volatility on the trade flows of the G-7 countries in the context of a multivariate error-correction model. The error-correction models do not show any sign of parameter instability. The results indicate that the exchange rate volatility has a significant negative impact on the volume of exports in each of the G-7 countries. Assuming market participants are risk averse, these results imply that exchange rate uncertainty causes them to reduce their activities, change prices, or shift sources of demand and supply in order to minimize their exposure to the effects of exchange rate volatility. This, in turn, can change the distribution of output across many sectors in these countries. It is quite possible that the surprisingly weak relationship between trade flows and exchange rate volatility reported in several previous studies are due to insufficient attention to the stochastic properties of the relevant time series. Copyright 1993 by MIT Press.",TRUE,acronym
R11,Science,R34242,How Would Monetary Policy Matter in the Proposed African Monetary Unions? Evidence from Output and Prices,S119250,R34281,Methodology,L72035,VAR,"We analyze the effects of monetary policy on economic activity in the proposed African monetary unions. Findings broadly show that: (1) but for financial efficiency in the EAMZ, monetary policy variables affect output neither in the short-run nor in the long-term and; (2) with the exception of financial size that impacts inflation in the EAMZ in the short-term, monetary policy variables generally have no effect on prices in the short-run. The WAMZ may not use policy instruments to offset adverse shocks to output by pursuing either an expansionary or a contractionary policy, while the EAMZ can do with the ‘financial allocation efficiency’ instrument. Policy implications are discussed.",TRUE,acronym
R11,Science,R25748,Efficient mining of weighted association rules (WAR),S78246,R25749,Algorithm name,L49013,WAR,"In this paper, we extend the tradition association rule problem by allowing a weight to be associated with each item in a transaction, to re ect interest/intensity of the item within the transaction. This provides us in turn with an opportunity to associate a weight parameter with each item in the resulting association rule. We call it weighted association rule (WAR). WAR not only improves the con dence of the rules, but also provides a mechanism to do more effective target marketing by identifying or segmenting customers based on their potential degree of loyalty or volume of purchases. Our approach mines WARs by rst ignoring the weight and nding the frequent itemsets (via a traditional frequent itemset discovery algorithm), and is followed by introducing the weight during the rule generation. It is shown by experimental results that our approach not only results in shorter average execution times, but also produces higher quality results than the generalization of previous known methods on quantitative association rules.",TRUE,acronym
R11,Science,R25317,A Fuzzy-set based Semantic Similarity Matching Algorithm for Web Service,S75700,R25318,Specification Languages,R25292,WSDL,"A critical step in the process of reusing existing WSDL-specified services for building web-based applications is the discovery of potentially relevant services. However, the category-based service discovery, such as UDDI, is clearly insufficient. Semantic Web Services, augmenting Web service descriptions using Semantic Web technology, were introduced to facilitate the publication, discovery, and execution of Web services at the semantic level. Semantic matchmaker enhances the capability of UDDI service registries in the Semantic Web Services architecture by applying some matching algorithms between advertisements and requests described in OWL-S to recognize various degrees of matching for Web services. Based on Semantic Web Service framework, semantic matchmaker, specification matching and probabilistic matching approach, this paper proposes a fuzzy-set based semantic similarity matching algorithm for Web Service to support a more automated and veracity service discovery process in the Semantic Web Service Framework.",TRUE,acronym
R11,Science,R25763,H-Mine: Fast and space-preserving frequent pattern mining in large databases,S78340,R25764,Algorithm name,L49086,H-mine,"In this study, we propose a simple and novel data structure using hyper-links, H-struct, and a new mining algorithm, H-mine, which takes advantage of this data structure and dynamically adjusts links in the mining process. A distinct feature of this method is that it has a very limited and precisely predictable main memory cost and runs very quickly in memory-based settings. Moreover, it can be scaled up to very large databases using database partitioning. When the data set becomes dense, (conditional) FP-trees can be constructed dynamically as part of the mining process. Our study shows that H-mine has an excellent performance for various kinds of data, outperforms currently available algorithms in different settings, and is highly scalable to mining large databases. This study also proposes a new data mining methodology, space-preserving mining, which may have a major impact on the future development of efficient and scalable data mining methods.",TRUE,acronym
R11,Science,R33815,Comparative analyses of seven algorithms for copy number variant identification from single nucleotide polymorphism arrays,S117250,R33816,Algorithm,R33810,cnvFinder,"Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.",TRUE,acronym
R11,Science,R33815,Comparative analyses of seven algorithms for copy number variant identification from single nucleotide polymorphism arrays,S117251,R33816,Algorithm,R33811,cnvPartition,"Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.",TRUE,acronym
R11,Science,R33827,Assessment of copy number variation using the Illumina Infinium 1M SNP-array: A comparison of methodological approaches in the Spanish Bladder Cancer/EPICURO study,S117323,R33828,Algorithm,R33811,cnvPartition,"High‐throughput single nucleotide polymorphism (SNP)‐array technologies allow to investigate copy number variants (CNVs) in genome‐wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation‐dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19–0.28). In contrast, specificity was very high (0.97–0.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome‐wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1–10, 2011. © 2011 Wiley‐Liss, Inc.",TRUE,acronym
R11,Science,R28652,On the value of user preferences in search-based software engineering: A case study in software product lines,S94923,R28783,Algorithm(s),R28649,SPEA2,"Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread) but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.",TRUE,acronym
R11,Science,R28848,Generating Integration Test Orders for Aspect-Oriented Software with Multi-objective Algorithms,S95210,R28849,Algorithm(s),R28649,SPEA2,"The problem known as CAITO refers to the determination of an order to integrate and test classes and aspects that minimizes stubbing costs. Such problem is NP-hard and to solve it efficiently, search based algorithms have been used, mainly evolutionary ones. However, the problem is very complex since it involves different factors that may influence the stubbing process, such as complexity measures, contractual issues and so on. These factors are usually in conflict and different possible solutions for the problem exist. To deal properly with this problem, this work explores the use of multi-objective optimization algorithms. The paper presents results from the application of two evolutionary algorithms - NSGA-II and SPEA2 - to the CAITO problem in four real systems, implemented in AspectJ. Both multi-objective algorithms are evaluated and compared with the traditional Tarjan's algorithm and with a mono-objective genetic algorithm. Moreover, it is shown how the tester can use the found solutions, according to the test goals.",TRUE,acronym
R11,Science,R28851,Establishing Integration Test Orders of Classes with Several Coupling Measures,S95223,R28852,Algorithm(s),R28649,SPEA2,"During the inter-class test, a common problem, named Class Integration and Test Order (CITO) problem, involves the determination of a test class order that minimizes stub creation effort, and consequently test costs. The approach based on Multi-Objective Evolutionary Algorithms (MOEAs) has achieved promising results because it allows the use of different factors and measures that can affect the stubbing process. Many times these factors are in conflict and usually there is no a single solution for the problem. Existing works on MOEAs present some limitations. The approach was evaluated with only two coupling measures, based on the number of attributes and methods of the stubs to be created. Other MOEAs can be explored and also other coupling measures. Considering this fact, this paper investigates the performance of two evolutionary algorithms: NSGA-II and SPEA2, for the CITO problem with four coupling measures (objectives) related to: attributes, methods, number of distinct return types and distinct parameter types. An experimental study was performed with four real systems developed in Java. The obtained results point out that the MOEAs can be efficiently used to solve this problem with several objectives, achieving solutions with balanced compromise between the measures, and of minimal effort to test.",TRUE,acronym
R11,Science,R28619,The Multi- Objective Next Release Problem,S94863,R28774,Algorithm(s),R28615,NSGA-II,"This paper is concerned with the Multi-Objective Next Release Problem (MONRP), a problem in search-based requirements engineering. Previous work has considered only single objective formulations. In the multi-objective formulation, there are at least two (possibly conflicting) objectives that the software engineer wishes to optimize. It is argued that the multi-objective formulation is more realistic, since requirements engineering is characterised by the presence of many complex and conflicting demands, for which the software engineer must find a suitable balance. The paper presents the results of an empirical study into the suitability of weighted and Pareto optimal genetic algorithms, together with the NSGA-II algorithm, presenting evidence to support the claim that NSGA-II is well suited to the MONRP. The paper also provides benchmark data to indicate the size above which the MONRP becomes non--trivial.",TRUE,acronym
R11,Science,R28626,A Study of the Multi-Objective Next Release Problem,S94876,R28776,Algorithm(s),R28615,NSGA-II,"One of the first issues which has to be taken into account by software companies is to determine what should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while this entails a minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work we study the NRP problem from the multi-objective point of view, paying attention to the quality of the obtained solutions, the number of solutions, the range of solutions covered by these fronts, and the number of optimal solutions obtained.Also, we evaluate the performance of two state-of-the-art multi-objective metaheuristics for solving NRP: NSGA-II and MOCell. The obtained results show that MOCell outperforms NSGA-II in terms of the range of solutions covered, while this latter is able of obtaining better solutions than MOCell in large instances. Furthermore, we have observed that the optimal solutions found are composed of a high percentage of low-cost requirements and, also, the requirements that produce most satisfaction on the customers.",TRUE,acronym
R11,Science,R28848,Generating Integration Test Orders for Aspect-Oriented Software with Multi-objective Algorithms,S95209,R28849,Algorithm(s),R28615,NSGA-II,"The problem known as CAITO refers to the determination of an order to integrate and test classes and aspects that minimizes stubbing costs. Such problem is NP-hard and to solve it efficiently, search based algorithms have been used, mainly evolutionary ones. However, the problem is very complex since it involves different factors that may influence the stubbing process, such as complexity measures, contractual issues and so on. These factors are usually in conflict and different possible solutions for the problem exist. To deal properly with this problem, this work explores the use of multi-objective optimization algorithms. The paper presents results from the application of two evolutionary algorithms - NSGA-II and SPEA2 - to the CAITO problem in four real systems, implemented in AspectJ. Both multi-objective algorithms are evaluated and compared with the traditional Tarjan's algorithm and with a mono-objective genetic algorithm. Moreover, it is shown how the tester can use the found solutions, according to the test goals.",TRUE,acronym
R11,Science,R28851,Establishing Integration Test Orders of Classes with Several Coupling Measures,S95222,R28852,Algorithm(s),R28615,NSGA-II,"During the inter-class test, a common problem, named Class Integration and Test Order (CITO) problem, involves the determination of a test class order that minimizes stub creation effort, and consequently test costs. The approach based on Multi-Objective Evolutionary Algorithms (MOEAs) has achieved promising results because it allows the use of different factors and measures that can affect the stubbing process. Many times these factors are in conflict and usually there is no a single solution for the problem. Existing works on MOEAs present some limitations. The approach was evaluated with only two coupling measures, based on the number of attributes and methods of the stubs to be created. Other MOEAs can be explored and also other coupling measures. Considering this fact, this paper investigates the performance of two evolutionary algorithms: NSGA-II and SPEA2, for the CITO problem with four coupling measures (objectives) related to: attributes, methods, number of distinct return types and distinct parameter types. An experimental study was performed with four real systems developed in Java. The obtained results point out that the MOEAs can be efficiently used to solve this problem with several objectives, achieving solutions with balanced compromise between the measures, and of minimal effort to test.",TRUE,acronym
R11,Science,R28875,Multiobjective Simulation Optimisation in Software Project Management,S95330,R28876,Algorithm(s),R28615,NSGA-II,"Traditionally, simulation has been used by project managers in optimising decision making. However, current simulation packages only include simulation optimisation which considers a single objective (or multiple objectives combined into a single fitness function). This paper aims to describe an approach that consists of using multiobjective optimisation techniques via simulation in order to help software project managers find the best values for initial team size and schedule estimates for a given project so that cost, time and productivity are optimised. Using a System Dynamics (SD) simulation model of a software project, the sensitivity of the output variables regarding productivity, cost and schedule using different initial team size and schedule estimations is determined. The generated data is combined with a well-known multiobjective optimisation algorithm, NSGA-II, to find optimal solutions for the output variables. The NSGA-II algorithm was able to quickly converge to a set of optimal solutions composed of multiple and conflicting variables from a medium size software project simulation model. Multiobjective optimisation and SD simulation modeling are complementary techniques that can generate the Pareto front needed by project managers for decision making. Furthermore, visual representations of such solutions are intuitive and can help project managers in their decision making process.",TRUE,acronym
R11,Science,R28877,A Hybrid Approach to Solve the Agile Team Allocation Problem,S95343,R28878,Algorithm(s),R28615,NSGA-II,"The success of the team allocation in a agile software development project is essential. The agile team allocation is a NP-hard problem, since it comprises the allocation of self-organizing and cross-functional teams. Many researchers have driven efforts to apply Computational Intelligence techniques to solve this problem. This work presents a hybrid approach based on NSGA-II multi-objective metaheuristic and Mamdani Fuzzy Inference Systems to solve the agile team allocation problem, together with an initial evaluation of its use in a real environment.",TRUE,acronym
R11,Science,R30669,The effect of socio-economic status and ethnicity on the comparative oral health of Asian and White Caucasian 12-year-old children,S102331,R30670,n,L61434,"1,753","OBJECTIVE To investigate the oral health of 12-year-old children of different deprivation but similar fluoridation status from South Asian and White Caucasian ethnic groups. DESIGN An epidemiological survey of 12-year-old children using BASCD criteria, with additional tooth erosion, ethnic classification and postcode data. CLINICAL SETTING Examinations were completed in schools in Leicestershire and Rutland, England, UK. Participants A random sample of 1,753 12-year-old children from all schools in the study area. MAIN OUTCOME MEASURES Caries experience was measured using the DMFT index diagnosed at the caries into dentine (D3) threshold, and tooth erosion using the index employed in the Children's Dental Health UK study reported in 1993. RESULTS The overall prevalence of caries was greater in White than Asian children, but varied at different levels of deprivation and amongst different Asian religious groups. There was a significant positive association between caries and deprivation for White children, but the reverse was true for non-Muslim Asians. White Low Deprivation children had significantly less tooth erosion, but erosion experience increased with decreasing deprivation in non-Muslim Asians. CONCLUSIONS Oral health is associated with ethnicity and linked to deprivation on an ethnic basis. The intra-Asian dental health disadvantage found in the primary dentition of Muslim children is perpetuated into the permanent dentition.",TRUE,number
R11,Science,R28994,Overview of the Face Recognition Grand Challenge,S95820,R28995,Amount of data,L58688,"50,000","Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery.",TRUE,number
R11,Science,R31224,"Tracking the Middle-Income Trap: What is it, Who is in it, and Why?",S104866,R31267,Sample Period,L62703,1950–2010,"This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950–2010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. 
We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country’s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.",TRUE,acronym
R11,Science,R31224,"Tracking the Middle-Income Trap: What is it, Who is in it, and Why?",S104718,R31225,Time period,L62624,1950–2010,"This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950–2010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. 
We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country’s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.",TRUE,acronym
R11,Science,R27685,"Structural breaks, electricity consumption and economic growth: evidence from Turkey",S90044,R27686,Period,L55809,1968–2005,"This paper investigates the short-run and long-run causality issues between electricity consumption and economic growth in Turkey by using the co-integration and vector error-correction models with structural breaks. It employs annual data covering the period 1968–2005. The study also explores the causal relationship between these variables in terms of the three error-correction based Granger causality models. The empirical results are as follows: i) Both variables are nonstationary in levels and stationary in the first differences with/without structural breaks, ii) there exists a longrun relationship between variables, iii) there is unidirectional causality running from the electricity consumption to economic growth. The overall results indicate that “growth hypothesis” for electricity consumption and growth nexus holds in Turkey. Thus, energy conservation policies, such as rationing electricity consumption, may harm economic growth in Turkey.",TRUE,acronym
R11,Science,R29133,Enterprise resource planning research: where are we now and where should we go from here?,S96520,R29134,Coverage,L59106,1999-2004,"ABSTRACT The research related to Enterprise Resource Planning (ERP) has grown over the past several years. This growing body of ERP research results in an increased need to review this extant literature with the intent of identifying gaps and thus motivate researchers to close this breach. Therefore, this research was intended to critique, synthesize and analyze both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature, and then enumerates and discusses an agenda for future research efforts. To accomplish this, we analyzed 49 ERP articles published (1999-2004) in top Information Systems (IS) and Operations Management (OM) journals. We found an increasing level of activity during the 5-year period and a slightly biased distribution of ERP articles targeted at IS journals compared to OM. We also found several research methods either underrepresented or absent from the pool of ERP research. We identified several areas of need within the ERP literature, none more prevalent than the need to analyze ERP within the context of the supply chain. INTRODUCTION Davenport (1998) described the strengths and weaknesses of using Enterprise Resource Planning (ERP). He called attention to the growth of vendors like SAP, Baan, Oracle, and People-Soft, and defined this software as, ""...the seamless integration of all the information flowing through a companyfinancial and accounting information, human resource information, supply chain information, and customer information."" (Davenport, 1998). 
Since the time of that article, there has been a growing interest among researchers and practitioners in how organization implement and use ERP systems (Amoako-Gyampah and Salam, 2004; Bendoly and Jacobs, 2004; Gattiker and Goodhue, 2004; Lander, Purvis, McCray and Leigh, 2004; Luo and Strong, 2004; Somers and Nelson, 2004; Zoryk-Schalla, Fransoo and de Kok, 2004). This interest is a natural continuation of trends in Information Technology (IT), such as MRP II, (Olson, 2004; Teltumbde, 2000; Toh and Harding, 1999) and in business practice improvement research, such as continuous process improvement and business process reengineering (Markus and Tanis, 2000; Ng, Ip and Lee, 1999; Reijers, Limam and van der Aalst, 2003; Toh and Harding, 1999). This growing body of ERP research results in an increased need to review this extant literature with the intent of ""identifying critical knowledge gaps and thus motivate researchers to close this breach"" (Webster and Watson, 2002). Also, as noted by Scandura & Williams (2000), in order for research to advance, the methods used by researchers must periodically be evaluated to provide insights into the methods utilized and thus the areas of need. These two interrelated needs provide the motivation for this paper. In essence, this research critiques, synthesizes and analyzes both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature and then enumerates and discusses an agenda for future research efforts. The remainder of the paper is organized as follows: Section 2 describes the approach to the analysis of the ERP research. Section 3 contains the results and a review of the literature. Section 4 discusses our findings and the needs relative to future ERP research efforts. Finally, section 5 summarizes the research. 
RESEARCH STUDY We captured the trends pertaining to (1) the number and distribution of ERP articles published in the leading journals, (2) methodologies employed in ERP research, and (3) emphasis relative to topic of ERP research. During the analysis of the ERP literature, we identified gaps and needs in the research and therefore enumerate and discuss a research agenda which allows the progression of research (Webster and Watson, 2002). In short, we sought to paint a representative landscape of the current ERP literature base in order to influence the direction of future research efforts relative to ERP. …",TRUE,acronym
R11,Science,R29140,An Updated ERP Systems Annotated Bibliography: 2001-2005,S96557,R29141,Coverage,L59129,2001-2005,"This study provides an updated annotated bibliography of ERP publications published in the main IS conferences and journals during the period 2001-2005, categorizing them through an ERP lifecycle-based framework that is structured in phases. The first version of this bibliography was published in 2001 (Esteves and Pastor, 2001c). However, so far, we have extended the bibliography with a significant number of new publications in all the categories used in this paper. We also reviewed the categories and some incongruities were eliminated.",TRUE,acronym
R11,Science,R25868,Selective Hydrogenation of Polyunsaturated Fatty Acids Using Alkanethiol Self-Assembled Monolayer-Coated Pd/Al2O3 Catalysts,S79315,R25869,conv (%),L49903,>70,"Pd/Al2O3 catalysts coated with various thiolate self-assembled monolayers (SAMs) were used to direct the partial hydrogenation of 18-carbon polyunsaturated fatty acids, yielding a product stream enriched in monounsaturated fatty acids (with low saturated fatty acid content), a favorable result for increasing the oxidative stability of biodiesel. The uncoated Pd/Al2O3 catalyst quickly saturated all fatty acid reactants under hydrogenation conditions, but the addition of alkanethiol SAMs markedly increased the reaction selectivity to the monounsaturated product oleic acid to a level of 80–90%, even at conversions >70%. This effect, which is attributed to steric effects between the SAMs and reactants, was consistent with the relative consumption rates of linoleic and oleic acid using alkanethiol-coated and uncoated Pd/Al2O3 catalysts. With an uncoated Pd/Al2O3 catalyst, each fatty acid, regardless of its degree of saturation had a reaction rate of ∼0.2 mol reactant consumed per mole of surface palladium per ...",TRUE,acronym
R11,Science,R25861,One-step Synthesis of Core-Gold/Shell- Ceria Nanomaterial and Its Catalysis for Highly Selective Semi- hydrogenation of Alkynes,S79260,R25862,catalyst,L49859,Au@CeO2,"We report a facile synthesis of new core-Au/shell-CeO2 nanoparticles (Au@CeO2) using a redox-coprecipitation method, where the Au nanoparticles and the nanoporous shell of CeO2 are simultaneously formed in one step. The Au@CeO2 catalyst enables the highly selective semihydrogenation of various alkynes at ambient temperature under additive-free conditions. The core-shell structure plays a crucial role in providing the excellent selectivity for alkenes through the selective dissociation of H2 in a heterolytic manner by maximizing interfacial sites between the core-Au and the shell-CeO2.",TRUE,acronym
R11,Science,R25738,CTU-Mine: An Efficient High Utility Itemset Mining Algorithm Using the Pattern Growth Approach,S78176,R25739,Algorithm name,L48958,CTU-Mine,"Frequent pattern mining discovers patterns in transaction databases based only on the relative frequency of occurrence of items without considering their utility. For many real world applications, however, utility of itemsets based on cost, profit or revenue is of importance. The utility mining problem is to find itemsets that have higher utility than a user specified minimum. Unlike itemset support in frequent pattern mining, itemset utility does not have the anti-monotone property and so efficient high utility mining poses a greater challenge. Recent research on utility mining has been based on the candidate-generation-and-test approach which is suitable for sparse data sets with short patterns, but not feasible for dense data sets or long patterns. In this paper we propose a new algorithm called CTU-Mine that mines high utility itemsets using the pattern growth approach. We have tested our algorithm on several dense data sets, compared it with the recent algorithms and the results show that our algorithm works efficiently.",TRUE,acronym
R11,Science,R32677,Segmentation and wake removal of seafaring vessels in optical satellite images,S111528,R32678,Satellite sensor,R32676,GeoEye-1,"This paper aims at the segmentation of seafaring vessels in optical satellite images, which allows an accurate length estimation. In maritime situation awareness, vessel length is an important parameter to classify a vessel. The proposed segmentation system consists of robust foreground-background separation, wake detection and ship-wake separation, simultaneous position and profile clustering and a special module for small vessel segmentation. We compared our system with a baseline implementation on 53 vessels that were observed with GeoEye-1. The results show that the relative L1 error in the length estimation is reduced from 3.9 to 0.5, which is an improvement of 87%. We learned that the wake removal is an important element for the accurate segmentation and length estimation of ships.",TRUE,acronym
R11,Science,R32594,Automatic ship detection in HJ-1A satellite data,S110987,R32595,Satellite sensor,R32593,HJ-1,"In this paper, we use HJ-1A satellite data to ship detection and a ship target detection algorithm based on optical remote sensing images of moving window and entropy Maximum is presented. The method uses a moving window to get ship candidates and the Shannon theory to image segmentation. Basic principle is that the entropy of the image segmented by the threshold value is max. After completing the image segmentation, an automatic discriminator is used. The identify algorithm is used to get rid of the false alarm caused by spray, cloudy and solar flare. Some feature is considered include area, length ratio and extent. The detection results indicate that most ship target can be detected without regard to cloudy.",TRUE,acronym
R11,Science,R33819,The Effect of Algorithms on Copy Number Variant Detection,S117278,R33820,Algorithm,R33817,HMMSeg,"Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. 
Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.",TRUE,acronym
R11,Science,R25720,A Visual Summary for Linked Open Data sources,S7857,R6470,implementation,R6471,LODeX,"In this paper we propose LODeX, a tool that produces a representative summary of a Linked open Data (LOD) source starting from scratch, thus supporting users in exploring and understanding the contents of a dataset. The tool takes in input the URL of a SPARQL endpoint and launches a set of predefined SPARQL queries, from the results of the queries it generates a visual summary of the source. The summary reports statistical and structural information of the LOD dataset and it can be browsed to focus on particular classes or to explore their properties and their use. LODeX was tested on the 137 public SPARQL endpoints contained in Data Hub (formerly CKAN), one of the main Open Data catalogues. The statistical and structural information extraction was successfully performed on 107 sources, among these the most significant ones are included in the online version of the tool.",TRUE,acronym
R11,Science,R25720,A Visual Summary for Linked Open Data sources,S78051,R25721,System,L48861,LODeX,"In this paper we propose LODeX, a tool that produces a representative summary of a Linked open Data (LOD) source starting from scratch, thus supporting users in exploring and understanding the contents of a dataset. The tool takes in input the URL of a SPARQL endpoint and launches a set of predefined SPARQL queries, from the results of the queries it generates a visual summary of the source. The summary reports statistical and structural information of the LOD dataset and it can be browsed to focus on particular classes or to explore their properties and their use. LODeX was tested on the 137 public SPARQL endpoints contained in Data Hub (formerly CKAN), one of the main Open Data catalogues. The statistical and structural information extraction was successfully performed on 107 sources, among these the most significant ones are included in the online version of the tool.",TRUE,acronym
R11,Science,R28626,A Study of the Multi-Objective Next Release Problem,S94877,R28776,Algorithm(s),R28623,MOCell,"One of the first issues which has to be taken into account by software companies is to determine what should be included in the next release of their products, in such a way that the highest possible number of customers get satisfied while this entails a minimum cost for the company. This problem is known as the Next Release Problem (NRP). Since minimizing the total cost of including new features into a software package and maximizing the total satisfaction of customers are contradictory objectives, the problem has a multi-objective nature. In this work we study the NRP problem from the multi-objective point of view, paying attention to the quality of the obtained solutions, the number of solutions, the range of solutions covered by these fronts, and the number of optimal solutions obtained.Also, we evaluate the performance of two state-of-the-art multi-objective metaheuristics for solving NRP: NSGA-II and MOCell. The obtained results show that MOCell outperforms NSGA-II in terms of the range of solutions covered, while this latter is able of obtaining better solutions than MOCell in large instances. Furthermore, we have observed that the optimal solutions found are composed of a high percentage of low-cost requirements and, also, the requirements that produce most satisfaction on the customers.",TRUE,acronym
R11,Science,R29741,Economic Development and Environmental Quality in Nigeria: Is There an Environmental Kuznets Curve?,S98694,R29742,Shape of EKC,R29735,N-shaped,"This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. Tests for structural breaks caused by the 1973 oil price shocks and 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis shows that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two-strands of the N-shape",TRUE,acronym
R11,Science,R27251,An introduction to robot component model for opros(open platform for robotic services),S87818,R27252,Name,L54337,OPRoS,"The OPRoS(Open Platform for Robotic Service) is a platform for network based intelligent robots supported by the IT R&D program of Ministry of Knowledge Economy of KOREA. The OPRoS technology aims at establishing a component based standard software platform for the robot which enables complicated functions to be developed easily by using the standardized COTS components. The OPRoS provides a software component model for supporting reusability and compatibility of the robot software component in the heterogeneous communication network. In this paper, we will introduce the OPRoS component model and its background.",TRUE,acronym
R11,Science,R25880,Palladium nanoparticles supported on mpg-C3N4 as active catalyst for semihydrogenation of phenylacetylene under mild conditions,S79416,R25881,catalyst,L49986,Pd@mpg-C3N4,"Palladium nanoparticles supported on a mesoporous graphitic carbon nitride, Pd@mpg-C3N4, has been developed as an effective, heterogeneous catalyst for the liquid-phase semihydrogenation of phenylacetylene under mild conditions (303 K, atmospheric H2). A total conversion was achieved with high selectivity of styrene (higher than 94%) within 85 minutes. Moreover, the spent catalyst can be easily recovered by filtration and then reused nine times without apparent lose of selectivity. The generality of Pd@mpg-C3N4 catalyst for partial hydrogenation of alkynes was also checked for terminal and internal alkynes with similar performance. The Pd@mpg-C3N4 catalyst was proven to be of industrial interest.",TRUE,acronym
R11,Science,R25872,Metal-Ligand Core-Shell Nanocomposite Catalysts for the Selective Semihydrogenation of Alkynes,S79354,R25873,catalyst,L49936,Pd@MPSO/SiO2,"In recent years, hybrid nanocomposites with core–shell structures have increasingly attracted enormous attention in many important research areas such as quantum dots, optical, magnetic, and electronic devices, and catalysts. In the catalytic applications of core–shell materials, core-metals having magnetic properties enable easy separation of the catalysts from the reaction mixtures by a magnet. The core-metals can also affect the active shell-metals, delivering significant improvements in their activities and selectivities. However, it is difficult for core-metals to act directly as the catalytic active species because they are entirely covered by the shell. Thus, few successful designs of core–shell nanocomposite catalysts having active metal species in the core have appeared to date. Recently, we have demonstrated the design of a core–shell catalyst consisting of active metal nanoparticles (NPs) in the core and closely assembled oxides with nano-gaps in the shell, allowing the access of substrates to the core-metal. The shell acted as a macro ligand (shell ligand) for the core-metal and the core–shell structure maximized the metal–ligand interaction (ligand effect), promoting highly selective reactions. The design concept of core–shell catalysts having core-metal NPs with a shell ligand is highly useful for selective organic transformations owing to the ideal structure of these catalysts for maximizing the ligand effect, leading to superior catalytic performances compared to those of conventional supported metal NPs. Semihydrogenation of alkynes is a powerful tool to synthesize (Z)-alkenes which are important building blocks for fine chemicals, such as bioactive molecules, flavors, and natural products. In this context, the Lindlar catalyst (Pd/ CaCO3 treated with Pb(OAc)2) has been widely used. 
[13] Unfortunately, the Lindlar catalyst has serious drawbacks including the requirement of a toxic lead salt and the addition of large amounts of quinoline to suppress the over-hydrogenation of the product alkenes. Furthermore, the Lindlar catalyst has a limited substrate scope; terminal alkynes cannot be converted selectively into terminal alkenes because of the rapid over-hydrogenation of the resulting alkenes to alkanes. Aiming at the development of environmentally benign catalyst systems, a number of alternative lead-free catalysts have been reported.[15] Recently, we also developed a lead-free catalytic system for the selective semihydrogenation consisting of SiO2-supported Pd nanoparticles (PdNPs) and dimethylsulfoxide (DMSO), in which the addition of DMSO drastically suppressed the over-hydrogenation and isomerization of the alkene products even after complete consumption of the alkynes. This effect is due to the coordination of DMSO to the PdNPs. DMSO adsorbed on the surface of PdNPs inhibits the coordination of alkenes to the PdNPs, while alkynes can adsorb onto the PdNPs surface because they have a higher coordination ability than DMSO. This phenomenon inspired us to design PdNPs coordinated with a DMSO-like species in a solid matrix. If a core–shell structured nanocomposite involving PdNPs encapsulated by a shell having a DMSO-like species could be constructed, it would act as an efficient and functional solid catalyst for the selective semihydrogenation of alkynes. Herein, we successfully synthesized core–shell nanocomposites of PdNPs covered with a DMSO-like matrix on the surface of SiO2 (Pd@MPSO/SiO2). The shell, consisting of an alkyl sulfoxide network, acted as a macroligand and allowed the selective access of alkynes to the active center of the PdNPs, promoting the selective semihydrogenation of not only internal but also terminal alkynes without any additives. Moreover, these catalysts were reusable while maintaining high activity and selectivity. Pd@MPSO/SiO2 catalysts were synthesized as follows. Pd/SiO2 prepared according to our procedure [16] was stirred in n-heptane with small amounts of 3,5-di-tert-butyl-4-hydroxytoluene (BHT) and water at room temperature. Next, methyl 3-trimethoxysilylpropyl sulfoxide (MPSO) was added to the mixture and the mixture was heated. The slurry obtained was collected by filtration, washed, and dried in vacuo, affording Pd@MPSO/SiO2 as a gray powder. Altering the molar ratios of MPSO to Pd gave two kinds of catalysts: Pd@MPSO/SiO2-1 (MPSO:Pd = 7:1), and Pd@MPSO/SiO2-2 (MPSO:Pd = 100:1). [*] Dr. T. Mitsudome, Y. Takahashi, Dr. T. Mizugaki, Prof. Dr. K. Jitsukawa, Prof. Dr. K. Kaneda Department of Materials Engineering Science Graduate School of Engineering Science, Osaka University 1–3, Machikaneyama, Toyonaka, Osaka 560-8531 (Japan) E-mail: kaneda@cheng.es.osaka-u.ac.jp",TRUE,acronym
R11,Science,R32786,SENTINEL-1/2 DATA FOR SHIP TRAFFIC MONITORING ON THE DANUBE RIVER,S112240,R32787,Satellite sensor,R32784,Sentinel-2,"After a long period of drought, the water level of the Danube River has significantly dropped especially on the Romanian sector, in July-August 2015. Danube reached the lowest water level recorded in the last 12 years, causing the blockage of the ships in the sector located close to Zimnicea Harbour. The rising sand banks in the navigable channel congested the commercial traffic for a few days with more than 100 ships involved. The monitoring of the decreasing water level and the traffic jam was performed based on Sentinel-1 and Sentinel-2 free data provided by the European Space Agency and the European Commission within the Copernicus Programme. Specific processing methods (calibration, speckle filtering, geocoding, change detection, image classification, principal component analysis, etc.) were applied in order to generate useful products that the responsible authorities could benefit from. The Sentinel data yielded good results for water mask extraction and ships detection. The analysis continued after the closure of the crisis situation when the water reached the nominal level again. The results indicate that Sentinel data can be successfully used for ship traffic monitoring, building the foundation of future endeavours for a durable monitoring of the Danube River.",TRUE,acronym
R11,Science,R32794,A Direct and Fast Methodology for Ship Recognition in Sentinel-2 Multispectral Imagery,S112295,R32795,Satellite sensor,R32784,Sentinel-2,"The European Space Agency satellite Sentinel-2 provides multispectral images with pixel sizes down to 10 m. This high resolution allows for ship detection and recognition by determining a number of important ship parameters. We are able to show how a ship position, its heading, length and breadth can be determined down to a subpixel resolution. If the ship is moving, its velocity can also be determined from its Kelvin waves. The 13 spectrally different visual and infrared images taken using multispectral imagery (MSI) are “fingerprints” that allow for the recognition and identification of ships. Furthermore, the multispectral image profiles along the ship allow for discrimination between the ship, its turbulent wakes, and the Kelvin waves, such that the ship’s length and breadth can be determined more accurately even when sailing. The ship’s parameters are determined by using satellite imagery taken from several ships, which are then compared to known values from the automatic identification system. The agreement is on the order of the pixel resolution or better.",TRUE,acronym
R11,Science,R26637,A two-levels hierarchy for low-energy adaptive clustering hierarchy (TL-LEACH),S83789,R26638,Protocol,R26636,TL-LEACH,"Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper we propose a modification to a well-known protocol for sensor networks called Low Energy Adaptive Clustering Hierarchy (LEACH). This last is designed for sensor networks where the end-user wants to remotely monitor the environment. In such situation, the data from the individual nodes must be sent to a central base station, often located far from the sensor network, through which the end-user can access the data. In this context our contribution is represented by building a two-level hierarchy to realize a protocol that saves better the energy consumption. Our TL-LEACH uses random rotation of local cluster base stations (primary cluster-heads and secondary cluster-heads). In this way we build, where it is possible, a two-level hierarchy. This permits to better distribute the energy load among the sensors in the network especially when the density of network is higher. TL-LEACH uses localized coordination to enable scalability and robustness. We evaluated the performances of our protocol with NS-2 and we observed that our protocol outperforms the LEACH in terms of energy consumption and lifetime of the network.",TRUE,acronym
R141823,Semantic Web,R142576,A method for re-engineering a thesaurus into an ontology,S572685,R142578,data source,R142582,AGROVOC,"The construction of complex ontologies can be facilitated by adapting existing vocabularies. There is little clarity and in fact little consensus as to what modifications of vocabularies are necessary in order to re-engineer them into ontologies. In this paper we present a method that provides clear steps to follow when re-engineering a thesaurus. The method makes use of top-level ontologies and was derived from the structural differences between thesauri and ontologies as well as from best practices in modeling, some of which have been advocated in the biomedical domain. We illustrate each step of our method with examples from a re-engineering case study about agricultural fertilizers based on the AGROVOC thesaurus. Our method makes clear that re-engineering thesauri requires far more than just a syntactic conversion into a formal language or other easily automatable steps. The method can not only be used for re-engineering thesauri, but does also summarize steps for building ontologies in general, and can hence be adapted for the re-engineering of other types of vocabularies or terminologies.",TRUE,acronym
R141823,Semantic Web,R185271,Multimedia ontology learning for automatic annotation and video browsing,S709709,R185273,Output format,R185281,MOWL,"In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.",TRUE,acronym
R141823,Semantic Web,R142319,An Innovative Statistical Tool for Automatic OWL-ERD Alignment,S571985,R142321,Output format,R142338,OWL,"Aligning two representations of the same domain with different expressiveness is a crucial topic in nowadays semantic web and big data research. OWL ontologies and Entity Relation Diagrams are the most widespread representations whose alignment allows for semantic data access via ontology interface, and ontology storing techniques. The term ""alignment"" encompasses three different processes: OWL-to-ERD and ERD-to-OWL transformation, and OWL-ERD mapping. In this paper an innovative statistical tool is presented to accomplish all the three aspects of the alignment. The main idea relies on the use of a HMM to estimate the most likely ERD sentence that is stated in a suitable grammar, and corresponds to the observed OWL axiom. The system and its theoretical background are presented, and some experiments are reported.",TRUE,acronym
R141823,Semantic Web,R142323,Mapping ER Schemas to OWL Ontologies,S572080,R142325,Output format,R142338,OWL,"As the Semantic Web initiative gains momentum, a fundamental problem of integrating existing data-intensive WWW applications into the Semantic Web emerges. In order for today’s relational database supported Web applications to transparently participate in the Semantic Web, their associated database schemas need to be converted into semantically equivalent ontologies. In this paper we present a solution to an important special case of the automatic mapping problem with wide applicability: mapping well-formed Entity-Relationship (ER) schemas to semantically equivalent OWL Lite ontologies. We present a set of mapping rules that fully capture the ER schema semantics, along with an overview of an implementation of the complete mapping algorithm integrated into the current SFSU ER Design Tools software.",TRUE,acronym
R141823,Semantic Web,R142508,Automatic Domain Ontology Construction Based on Thesauri,S572475,R142510,Output format,R142528,OWL,"The research on the automatic ontology construction has become very popular. It is very useful for the ontology construction to reengineer the existing knowledge resource, such as the thesauri. But many relationships in the thesauri are incorrect or are defined too broadly. Accordingly, extracting ontological relations from the thesauri becomes very important. This paper proposes the method to reengineer the thesauri to ontology, and especially the method to how to obtain the correct semantic relations. The test result shows the accuracy of the semantic relations is 86.23%, and one is the hierarchical relations with 89.02%, and the other is non-hierarchical relations with 83.44%.",TRUE,acronym
R141823,Semantic Web,R185271,Multimedia ontology learning for automatic annotation and video browsing,S709706,R185273,Output format,R149648,OWL,"In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.",TRUE,acronym
R141823,Semantic Web,R185335,Ontology Learning Process as a Bottom-up Strategy for Building Domain-specific Ontology from Legal Texts,S709848,R185337,Output format,R185342,OWL,"The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulting ontology is considered an inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",TRUE,acronym
R141823,Semantic Web,R144129,Representing the Hierarchy of Industrial Taxonomies in OWL: The gen/tax Approach,S576886,R144131,data source,R142551,UNSPSC,"Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel “gen/tax” approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.",TRUE,acronym
R141823,Semantic Web,R180001,A Deep Learning based Approach for Precise Video Tagging,S702048,R180016,has Data Source,R180008,UCF-101,"With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1-score of 96.2%.",TRUE,acronym
R141823,Semantic Web,R144129,Representing the Hierarchy of Industrial Taxonomies in OWL: The gen/tax Approach,S576885,R144131,data source,R142554,eCl@ss,"Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel “gen/tax” approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.",TRUE,acronym
R259,Semiconductor and Optical Materials,R135948,Application of ALD-Al2O3 in CdS/CdTe Thin-Film Solar Cells,S538232,R135950,keywords,L379316,Al2O3,"The application of thinner cadmium sulfide (CdS) window layer is a feasible approach to improve the performance of cadmium telluride (CdTe) thin film solar cells. However, the reduction of compactness and continuity of thinner CdS always deteriorates the device performance. In this work, transparent Al2O3 films with different thicknesses, deposited by using atomic layer deposition (ALD), were utilized as buffer layers between the front electrode transparent conductive oxide (TCO) and CdS layers to solve this problem, and then, thin-film solar cells with a structure of TCO/Al2O3/CdS/CdTe/BC/Ni were fabricated. The characteristics of the ALD-Al2O3 films were studied by UV–visible transmittance spectrum, Raman spectroscopy, and atomic force microscopy (AFM). The light and dark J–V performances of solar cells were also measured by specific instrumentations. The transmittance measurement conducted on the TCO/Al2O3 films verified that the transmittance of TCO/Al2O3 were comparable to that of single TCO layer, meaning that no extra absorption loss occurred when Al2O3 buffer layers were introduced into cells. Furthermore, due to the advantages of the ALD method, the ALD-Al2O3 buffer layers formed an extremely continuous and uniform coverage on the substrates to effectively fill and block the tiny leakage channels in CdS/CdTe polycrystalline films and improve the characteristics of the interface between TCO and CdS. However, as the thickness of alumina increased, the negative effects of cells were gradually exposed, especially the increase of the series resistance (Rs) and the more serious “roll-over” phenomenon. Finally, the cell conversion efficiency (η) of more than 13.0% accompanied by optimized uniformity performances was successfully achieved corresponding to the 10 nm thick ALD-Al2O3 thin film.",TRUE,acronym
R259,Semiconductor and Optical Materials,R135948,Application of ALD-Al2O3 in CdS/CdTe Thin-Film Solar Cells,S538228,R135950,Solar cell structure,L379313,Al2O3/CdS/CdTe,"The application of thinner cadmium sulfide (CdS) window layer is a feasible approach to improve the performance of cadmium telluride (CdTe) thin film solar cells. However, the reduction of compactness and continuity of thinner CdS always deteriorates the device performance. In this work, transparent Al2O3 films with different thicknesses, deposited by using atomic layer deposition (ALD), were utilized as buffer layers between the front electrode transparent conductive oxide (TCO) and CdS layers to solve this problem, and then, thin-film solar cells with a structure of TCO/Al2O3/CdS/CdTe/BC/Ni were fabricated. The characteristics of the ALD-Al2O3 films were studied by UV–visible transmittance spectrum, Raman spectroscopy, and atomic force microscopy (AFM). The light and dark J–V performances of solar cells were also measured by specific instrumentations. The transmittance measurement conducted on the TCO/Al2O3 films verified that the transmittance of TCO/Al2O3 were comparable to that of single TCO layer, meaning that no extra absorption loss occurred when Al2O3 buffer layers were introduced into cells. Furthermore, due to the advantages of the ALD method, the ALD-Al2O3 buffer layers formed an extremely continuous and uniform coverage on the substrates to effectively fill and block the tiny leakage channels in CdS/CdTe polycrystalline films and improve the characteristics of the interface between TCO and CdS. However, as the thickness of alumina increased, the negative effects of cells were gradually exposed, especially the increase of the series resistance (Rs) and the more serious “roll-over” phenomenon. Finally, the cell conversion efficiency (η) of more than 13.0% accompanied by optimized uniformity performances was successfully achieved corresponding to the 10 nm thick ALD-Al2O3 thin film.",TRUE,acronym
R106,Systems Biology,R49453,MetaboMAPS: Pathway sharing and multi-omics data visualization in metabolic context,S147512,R49455,Input format,R49457,SVG,"Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.",TRUE,acronym
R106,Systems Biology,R49453,MetaboMAPS: Pathway sharing and multi-omics data visualization in metabolic context,S147514,R49455,Output format,R49457,SVG,"Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.",TRUE,acronym
R141,Theory/Algorithms,R178451,Finding a team of experts in social networks,S699921,R178455,dataset,L471079,DBLP,"Given a task T, a pool of individuals X with different skills, and a social network G that captures the compatibility among these individuals, we study the problem of finding X', a subset of X, to perform the task. We call this the TEAM FORMATION problem. We require that members of X' not only meet the skill requirements of the task, but can also work effectively together as a team. We measure effectiveness using the communication cost incurred by the subgraph in G that only involves X'. We study two variants of the problem for two different communication-cost functions, and show that both variants are NP-hard. We explore their connections with existing combinatorial problems and give novel algorithms for their solution. To the best of our knowledge, this is the first work to consider the TEAM FORMATION problem in the presence of a social network of individuals. Experiments on the DBLP dataset show that our framework works well in practice and gives useful and intuitive results.",TRUE,acronym
R374,Urban Studies and Planning,R142709,Unified IoT ontology to enable interoperability and federation of testbeds,S576211,R142852,Ontologies which have been used as referenced,R143679,DUL,"After a thorough analysis of existing Internet of Things (IoT) related ontologies, in this paper we propose a solution that aims to achieve semantic interoperability among heterogeneous testbeds. Our model is framed within the EU H2020's FIESTA-IoT project, that aims to seamlessly support the federation of testbeds through the usage of semantic-based technologies. Our proposed model (ontology) takes inspiration from the well-known Noy et al. methodology for reusing and interconnecting existing ontologies. To build the ontology, we leverage a number of core concepts from various mainstream ontologies and taxonomies, such as Semantic Sensor Network (SSN), M3-lite (a lite version of M3 and also an outcome of this study), WGS84, IoT-lite, Time, and DUL. In addition, we also introduce a set of tools that aims to help external testbeds adapt their respective datasets to the developed ontology.",TRUE,acronym
R374,Urban Studies and Planning,R142729,CityPulse: Large Scale Data Analytics Framework for Smart Cities,S576188,R143938,Ontologies which have been used as referenced,R143679,DUL,"Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people's everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. The goal is to break away from silo applications and enable cross-domain data integration. The CityPulse framework integrates multimodal, mixed quality, uncertain and incomplete data to create reliable, dependable information and continuously adapts data processing techniques to meet the quality of information requirements from end users. Different than existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support easy development of custom-made applications for citizens. The benefits and the effectiveness of the framework are demonstrated in a use-case scenario implementation presented in this paper.",TRUE,acronym
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694374,R175286,Has Virus,L466926,PLHV-1,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,acronym
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694373,R175286,Has Virus,L466925,PLHV-2,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,acronym
R57,Virology,R51373,Identification of antiviral drug candidates against SARS-CoV-2 from FDA-approved drugs,S157288,R51399,Has participant,R51403,SARS-CoV-2,"Drug repositioning is the only feasible option to immediately address the COVID-19 global challenge. We screened a panel of 48 FDA-approved drugs against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) which were preselected by an assay of SARS-CoV. We identified 24 potential antiviral drug candidates against SARS-CoV-2 infection. Some drug candidates showed very low 50% inhibitory concentrations (IC50s), and in particular, two FDA-approved drugs—niclosamide and ciclesonide—were notable in some respects.",TRUE,acronym
R57,Virology,R69999,Human organ chip-enabled pipeline to rapidly repurpose therapeutics during viral pandemics,S332566,R70000,Has participant,R51403,SARS-CoV-2,"The rising threat of pandemic viruses, such as SARS-CoV-2, requires development of new preclinical discovery platforms that can more rapidly identify therapeutics that are active in vitro and also translate in vivo. Here we show that human organ-on-a-chip (Organ Chip) microfluidic culture devices lined by highly differentiated human primary lung airway epithelium and endothelium can be used to model virus entry, replication, strain-dependent virulence, host cytokine production, and recruitment of circulating immune cells in response to infection by respiratory viruses with great pandemic potential. We provide a first demonstration of drug repurposing by using oseltamivir in influenza A virus-infected organ chip cultures and show that co-administration of the approved anticoagulant drug, nafamostat, can double oseltamivir’s therapeutic time window. With the emergence of the COVID-19 pandemic, the Airway Chips were used to assess the inhibitory activities of approved drugs that showed inhibition in traditional cell culture assays only to find that most failed when tested in the Organ Chip platform. When administered in human Airway Chips under flow at a clinically relevant dose, one drug, amodiaquine, significantly inhibited infection by a pseudotyped SARS-CoV-2 virus. Proof of concept was provided by showing that amodiaquine and its active metabolite (desethylamodiaquine) also significantly reduce viral load in both direct infection and animal-to-animal transmission models of native SARS-CoV-2 infection in hamsters. These data highlight the value of Organ Chip technology as a more stringent and physiologically relevant platform for drug repurposing, and suggest that amodiaquine should be considered for future clinical testing.",TRUE,acronym
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134228,R44090,Total Deaths,L82092,"6,450","A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,number
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134336,R44098,Total Deaths,L82164,"28,228","A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,number
R57,Virology,R12241,Report 3: transmissibility of 2019- nCoV,S123580,R12242,95% Confidence interval,L74386,1.5-3.5,"Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable – with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity – including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible.",TRUE,count/measurement
R57,Virology,R12237,"Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak",S18650,R12238,95% Confidence interval,L12272,1.96-2.55,"Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( γ ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.",TRUE,count/measurement
R57,Virology,R12231,Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions,S18603,R12232,95% Confidence interval,L12237,2.39-4.13,"Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.",TRUE,count/measurement
R57,Virology,R12237,"Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak",S18673,R12240,95% Confidence interval,L12291,2.89-4.39,"Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( γ ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.",TRUE,count/measurement
R57,Virology,R36114,Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study,S123670,R36117,95% Confidence interval,L74452,3.09-3.70,"Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.",TRUE,count/measurement
R57,Virology,R36114,Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study,S123662,R36116,95% Confidence interval,L74447,3.16-3.65,"Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.",TRUE,count/measurement
R57,Virology,R36128,Risk estimation and prediction by modeling the transmission of the novel coronavirus (COVID-19) in mainland China excluding Hubei province,S123754,R36129,95% Confidence interval,L74511,3.20-3.64,"Background: In December 2019, an outbreak of coronavirus disease (COVID-19)was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods: A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number Rc, as well as the effective daily reproduction ratio Re(t), of the disease transmission in mainland China excluding Hubei province. Results: The estimation outcomes indicate that the control reproduction number is 3.36 (95% CI 3.20-3.64) and Re(t) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. By calculating the effective reproduction ratio, we proved that the contact rate should be kept at least less than 30% of the normal level by April, 2020. 
Conclusions: To ensure the epidemic ending rapidly, it is necessary to maintain the current integrated restrict interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. If all the above conditions are met, the outbreak is expected to be ended by April in mainland China apart from Hubei province.",TRUE,count/measurement
R57,Virology,R36114,Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study,S123658,R36115,95% Confidence interval,L74445,3.63-5.13,"Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.",TRUE,count/measurement
R57,Virology,R12245,Estimation of the Transmission Risk of 2019-nCov and Its Implication for Public Health Interventions,S18715,R12246,95% Confidence interval,L12321,5.71-7.23,"English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction. 
Mandarin Abstract: 背景:自从中国武汉出现第一例肺炎病例以来,新型冠状病毒(2019-nCov)感染已迅速传播到其他省份和周边国家。通过数学模型估计基本再生数,有助于确定疫情爆发的可能性和严重性,并为确定疾病干预类型和强度提供关键信息。 方法:根据疾病的临床进展,个体的流行病学状况和干预措施,设计确定性的仓室模型。 结果:基于似然函数和模型分析的估计结果表明,控制再生数可能高达6.47(95%CI 5.71-7.23)。敏感性分析显示,密集接触追踪和隔离等干预措施可以有效减少控制再生数和传播风险,武汉封城措施对北京2019-nCov感染的影响几乎等同于增加隔离措施10万的基线值。 解释:必须评估中国当局实施的昂贵,资源密集型措施如何有助于预防和控制2019-nCov感染,以及应维持多长时间。在最严格的措施下,预计疫情将在两周内(自2020年1月23日起)达到峰值,峰值较低。与没有出行限制的情况相比,有了出行限制(即没有输入的潜伏类个体进入北京),北京的7天感染者数量将减少91.14%。",TRUE,count/measurement
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694143,R175262,Has Virus,L466719,COVID-19,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,acronym
R57,Virology,R51386,In vitro screening of a FDA approved chemical library reveals potential inhibitors of SARS-CoV-2 replication,S157306,R51404,has endpoint,R51409,EC50,"A novel coronavirus, named SARS-CoV-2, emerged in 2019 from Hubei region in China and rapidly spread worldwide. As no approved therapeutics exists to treat Covid-19, the disease associated to SARS-Cov-2, there is an urgent need to propose molecules that could quickly enter into clinics. Repurposing of approved drugs is a strategy that can bypass the time consuming stages of drug development. In this study, we screened the Prestwick Chemical Library® composed of 1,520 approved drugs in an infected cell-based assay. 90 compounds were identified. The robustness of the screen was assessed by the identification of drugs, such as Chloroquine derivatives and protease inhibitors, already in clinical trials. The hits were sorted according to their chemical composition and their known therapeutic effect, then EC50 and CC50 were determined for a subset of compounds. Several drugs, such as Azithromycine, Opipramol, Quinidine or Omeprazol present antiviral potency with 2www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. 
We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.",TRUE,acronym
R77,Animal Sciences,R44429,Pharmacokinetics of levetiracetam after oral and intravenous administration of a single dose to clinically normal cats,S134967,R44430,Disease definitions (characterization),L82356,clear,"OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean ± SD peak concentration was 25.54 ± 7.97 μg/mL. The mean y-intercept for i.v. administration was 37.52 ± 6.79 μg/mL. Half-life (harmonic mean ± pseudo-SD) was 2.95 ± 0.95 hours and 2.86 ± 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 ± 0.09 L/kg, and mean clearance was 2.0 ± 0.60 mL/kg/min. Mean oral bioavailability was 102 ± 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 μg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.",TRUE,adj
R77,Animal Sciences,R44495,Use of continuous electroencephalography for diagnosis and monitoring of treatment of nonconvulsive status epilepticus in a cat,S135710,R44508,Pre-treatment SF (seizures/ month or year),L82943,continuous,"CASE DESCRIPTION A 10-year-old domestic shorthair cat was evaluated because of presumed seizures. CLINICAL FINDINGS The cat had intermittent mydriasis, hyperthermia, and facial twitching. Findings of MRI and CSF sample analysis were unremarkable, and results of infectious disease testing were negative. Treatment was initiated with phenobarbital, zonisamide, and levetiracetam; however, the presumed seizure activity continued. Results of analysis of continuous electroencephalographic recording indicated the cat had nonconvulsive status epilepticus. TREATMENT AND OUTCOME The cat was treated with phenobarbital IV (6 mg/kg [2.7 mg/lb] q 30 min during a 9-hour period; total dose, 108 mg/kg [49.1 mg/lb]); treatment was stopped when a burst-suppression electroencephalographic pattern was detected. During this high-dose phenobarbital treatment period, an endotracheal tube was placed and the cat was monitored and received fluids, hetastarch, and dopamine IV. Continuous mechanical ventilation was not required. After treatment, the cat developed unclassified cardiomyopathy, azotemia, anemia, and pneumonia. These problems resolved during a 9-month period. CLINICAL RELEVANCE Findings for the cat of this report indicated electroencephalographic evidence of nonconvulsive status epilepticus. Administration of a high total dose of phenobarbital and monitoring of treatment by use of electroencephalography were successful for resolution of the problem, and treatment sequelae resolved.",TRUE,adj
R77,Animal Sciences,R44425,"Levetiracetam in the management of feline audiogenic reflex seizures: a randomised, controlled, open-label study",S134930,R44426,allocation concealment,L82327,high,"Objectives Currently, there are no published randomised, controlled veterinary trials evaluating the efficacy of antiepileptic medication in the treatment of myoclonic seizures. Myoclonic seizures are a hallmark of feline audiogenic seizures (FARS). Methods This prospective, randomised, open-label trial compared the efficacy and tolerability of levetiracetam (20–25 mg/kg q8h) with phenobarbital (3–5 mg/kg q12h) in cats with suspected FARS that experienced myoclonic seizures. Cats were included that had ⩾12 myoclonic seizure days during a prospective 12 week baseline period. This was followed by a 4 week titration phase (until a therapeutic serum concentration of phenobarbital was achieved) and a 12 week treatment phase. Results Fifty-seven cats completed the study: 28 in the levetiracetam group and 29 in the phenobarbital group. A reduction of ⩾50% in the number of myoclonic seizure days was seen in 100% of patients in the levetiracetam group and in 3% of patients in the phenobarbital group ( P <0.001) during the treatment period. Levetiracetam-treated cats had higher freedom from myoclonic seizures (50.0% vs 0%; P <0.001) during the treatment period. The most common adverse events were lethargy, inappetence and ataxia, with no difference in incidence between levetiracetam and phenobarbital. Adverse events were mild and transient with levetiracetam but persistent with phenobarbital. Conclusions and relevance These results suggest that levetiracetam is an effective and well tolerated treatment for cats with myoclonic seizures and is more effective than phenobarbital. Whether it will prevent the occurrence of generalised tonic–clonic seizures and other forebrain signs if used early in the course of FARS is not yet clear. ",TRUE,adj
R77,Animal Sciences,R44429,Pharmacokinetics of levetiracetam after oral and intravenous administration of a single dose to clinically normal cats,S134970,R44430,allocation concealment,L82359,high,"OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean ± SD peak concentration was 25.54 ± 7.97 μg/mL. The mean y-intercept for i.v. administration was 37.52 ± 6.79 μg/mL. Half-life (harmonic mean ± pseudo-SD) was 2.95 ± 0.95 hours and 2.86 ± 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 ± 0.09 L/kg, and mean clearance was 2.0 ± 0.60 mL/kg/min. Mean oral bioavailability was 102 ± 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 μg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.",TRUE,adj
R77,Animal Sciences,R44438,Levetiracetam as an adjunct to phenobarbital treatment in cats with suspected idiopathic epilepsy,S135068,R44439,allocation concealment,L82439,high,"OBJECTIVE To assess pharmacokinetics, efficacy, and tolerability of oral levetiracetam administered as an adjunct to phenobarbital treatment in cats with poorly controlled suspected idiopathic epilepsy. DESIGN-Open-label, noncomparative clinical trial. ANIMALS 12 cats suspected to have idiopathic epilepsy that was poorly controlled with phenobarbital or that had unacceptable adverse effects when treated with phenobarbital. PROCEDURES Cats were treated with levetiracetam (20 mg/kg [9.1 mg/lb], PO, q 8 h). After a minimum of 1 week of treatment, serum levetiracetam concentrations were measured before and 2, 4, and 6 hours after drug administration, and maximum and minimum serum concentrations and elimination half-life were calculated. Seizure frequencies before and after initiation of levetiracetam treatment were compared, and adverse effects were recorded. RESULTS Median maximum serum levetiracetam concentration was 25.5 microg/mL, median minimum serum levetiracetam concentration was 8.3 microg/mL, and median elimination half-life was 2.9 hours. Median seizure frequency prior to treatment with levetiracetam (2.1 seizures/mo) was significantly higher than median seizure frequency after initiation of levetiracetam treatment (0.42 seizures/mo), and 7 of 10 cats were classified as having responded to levetiracetam treatment (ie, reduction in seizure frequency of >or=50%). Two cats had transient lethargy and inappetence. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that levetiracetam is well tolerated in cats and may be useful as an adjunct to phenobarbital treatment in cats with idiopathic epilepsy.",TRUE,adj
R77,Animal Sciences,R44450,Pharmacokinetics of phenobarbital in the cat following intravenous and oral administration,S135189,R44451,allocation concealment,L82536,high,"Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.",TRUE,adj
R77,Animal Sciences,R44425,"Levetiracetam in the management of feline audiogenic reflex seizures: a randomised, controlled, open-label study",S134924,R44426,Blinding of outcome assessment,L82321,high,"Objectives Currently, there are no published randomised, controlled veterinary trials evaluating the efficacy of antiepileptic medication in the treatment of myoclonic seizures. Myoclonic seizures are a hallmark of feline audiogenic seizures (FARS). Methods This prospective, randomised, open-label trial compared the efficacy and tolerability of levetiracetam (20–25 mg/kg q8h) with phenobarbital (3–5 mg/kg q12h) in cats with suspected FARS that experienced myoclonic seizures. Cats were included that had ⩾12 myoclonic seizure days during a prospective 12 week baseline period. This was followed by a 4 week titration phase (until a therapeutic serum concentration of phenobarbital was achieved) and a 12 week treatment phase. Results Fifty-seven cats completed the study: 28 in the levetiracetam group and 29 in the phenobarbital group. A reduction of ⩾50% in the number of myoclonic seizure days was seen in 100% of patients in the levetiracetam group and in 3% of patients in the phenobarbital group ( P <0.001) during the treatment period. Levetiracetam-treated cats had higher freedom from myoclonic seizures (50.0% vs 0%; P <0.001) during the treatment period. The most common adverse events were lethargy, inappetence and ataxia, with no difference in incidence between levetiracetam and phenobarbital. Adverse events were mild and transient with levetiracetam but persistent with phenobarbital. Conclusions and relevance These results suggest that levetiracetam is an effective and well tolerated treatment for cats with myoclonic seizures and is more effective than phenobarbital. Whether it will prevent the occurrence of generalised tonic–clonic seizures and other forebrain signs if used early in the course of FARS is not yet clear. ",TRUE,adj
R77,Animal Sciences,R44429,Pharmacokinetics of levetiracetam after oral and intravenous administration of a single dose to clinically normal cats,S134965,R44430,Blinding of outcome assessment,L82354,high,"OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean ± SD peak concentration was 25.54 ± 7.97 μg/mL. The mean y-intercept for i.v. administration was 37.52 ± 6.79 μg/mL. Half-life (harmonic mean ± pseudo-SD) was 2.95 ± 0.95 hours and 2.86 ± 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 ± 0.09 L/kg, and mean clearance was 2.0 ± 0.60 mL/kg/min. Mean oral bioavailability was 102 ± 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 μg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.",TRUE,adj
R77,Animal Sciences,R44438,Levetiracetam as an adjunct to phenobarbital treatment in cats with suspected idiopathic epilepsy,S135063,R44439,Blinding of outcome assessment,L82434,high,"OBJECTIVE To assess pharmacokinetics, efficacy, and tolerability of oral levetiracetam administered as an adjunct to phenobarbital treatment in cats with poorly controlled suspected idiopathic epilepsy. DESIGN-Open-label, noncomparative clinical trial. ANIMALS 12 cats suspected to have idiopathic epilepsy that was poorly controlled with phenobarbital or that had unacceptable adverse effects when treated with phenobarbital. PROCEDURES Cats were treated with levetiracetam (20 mg/kg [9.1 mg/lb], PO, q 8 h). After a minimum of 1 week of treatment, serum levetiracetam concentrations were measured before and 2, 4, and 6 hours after drug administration, and maximum and minimum serum concentrations and elimination half-life were calculated. Seizure frequencies before and after initiation of levetiracetam treatment were compared, and adverse effects were recorded. RESULTS Median maximum serum levetiracetam concentration was 25.5 microg/mL, median minimum serum levetiracetam concentration was 8.3 microg/mL, and median elimination half-life was 2.9 hours. Median seizure frequency prior to treatment with levetiracetam (2.1 seizures/mo) was significantly higher than median seizure frequency after initiation of levetiracetam treatment (0.42 seizures/mo), and 7 of 10 cats were classified as having responded to levetiracetam treatment (ie, reduction in seizure frequency of >or=50%). Two cats had transient lethargy and inappetence. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that levetiracetam is well tolerated in cats and may be useful as an adjunct to phenobarbital treatment in cats with idiopathic epilepsy.",TRUE,adj
R77,Animal Sciences,R44450,Pharmacokinetics of phenobarbital in the cat following intravenous and oral administration,S135184,R44451,Blinding of outcome assessment,L82531,high,"Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.",TRUE,adj
R77,Animal Sciences,R44425,"Levetiracetam in the management of feline audiogenic reflex seizures: a randomised, controlled, open-label study",S134931,R44426,Incomplete outcome data,L82328,high,"Objectives Currently, there are no published randomised, controlled veterinary trials evaluating the efficacy of antiepileptic medication in the treatment of myoclonic seizures. Myoclonic seizures are a hallmark of feline audiogenic seizures (FARS). Methods This prospective, randomised, open-label trial compared the efficacy and tolerability of levetiracetam (20–25 mg/kg q8h) with phenobarbital (3–5 mg/kg q12h) in cats with suspected FARS that experienced myoclonic seizures. Cats were included that had ⩾12 myoclonic seizure days during a prospective 12 week baseline period. This was followed by a 4 week titration phase (until a therapeutic serum concentration of phenobarbital was achieved) and a 12 week treatment phase. Results Fifty-seven cats completed the study: 28 in the levetiracetam group and 29 in the phenobarbital group. A reduction of ⩾50% in the number of myoclonic seizure days was seen in 100% of patients in the levetiracetam group and in 3% of patients in the phenobarbital group ( P <0.001) during the treatment period. Levetiracetam-treated cats had higher freedom from myoclonic seizures (50.0% vs 0%; P <0.001) during the treatment period. The most common adverse events were lethargy, inappetence and ataxia, with no difference in incidence between levetiracetam and phenobarbital. Adverse events were mild and transient with levetiracetam but persistent with phenobarbital. Conclusions and relevance These results suggest that levetiracetam is an effective and well tolerated treatment for cats with myoclonic seizures and is more effective than phenobarbital. Whether it will prevent the occurrence of generalised tonic–clonic seizures and other forebrain signs if used early in the course of FARS is not yet clear. ",TRUE,adj
R77,Animal Sciences,R44438,Levetiracetam as an adjunct to phenobarbital treatment in cats with suspected idiopathic epilepsy,S135069,R44439,Incomplete outcome data,L82440,high,"OBJECTIVE To assess pharmacokinetics, efficacy, and tolerability of oral levetiracetam administered as an adjunct to phenobarbital treatment in cats with poorly controlled suspected idiopathic epilepsy. DESIGN-Open-label, noncomparative clinical trial. ANIMALS 12 cats suspected to have idiopathic epilepsy that was poorly controlled with phenobarbital or that had unacceptable adverse effects when treated with phenobarbital. PROCEDURES Cats were treated with levetiracetam (20 mg/kg [9.1 mg/lb], PO, q 8 h). After a minimum of 1 week of treatment, serum levetiracetam concentrations were measured before and 2, 4, and 6 hours after drug administration, and maximum and minimum serum concentrations and elimination half-life were calculated. Seizure frequencies before and after initiation of levetiracetam treatment were compared, and adverse effects were recorded. RESULTS Median maximum serum levetiracetam concentration was 25.5 microg/mL, median minimum serum levetiracetam concentration was 8.3 microg/mL, and median elimination half-life was 2.9 hours. Median seizure frequency prior to treatment with levetiracetam (2.1 seizures/mo) was significantly higher than median seizure frequency after initiation of levetiracetam treatment (0.42 seizures/mo), and 7 of 10 cats were classified as having responded to levetiracetam treatment (ie, reduction in seizure frequency of >or=50%). Two cats had transient lethargy and inappetence. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that levetiracetam is well tolerated in cats and may be useful as an adjunct to phenobarbital treatment in cats with idiopathic epilepsy.",TRUE,adj
R77,Animal Sciences,R44438,Levetiracetam as an adjunct to phenobarbital treatment in cats with suspected idiopathic epilepsy,S135067,R44439,Randomization,L82438,high,"OBJECTIVE To assess pharmacokinetics, efficacy, and tolerability of oral levetiracetam administered as an adjunct to phenobarbital treatment in cats with poorly controlled suspected idiopathic epilepsy. DESIGN-Open-label, noncomparative clinical trial. ANIMALS 12 cats suspected to have idiopathic epilepsy that was poorly controlled with phenobarbital or that had unacceptable adverse effects when treated with phenobarbital. PROCEDURES Cats were treated with levetiracetam (20 mg/kg [9.1 mg/lb], PO, q 8 h). After a minimum of 1 week of treatment, serum levetiracetam concentrations were measured before and 2, 4, and 6 hours after drug administration, and maximum and minimum serum concentrations and elimination half-life were calculated. Seizure frequencies before and after initiation of levetiracetam treatment were compared, and adverse effects were recorded. RESULTS Median maximum serum levetiracetam concentration was 25.5 microg/mL, median minimum serum levetiracetam concentration was 8.3 microg/mL, and median elimination half-life was 2.9 hours. Median seizure frequency prior to treatment with levetiracetam (2.1 seizures/mo) was significantly higher than median seizure frequency after initiation of levetiracetam treatment (0.42 seizures/mo), and 7 of 10 cats were classified as having responded to levetiracetam treatment (ie, reduction in seizure frequency of >or=50%). Two cats had transient lethargy and inappetence. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that levetiracetam is well tolerated in cats and may be useful as an adjunct to phenobarbital treatment in cats with idiopathic epilepsy.",TRUE,adj
R77,Animal Sciences,R44450,Pharmacokinetics of phenobarbital in the cat following intravenous and oral administration,S135188,R44451,Randomization,L82535,high,"Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.",TRUE,adj
R77,Animal Sciences,R44429,Pharmacokinetics of levetiracetam after oral and intravenous administration of a single dose to clinically normal cats,S134973,R44430,Selective reporting,L82361,high,"OBJECTIVE To determine whether therapeutic concentrations of levetiracetam can be achieved in cats and to establish reasonable i.v. and oral dosing intervals that would not be associated with adverse effects in cats. ANIMALS 10 healthy purpose-bred cats. PROCEDURES In a randomized crossover study, levetiracetam (20 mg/kg) was administered orally and i.v. to each cat. Blood samples were collected 0, 10, 20, and 40 minutes and 1, 1.5, 2, 3, 4, 6, 9, 12, and 24 hours after administration. Plasma levetiracetam concentrations were determined via high-performance liquid chromatography. RESULTS Mean ± SD peak concentration was 25.54 ± 7.97 μg/mL. The mean y-intercept for i.v. administration was 37.52 ± 6.79 μg/mL. Half-life (harmonic mean ± pseudo-SD) was 2.95 ± 0.95 hours and 2.86 ± 0.65 hours for oral and i.v. administration, respectively. Mean volume of distribution at steady state was 0.52 ± 0.09 L/kg, and mean clearance was 2.0 ± 0.60 mL/kg/min. Mean oral bioavailability was 102 ± 39%. Plasma drug concentrations were maintained in the therapeutic range reported for humans (5 to 45 μg/mL) for at least 9 hours after administration in 7 of 10 cats. Only mild, transient hypersalivation was evident in some cats after oral administration. CONCLUSIONS AND CLINICAL RELEVANCE Levetiracetam (20 mg/kg) administered orally or i.v. to cats every 8 hours should achieve and maintain concentrations within the therapeutic range for humans. Levetiracetam administration has favorable pharmacokinetics for clinical use, was apparently tolerated well, and may be a reasonable alternative antiepileptic drug in cats.",TRUE,adj
R77,Animal Sciences,R44438,Levetiracetam as an adjunct to phenobarbital treatment in cats with suspected idiopathic epilepsy,S135071,R44439,Selective reporting,L82441,high,"OBJECTIVE To assess pharmacokinetics, efficacy, and tolerability of oral levetiracetam administered as an adjunct to phenobarbital treatment in cats with poorly controlled suspected idiopathic epilepsy. DESIGN-Open-label, noncomparative clinical trial. ANIMALS 12 cats suspected to have idiopathic epilepsy that was poorly controlled with phenobarbital or that had unacceptable adverse effects when treated with phenobarbital. PROCEDURES Cats were treated with levetiracetam (20 mg/kg [9.1 mg/lb], PO, q 8 h). After a minimum of 1 week of treatment, serum levetiracetam concentrations were measured before and 2, 4, and 6 hours after drug administration, and maximum and minimum serum concentrations and elimination half-life were calculated. Seizure frequencies before and after initiation of levetiracetam treatment were compared, and adverse effects were recorded. RESULTS Median maximum serum levetiracetam concentration was 25.5 microg/mL, median minimum serum levetiracetam concentration was 8.3 microg/mL, and median elimination half-life was 2.9 hours. Median seizure frequency prior to treatment with levetiracetam (2.1 seizures/mo) was significantly higher than median seizure frequency after initiation of levetiracetam treatment (0.42 seizures/mo), and 7 of 10 cats were classified as having responded to levetiracetam treatment (ie, reduction in seizure frequency of >or=50%). Two cats had transient lethargy and inappetence. CONCLUSIONS AND CLINICAL RELEVANCE Results suggested that levetiracetam is well tolerated in cats and may be useful as an adjunct to phenobarbital treatment in cats with idiopathic epilepsy.",TRUE,adj
R77,Animal Sciences,R44450,Pharmacokinetics of phenobarbital in the cat following intravenous and oral administration,S135192,R44451,Selective reporting,L82538,high,"Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.",TRUE,adj
R77,Animal Sciences,R44446,Pharmacokinetics and toxicity of zonisamide in cats,S135150,R44447,Incomplete outcome data,L82505,low,"With the eventual goal of making zonisamide (ZNS), a relatively new antiepileptic drug, available for the treatment of epilepsy in cats, the pharmacokinetics after a single oral administration at 10 mg/kg and the toxicity after 9-week daily administration of 20 mg/kg/day of ZNS were studied in healthy cats. Pharmacokinetic parameters obtained with a single administration of ZNS at 10 mg/day were as follows: C max =13.1 μg/ml; T max =4.0 h; T 1/2 =33.0 h; areas under the curves (AUCs)=720.3 μg/mlh (values represent the medians). The study with daily administrations revealed that the toxicity of ZNS was comparatively low in cats, suggesting that it may be an available drug for cats. However, half of the cats that were administered 20 mg/kg/day daily showed adverse reactions such as anorexia, diarrhoea, vomiting, somnolence and locomotor ataxia.",TRUE,adj
R77,Animal Sciences,R44450,Pharmacokinetics of phenobarbital in the cat following intravenous and oral administration,S135190,R44451,Incomplete outcome data,L82537,low,"Phenobarbital was administered to eight healthy cats as a single intravenous dose of 10 mg/kg. Serum phenobarbital concentrations were determined using an immunoassay technique. The intravenous data were fitted to one-, two- and three-compartment models. After statistical comparison of the three models, a two-compartment model was selected. Following intravenous administration, the drug was rapidly distributed (distribution half-life = 0.046 +/- 0.007 h) with a large apparent volume of distribution (931 +/- 44.8 mL/kg). Subsequent elimination of phenobarbital from the body was slow (elimination half-life = 58.8 +/- 4.21 h). Three weeks later, a single oral dose of phenobarbital (10 mg/kg) was administered to the same group of cats. A one-compartment model with an input component was used to describe the results. After oral administration, the initial rapid absorption phase (absorption half-life = 0.382 +/- 0.099 h) was followed by a plateau in the serum concentration (13.5 +/- 0.148 micrograms/mL) for approximately 10 h. The half-life of the terminal elimination phase (76.1 +/- 6.96 h) was not significantly different from the half-life determined for the intravenous route. Bioavailability of the oral drug was high (F = 1.20 +/- 0.120). Based on the pharmacokinetic parameters determined in this study, phenobarbital appears to be a suitable drug for use as an anticonvulsant in the cat.",TRUE,adj
R77,Animal Sciences,R44452,Pharmacokinetics of phenobarbital in the cat following multiple oral administration,S135211,R44453,Incomplete outcome data,L82554,low,Phenobarbital was administered orally to seven healthy cats at a dose of 5 mg/kg once a day for 21 days. Serum phenobarbital concentrations were determined using a commercial immunoassay technique. A one-compartment model was used to describe the final elimination curve. The elimination half-life (t1/2 b) after the final day of treatment was 43.3 +/- 2.92 h. The large apparent volume of distribution of 695.0 +/- 43.9 mL/kg suggests that the drug was widely distributed within the body. The t1/2 b following multiple oral administration was significantly shorter than previously reported for a single oral dose of phenobarbital in the cat. Analysis of pharmacokinetic results after days 1 and 21 of treatment suggested that the elimination kinetics of phenobarbital did not change significantly with multiple oral administration. It appears that differences in elimination kinetics can exist between populations of cats. These differences emphasize the need for individual monitoring of cats receiving phenobarbital.,TRUE,adj
R77,Animal Sciences,R44425,"Levetiracetam in the management of feline audiogenic reflex seizures: a randomised, controlled, open-label study",S134929,R44426,Randomization,L82326,low,"Objectives Currently, there are no published randomised, controlled veterinary trials evaluating the efficacy of antiepileptic medication in the treatment of myoclonic seizures. Myoclonic seizures are a hallmark of feline audiogenic seizures (FARS). Methods This prospective, randomised, open-label trial compared the efficacy and tolerability of levetiracetam (20–25 mg/kg q8h) with phenobarbital (3–5 mg/kg q12h) in cats with suspected FARS that experienced myoclonic seizures. Cats were included that had ⩾12 myoclonic seizure days during a prospective 12 week baseline period. This was followed by a 4 week titration phase (until a therapeutic serum concentration of phenobarbital was achieved) and a 12 week treatment phase. Results Fifty-seven cats completed the study: 28 in the levetiracetam group and 29 in the phenobarbital group. A reduction of ⩾50% in the number of myoclonic seizure days was seen in 100% of patients in the levetiracetam group and in 3% of patients in the phenobarbital group ( P <0.001) during the treatment period. Levetiracetam-treated cats had higher freedom from myoclonic seizures (50.0% vs 0%; P <0.001) during the treatment period. The most common adverse events were lethargy, inappetence and ataxia, with no difference in incidence between levetiracetam and phenobarbital. Adverse events were mild and transient with levetiracetam but persistent with phenobarbital. Conclusions and relevance These results suggest that levetiracetam is an effective and well tolerated treatment for cats with myoclonic seizures and is more effective than phenobarbital. Whether it will prevent the occurrence of generalised tonic–clonic seizures and other forebrain signs if used early in the course of FARS is not yet clear. ",TRUE,adj
R77,Animal Sciences,R44425,"Levetiracetam in the management of feline audiogenic reflex seizures: a randomised, controlled, open-label study",S134933,R44426,Selective reporting,L82329,low,"Objectives Currently, there are no published randomised, controlled veterinary trials evaluating the efficacy of antiepileptic medication in the treatment of myoclonic seizures. Myoclonic seizures are a hallmark of feline audiogenic seizures (FARS). Methods This prospective, randomised, open-label trial compared the efficacy and tolerability of levetiracetam (20–25 mg/kg q8h) with phenobarbital (3–5 mg/kg q12h) in cats with suspected FARS that experienced myoclonic seizures. Cats were included that had ⩾12 myoclonic seizure days during a prospective 12 week baseline period. This was followed by a 4 week titration phase (until a therapeutic serum concentration of phenobarbital was achieved) and a 12 week treatment phase. Results Fifty-seven cats completed the study: 28 in the levetiracetam group and 29 in the phenobarbital group. A reduction of ⩾50% in the number of myoclonic seizure days was seen in 100% of patients in the levetiracetam group and in 3% of patients in the phenobarbital group ( P <0.001) during the treatment period. Levetiracetam-treated cats had higher freedom from myoclonic seizures (50.0% vs 0%; P <0.001) during the treatment period. The most common adverse events were lethargy, inappetence and ataxia, with no difference in incidence between levetiracetam and phenobarbital. Adverse events were mild and transient with levetiracetam but persistent with phenobarbital. Conclusions and relevance These results suggest that levetiracetam is an effective and well tolerated treatment for cats with myoclonic seizures and is more effective than phenobarbital. Whether it will prevent the occurrence of generalised tonic–clonic seizures and other forebrain signs if used early in the course of FARS is not yet clear. ",TRUE,adj
R77,Animal Sciences,R44479,Treatment and long-term follow-up of cats with suspected primary epilepsy,S135417,R44480,Study groups,L82706,moderate,"We report an evaluation of the treatment and outcome of cats with suspected primary epilepsy. Phenobarbital therapy was used alone or in combination with other anti-epileptic drugs. Outcome after treatment was evaluated mainly on the basis of number of seizures per year and categorised into four groups: seizure-free, good control (1–5 seizures per year), moderate control (6–10 seizures per year) and poor control (more than 10 seizures per year). About 40–50% of cases became seizure-free, 20–30% were considered good-to-moderately controlled and about 30% were poorly controlled depending on the year of treatment considered. The duration of seizure events after treatment decreased in 26/36 cats and was unchanged in eight cats. The subjective severity of seizure also decreased in 25 cats and was unchanged in nine cats. Twenty-six cats had a good quality of life, nine cats an impaired quality of life and one cat a bad quality of life. Despite being free of seizures for years, cessation of treatment may lead to recurrence of seizures in most cats.",TRUE,adj
R77,Animal Sciences,R44483,Clinical characterization of epilepsy of unknown cause in cats,S135450,R44484,Disease definitions (characterization),L82731,well,"Background The diagnosis of feline epilepsy of unknown cause (EUC) requires a thorough diagnostic evaluation, otherwise the prevalence of EUC could be overestimated. Hypothesis Feline EUC is a clinically defined disease entity, which differs from feline hippocampal necrosis by the absence of magnetic resonance imaging (MRI) signal alteration of the hippocampus. The objectives of this study were (1) to evaluate the prevalence of EUC in a hospital population of cats by applying well‐defined inclusion criteria, and (2) to describe the clinical course of EUC. Animals Eighty‐one cats with recurrent seizures. Methods Retrospective study—medical records were reviewed for cats presented for evaluation of recurrent seizures (2005–2010). Inclusion criteria were a defined diagnosis based on laboratory data, and either MRI or histopathology. Final outcome was confirmed by telephone interview with the owner. Magnetic resonance images were reviewed to evaluate hippocampal morphology and signal alterations. Results Epilepsy of unknown cause was diagnosed in 22% of cats with epilepsy. Physical, neurologic, and laboratory examinations, and either 1.5 T MRI and cerebrospinal fluid analysis or postmortem examination failed to identify an underlying cause. Cats with EUC had a higher survival rate (P < .05) and seizure remission occurred frequently (44.4%). Conclusion and Clinical Importance A detailed clinical evaluation and diagnostic imaging with MRI is recommended in any cat with recurrent seizures. The prognosis of cats with normal MRI findings and a clinical diagnosis of EUC are good. Standardized imaging guidelines should be established to assess the hippocampus in cats.",TRUE,adj
R133,Artificial Intelligence,R139899,Building ontologies from XML data sources,S560449,R139900,rule,R140420,Automatic,"In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.",TRUE,adj
R133,Artificial Intelligence,R139901,Transforming XML schema to OWL using patterns,S560450,R139902,rule,R140420,Automatic,"One of the promises of the Semantic Web is to support applications that easily and seamlessly deal with heterogeneous data. Most data on the Web, however, is in the Extensible Markup Language (XML) format, but using XML requires applications to understand the format of each data source that they access. To achieve the benefits of the Semantic Web involves transforming XML into the Semantic Web language, OWL (Ontology Web Language), a process that generally has manual or only semi-automatic components. In this paper we present a set of patterns that enable the direct, automatic transformation from XML Schema into OWL allowing the integration of much XML data in the Semantic Web. We focus on an advanced logical representation of XML Schema components and present an implementation, including a comparison with related work.",TRUE,adj
R133,Artificial Intelligence,R139903,An efficient XML to OWL converter,S560451,R139904,rule,R140420,Automatic,"XML has become the de-facto standard of data exchange format in E-businesses. Although XML can support syntactic inter-operability, problems arise when data sources represented as XML documents are needed to be integrated. The reason is that XML lacks support for efficient sharing of conceptualization. The Web Ontology Language (OWL) can play an important role here as it can enable semantic inter-operability, and it supports the representation of domain knowledge using classes, properties and instances for applications. In many applications it is required to convert huge XML documents automatically to OWL ontologies, which is receiving a lot of attention. There are some existing converters for this job. Unfortunately they have serious shortcomings, e. g., they do not address the handling of characteristics like internal references, (transitive) import(s), include etc. which are commonly used in XML Schemas. To alleviate these drawbacks, we propose a new framework for mapping XML to OWL automatically. We illustrate our technique on examples to show the efficacy of our approach. We also provide the performance measures of our approach on some standard datasets. We also check the correctness of the conversion process.",TRUE,adj
R133,Artificial Intelligence,R139905,Automatic generation of OWL ontology from XML data source,S560452,R139906,rule,R140420,Automatic,"The eXtensible Markup Language (XML) can be used as data exchange format in different domains. It allows different parties to exchange data by providing common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for reasoning. Ontology can provide a semantic representation of domain knowledge which supports efficient reasoning and expressive power. One of the most popular ontology languages is the Web Ontology Language (OWL). It can represent domain knowledge using classes, properties, axioms and instances for the use in a distributed environment such as the World Wide Web. This paper presents a new method for automatic generation of OWL ontology from XML data sources.",TRUE,adj
R133,Artificial Intelligence,R139907,Automatic transforming XML documents into OWL Ontology,S560453,R139908,rule,R140420,Automatic,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,adj
R133,Artificial Intelligence,R69558,A framework for explainable deep neural models using external knowledge graphs,S330305,R69559,Machine Learning Model Integration,L240561,external,"Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as “black-box” models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model.",TRUE,adj
R133,Artificial Intelligence,R69597,Fvqa: Fact-based visual question answering,S330546,R69598,Machine Learning Model Integration,L240656,external,"Visual Question Answering (VQA) has attracted much attention in both computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. Each supporting-fact is represented as a structural triplet, such as . We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting-facts.",TRUE,adj
R133,Artificial Intelligence,R69619,Knowledge-driven stock trend prediction and explanation via temporal convolutional network,S330707,R69620,Machine Learning Model Integration,L240719,external,"Deep neural networks have achieved promising results in stock trend prediction. However, most of these models have two common drawbacks, including (i) current methods are not sensitive enough to abrupt changes of stock trend, and (ii) forecasting results are not interpretable for humans. To address these two problems, we propose a novel Knowledge-Driven Temporal Convolutional Network (KDTCN) for stock trend prediction and explanation. Firstly, we extract structured events from financial news, and utilize external knowledge from knowledge graph to obtain event embeddings. Then, we combine event embeddings and price values together to forecast stock trend. We evaluate the prediction accuracy to show how knowledge-driven events work on abrupt changes. We also visualize the effect of events and linkage among events based on knowledge graph, to explain why knowledge-driven events are common sources of abrupt changes. Experiments demonstrate that KDTCN can (i) react to abrupt changes much faster and outperform state-of-the-art methods on stock datasets, as well as (ii) facilitate the explanation of prediction particularly with abrupt changes.",TRUE,adj
R133,Artificial Intelligence,R69623,Knowledge-based transfer learning explanation,S330743,R69624,Machine Learning Model Integration,L240733,external,"Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of dataset and prediction task) to enhance prediction model training in another learning domain. In this paper, we propose an ontology-based approach for human-centric explanation of transfer learning. Three kinds of knowledge-based explanatory evidence, with different granularities, including general factors, particular narrators and core contexts are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DBpedia has presented their confidence and availability in explaining the transferability of feature representation in flight departure delay forecasting.",TRUE,adj
R133,Artificial Intelligence,R76400,SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection,S351495,R76979,Languages,R6222,German,"Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.",TRUE,adj
R133,Artificial Intelligence,R76413,UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection,S351716,R77001,Languages,R6222,German,"In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation; and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.",TRUE,adj
R133,Artificial Intelligence,R69619,Knowledge-driven stock trend prediction and explanation via temporal convolutional network,S330716,R69620,Machine Learning Input,R69550,raw,"Deep neural networks have achieved promising results in stock trend prediction. However, most of these models have two common drawbacks, including (i) current methods are not sensitive enough to abrupt changes of stock trend, and (ii) forecasting results are not interpretable for humans. To address these two problems, we propose a novel Knowledge-Driven Temporal Convolutional Network (KDTCN) for stock trend prediction and explanation. Firstly, we extract structured events from financial news, and utilize external knowledge from knowledge graph to obtain event embeddings. Then, we combine event embeddings and price values together to forecast stock trend. We evaluate the prediction accuracy to show how knowledge-driven events work on abrupt changes. We also visualize the effect of events and linkage among events based on knowledge graph, to explain why knowledge-driven events are common sources of abrupt changes. Experiments demonstrate that KDTCN can (i) react to abrupt changes much faster and outperform state-of-the-art methods on stock datasets, as well as (ii) facilitate the explanation of prediction particularly with abrupt changes.",TRUE,adj
R133,Artificial Intelligence,R69657,Knowledge-based interactive postmining of association rules using ontologies,S330990,R69658,Machine Learning Input,R69550,raw,"In Data Mining, the usefulness of association rules is strongly limited by the huge amount of delivered rules. To overcome this drawback, several methods were proposed in the literature such as itemset concise representations, redundancy reduction, and postprocessing. However, being generally based on statistical information, most of these methods do not guarantee that the extracted rules are interesting for the user. Thus, it is crucial to help the decision-maker with an efficient postprocessing step in order to reduce the number of rules. This paper proposes a new interactive approach to prune and filter discovered rules. First, we propose to use ontologies in order to improve the integration of user knowledge in the postprocessing task. Second, we propose the Rule Schema formalism extending the specification language proposed by Liu et al. for user expectations. Furthermore, an interactive framework is designed to assist the user throughout the analyzing task. Applying our new approach over voluminous sets of rules, we were able, by integrating domain expert knowledge in the postprocessing step, to reduce the number of rules to several dozens or less. Moreover, the quality of the filtered rules was validated by the domain expert at various points in the interactive process.",TRUE,adj
R133,Artificial Intelligence,R69665,"Interpreting data mining results with linked data for learning analytics: motivation, case study and direction",S331062,R69666,Machine Learning Input,R69550,raw,"Learning Analytics by nature relies on computational information processing activities intended to extract from raw data some interesting aspects that can be used to obtain insights into the behaviours of learners, the design of learning experiences, etc. There is a large variety of computational techniques that can be employed, all with interesting properties, but it is the interpretation of their results that really forms the core of the analytics process. In this paper, we look at a specific data mining method, namely sequential pattern extraction, and we demonstrate an approach that exploits available linked open data for this interpretation task. Indeed, we show through a case study relying on data about students' enrolment in course modules how linked data can be used to provide a variety of additional dimensions through which the results of the data mining method can be explored, providing, at interpretation time, new input into the analytics process.",TRUE,adj
R133,Artificial Intelligence,R182316,Automatic Chinese food identification and quantity estimation,S705255,R182318,Annotation,R182314,Label,"Computer-aided food identification and quantity estimation have caught more attention in recent years because of the growing concern of our health. The identification problem is usually defined as an image categorization or classification problem and several researches have been proposed. In this paper, we address the issues of feature descriptors in the food identification problem and introduce a preliminary approach for the quantity estimation using depth information. Sparse coding is utilized in the SIFT and Local binary pattern feature descriptors, and these features combined with gabor and color features are used to represent food items. A multi-label SVM classifier is trained for each feature, and these classifiers are combined with multi-class Adaboost algorithm. For evaluation, 50 categories of worldwide food are used, and each category contains 100 photographs from different sources, such as manually taken or from Internet web albums. An overall accuracy of 68.3% is achieved, and success at top-N candidates achieved 80.6%, 84.8%, and 90.9% accuracy accordingly when N equals 2, 3, and 5, thus making mobile application practical. The experimental results show that the proposed methods greatly improve the performance of original SIFT and LBP feature descriptors. On the other hand, for quantity estimation using depth information, a straight forward method is proposed for certain food, while transparent food ingredients such as pure water and cooked rice are temporarily excluded.",TRUE,adj
R175,"Atomic, Molecular and Optical Physics",R184047,Theoretical energies for the n = 1 and 2 states of the helium isoelectronic sequence up to Z = 100,S707171,R184049,Paper type,L478087,Theoretical,"The unified method described previously for combining high-precision nonrelativistic variational calculations with relativistic and quantum electrodynamic corrections is applied to the 1s2 1S0, 1s2s 1S0, 1s2s 3S1, 1s2p 1P1, and 1s2p 3P0,1,2 states of helium-like ions. Detailed tabulations are presented for all ions in the range 2 ≤ Z ≤ 100 and are compared with a wide range of experimental data up to Kr34+. The results for U90+ significantly alter the recent Lamb shift measurement of Munger and Gould from 70.4 ± 8.3 to 71.0 ± 8.3 eV, in comparison with a revised theoretical value of 74.3 ± 0.4 eV. The improved agreement is due to the inclusion of higher order two-electron corrections in the present work.",TRUE,adj
R175,"Atomic, Molecular and Optical Physics",R185065,Theoretical energies for the n = 1 and 2 states of the helium isoelectronic sequence up to Z = 100,S709137,R185067,Paper type,L479054,Theoretical,"The unified method described previously for combining high-precision nonrelativistic variational calculations with relativistic and quantum electrodynamic corrections is applied to the 1s2 1S0, 1s2s 1S0, 1s2s 3S1, 1s2p 1P1, and 1s2p 3P0,1,2 states of helium-like ions. Detailed tabulations are presented for all ions in the range 2 ≤ Z ≤ 100 and are compared with a wide range of experimental data up to Kr34+. The results for U90+ significantly alter the recent Lamb shift measurement of Munger and Gould from 70.4 ± 8.3 to 71.0 ± 8.3 eV, in comparison with a revised theoretical value of 74.3 ± 0.4 eV. The improved agreement is due to the inclusion of higher order two-electron corrections in the present work.",TRUE,adj
R175,"Atomic, Molecular and Optical Physics",R185098,Theoretical energies for the n = 1 and 2 states of the helium isoelectronic sequence up to Z = 100,S709196,R185100,Paper type,L479073,Theoretical,"The unified method described previously for combining high-precision nonrelativistic variational calculations with relativistic and quantum electrodynamic corrections is applied to the 1s2 1S0, 1s2s 1S0, 1s2s 3S1, 1s2p 1P1, and 1s2p 3P0,1,2 states of helium-like ions. Detailed tabulations are presented for all ions in the range 2 ≤ Z ≤ 100 and are compared with a wide range of experimental data up to Kr34+. The results for U90+ significantly alter the recent Lamb shift measurement of Munger and Gould from 70.4 ± 8.3 to 71.0 ± 8.3 eV, in comparison with a revised theoretical value of 74.3 ± 0.4 eV. The improved agreement is due to the inclusion of higher order two-electron corrections in the present work.",TRUE,adj
R175,"Atomic, Molecular and Optical Physics",R185172,Theoretical energies for the n = 1 and 2 states of the helium isoelectronic sequence up to Z = 100,S709337,R185174,Paper type,L479123,Theoretical,"The unified method described previously for combining high-precision nonrelativistic variational calculations with relativistic and quantum electrodynamic corrections is applied to the 1s2 1S0, 1s2s 1S0, 1s2s 3S1, 1s2p 1P1, and 1s2p 3P0,1,2 states of helium-like ions. Detailed tabulations are presented for all ions in the range 2 ≤ Z ≤ 100 and are compared with a wide range of experimental data up to Kr34+. The results for U90+ significantly alter the recent Lamb shift measurement of Munger and Gould from 70.4 ± 8.3 to 71.0 ± 8.3 eV, in comparison with a revised theoretical value of 74.3 ± 0.4 eV. The improved agreement is due to the inclusion of higher order two-electron corrections in the present work.",TRUE,adj
R175,"Atomic, Molecular and Optical Physics",R185211,Theoretical energies for the n = 1 and 2 states of the helium isoelectronic sequence up to Z = 100,S709412,R185213,Paper type,L479152,Theoretical,"The unified method described previously for combining high-precision nonrelativistic variational calculations with relativistic and quantum electrodynamic corrections is applied to the 1s2 1S0, 1s2s 1S0, 1s2s 3S1, 1s2p 1P1, and 1s2p 3P0,1,2 states of helium-like ions. Detailed tabulations are presented for all ions in the range 2 ≤ Z ≤ 100 and are compared with a wide range of experimental data up to Kr34+. The results for U90+ significantly alter the recent Lamb shift measurement of Munger and Gould from 70.4 ± 8.3 to 71.0 ± 8.3 eV, in comparison with a revised theoretical value of 74.3 ± 0.4 eV. The improved agreement is due to the inclusion of higher order two-electron corrections in the present work.",TRUE,adj
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693797,R175178,Subject Label,R175190,American,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,adj
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693794,R175178,Subject Label,R175187,Clinical,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,adj
R104,Bioinformatics,R168547,CeleST: Computer Vision Software for Quantitative Analysis of C. elegans Swim Behavior Reveals Novel Features of Locomotion,S668430,R168548,creates,R166949,CeleST,"In the effort to define genes and specific neuronal circuits that control behavior and plasticity, the capacity for high-precision automated analysis of behavior is essential. We report on comprehensive computer vision software for analysis of swimming locomotion of C. elegans, a simple animal model initially developed to facilitate elaboration of genetic influences on behavior. C. elegans swim test software CeleST tracks swimming of multiple animals, measures 10 novel parameters of swim behavior that can fully report dynamic changes in posture and speed, and generates data in several analysis formats, complete with statistics. Our measures of swim locomotion utilize a deformable model approach and a novel mathematical analysis of curvature maps that enable even irregular patterns and dynamic changes to be scored without need for thresholding or dropping outlier swimmers from study. Operation of CeleST is mostly automated and only requires minimal investigator interventions, such as the selection of videotaped swim trials and choice of data output format. Data can be analyzed from the level of the single animal to populations of thousands. We document how the CeleST program reveals unexpected preferences for specific swim “gaits” in wild-type C. elegans, uncovers previously unknown mutant phenotypes, efficiently tracks changes in aging populations, and distinguishes “graceful” from poor aging. The sensitivity, dynamic range, and comprehensive nature of CeleST measures elevate swim locomotion analysis to a new level of ease, economy, and detail that enables behavioral plasticity resulting from genetic, cellular, or experience manipulation to be analyzed in ways not previously possible.",TRUE,adj
R104,Bioinformatics,R168521,Chaste: An Open Source C++ Library for Computational Physiology and Biology,S668337,R168526,deposits,R166935,Chaste,"Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.",TRUE,adj
R122,Chemistry,R46076,One-step hydrothermal synthesis of N-doped TiO2/C nanocomposites with high visible light photocatalytic activity,S140156,R46077,chemical doping method,L86062,hydrothermal,"N-doped TiO(2) nanoparticles modified with carbon (denoted N-TiO(2)/C) were successfully prepared by a facile one-pot hydrothermal treatment in the presence of L-lysine, which acts as a ligand to control the nanocrystal growth and as a source of nitrogen and carbon. As-prepared nanocomposites were characterized by thermogravimetric analysis (TGA), X-ray diffraction (XRD), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), Raman spectroscopy, ultraviolet-visible (UV-vis) diffuse reflectance spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR), electron paramagnetic resonance (EPR) spectra, and N(2) adsorption-desorption analysis. The photocatalytic activities of the as-prepared photocatalysts were measured by the degradation of methyl orange (MO) under visible light irradiation at λ≥ 400 nm. The results show that N-TiO(2)/C nanocomposites increase absorption in the visible light region and exhibit a higher photocatalytic activity than pure TiO(2), commercial P25 and previously reported N-doped TiO(2) photocatalysts. We have demonstrated that the nitrogen was doped into the lattice and the carbon species were modified on the surface of the photocatalysts. N-doping narrows the band gap and C-modification enhances the visible light harvesting and accelerates the separation of the photo-generated electrons and holes. As a consequence, the photocatalytic activity is significantly improved. The molar ratio of L-lysine/TiCl(4) and the pH of the hydrothermal reaction solution are important factors affecting the photocatalytic activity of the N-TiO(2)/C; the optimum molar ratio of L-lysine/TiCl(4) is 8 and the optimum pH is ca. 4, at which the catalyst exhibits the highest reactivity. Our findings demonstrate that the as-obtained N-TiO(2)/C photocatalyst is a better and more promising candidate than well studied N-doped TiO(2) alternatives as visible light photocatalysts for potential applications in environmental purification.",TRUE,adj
R122,Chemistry,R46082,Formation of New Structures and Their Synergistic Effects in Boron and Nitrogen Codoped TiO2 for Enhancement of Photocatalytic Performance,S140211,R46083,chemical doping method,L86105,hydrothermal,"A novel double hydrothermal method to prepare the boron and nitrogen codoped TiO2 is developed. Two different ways have been used for the synthesis of the catalysts, one through the addition of boron followed by nitrogen, and the other through the addition of nitrogen first and then by boron. The X-ray photoelectron spectroscopy analysis indicates the synergistic effect of boron and nitrogen with the formation of Ti−B−N−Ti and Ti−N−B−O compounds on the surface of catalysts when nitrogen is introduced to the materials first. When the boron is added first, only Ti−N−B−O species occurs on the surface of catalysts. The above two compounds are all thought to enhance the photocatalytic activities of codoped TiO2. Density functional theory simulations are also performed to investigate the B−N synergistic effect. For the (101) surface, the formation of Ti−B−N−Ti structures gives rise to the localized states within the TiO2 band gap.",TRUE,adj
R122,Chemistry,R46084,"Electrical Properties of Nb‐, Ga‐, and Y‐Substituted Nanocrystalline Anatase TiO2 Prepared by Hydrothermal Synthesis",S140229,R46086,chemical doping method,L86119,hydrothermal,"Nanocrystalline anatase titanium dioxide powders were produced by a hydrothermal synthesis route in pure form and substituted with trivalent Ga3+ and Y3+ or pentavalent Nb5+ with the intention of creating acceptor or donor states, respectively. The electrical conductivity of each powder was measured using the powder-solution-composite (PSC) method. The conductivity increased with the addition of Nb5+ from 3 × 10^-3 S/cm to 10 × 10^-3 S/cm in as-prepared powders, and from 0.3 × 10^-3 S/cm to 0.9 × 10^-3 S/cm in heat-treated powders (520 °C, 1 h). In contrast, substitution with Ga3+ and Y3+ had no measureable effect on the material's conductivity. The lack of change with the addition of Ga3+ and Y3+, and relatively small increase upon Nb5+ addition is attributed to ionic compensation owing to the highly oxidizing nature of hydrothermal synthesis.",TRUE,adj
R122,Chemistry,R46087,"Preparation, Photocatalytic Activity, and Mechanism of Nano-TiO2 Co-Doped with Nitrogen and Iron (III)",S140247,R46088,chemical doping method,L86132,hydrothermal,"Nanoparticles of titanium dioxide co-doped with nitrogen and iron (III) were first prepared using the homogeneous precipitation-hydrothermal method. The structure and properties of the co-doped TiO2 were studied by XRD, XPS, Raman, FL, and UV-diffuse reflectance spectra. By analyzing the structures and photocatalytic activities of the undoped and nitrogen and/or Fe3+-doped TiO2 under ultraviolet and visible light irradiation, the probable mechanism of co-doped particles was investigated. It is presumed that the nitrogen and Fe3+ ion doping induced the formation of new states close to the valence band and conduction band, respectively. The co-operation of the nitrogen and Fe3+ ion leads to the much narrowing of the band gap and greatly improves the photocatalytic activity in the visible light region. Meanwhile, the co-doping can also promote the separation of the photogenerated electrons and holes to accelerate the transmission of photocurrent carrier. The photocatalyst co-doped with nitrogen and 0.5% Fe3+ sho...",TRUE,adj
R322,Computational Linguistics,R110753,Generating Abstractive Summaries from Meeting Transcripts,S504686,R110755,Summarization Type,L364528,Abstractive,"Summaries of meetings are very important as they convey the essential content of discussions in a concise form. Both participants and non-participants are interested in the summaries of meetings to plan for their future work. Generally, it is time-consuming to read and understand the whole documents. Therefore, summaries play an important role as the readers are interested in only the important context of discussions. In this work, we address the task of meeting document summarization. Automatic summarization systems on meeting conversations developed so far have been primarily extractive, resulting in unacceptable summaries that are hard to read. The extracted utterances contain disfluencies that affect the quality of the extractive summaries. To make summaries much more readable, we propose an approach to generating abstractive summaries by fusing important content from several utterances. We first separate meeting transcripts into various topic segments, and then identify the important utterances in each segment using a supervised learning approach. The important utterances are then combined together to generate a one-sentence summary. In the text generation step, the dependency parses of the utterances in each segment are combined together to create a directed graph. The most informative and well-formed sub-graph obtained by integer linear programming (ILP) is selected to generate a one-sentence summary for each topic segment. The ILP formulation reduces disfluencies by leveraging grammatical relations that are more prominent in non-conversational style of text, and therefore generates summaries that are comparable to human-written abstractive summaries. Experimental results show that our method can generate more informative summaries than the baselines. 
In addition, readability assessments by human judges as well as log-likelihood estimates obtained from the dependency parser show that our generated summaries are significantly readable and well-formed.",TRUE,adj
R322,Computational Linguistics,R110767,Abstractive Meeting Summarization Using Dependency Graph Fusion,S504762,R110769,Summarization Type,L364582,Abstractive,"Automatic summarization techniques on meeting conversations developed so far have been primarily extractive, resulting in poor summaries. To improve this, we propose an approach to generate abstractive summaries by fusing important content from several utterances. Any meeting is generally comprised of several discussion topic segments. For each topic segment within a meeting conversation, we aim to generate a one sentence summary from the most important utterances using an integer linear programming-based sentence fusion approach. Experimental results show that our method can generate more informative summaries than the baselines.",TRUE,adj
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595273,R148452,Concept types,R148454,Complex,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,adj
R322,Computational Linguistics,R111985,Domain-Independent Abstract Generation for Focused Meeting Summarization,S509281,R111987,Data Domain,L366699,Different,"We address the challenge of generating natural language abstractive summaries for spoken meetings in a domain-independent fashion. We apply Multiple-Sequence Alignment to induce abstract generation templates that can be used for different domains. An Overgenerate-and-Rank strategy is utilized to produce and rank candidate abstracts. Experiments using in-domain and out-of-domain training on disparate corpora show that our system uniformly outperforms state-of-the-art supervised extract-based approaches. In addition, human judges rate our system summaries significantly higher than compared systems in fluency and overall quality.",TRUE,adj
R322,Computational Linguistics,R110733,Extractive Summarization of Meeting Recordings,S504600,R110735,Summarization Type,L364465,Extractive,"Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",TRUE,adj
R322,Computational Linguistics,R164218,The GENIA corpus: an annotated research abstract corpus in molecular biology domain,S655641,R164220,Concept types,R164228,Other,"With the information overload in the genome-related field, there is an increasing need for natural language processing technology to extract information from the literature, and various attempts at information extraction using NLP have been made. We are developing the necessary resources including domain ontology and annotated corpus from research abstracts in MEDLINE database (GENIA corpus). We are building the ontology and the corpus simultaneously, using each other. In this paper we report on our new corpus, its ontological basis, annotation scheme, and statistics of annotated objects. We also describe the tools used for corpus annotation and management.",TRUE,adj
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593947,R148133,Semantic roles,R148144,Temporal,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66%–90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,adj
R231,Computer and Systems Architecture,R175456,A deep learning framework for character motion synthesis and editing,S695224,R175458,Activity,L467361,Kicking,"We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.",TRUE,adj
R230,Computer Engineering,R74463,Application of data anonymization in Learning Analytics,S497561,R109101,Publication Stage,L360297,Final,"Thanks to the proliferation of academic services on the Web and the opening of educational content, today, students can access a large number of free learning resources, and interact with value-added services. In this context, Learning Analytics can be carried out on a large scale thanks to the proliferation of open practices that promote the sharing of datasets. However, the opening or sharing of data managed through platforms and educational services, without considering the protection of users' sensitive data, could cause some privacy issues. Data anonymization is a strategy that should be adopted during the lifecycle of data processing to reduce security risks. In this research, we try to characterize how much and how the anonymization techniques have been used in learning analytics proposals. From an initial exploration made in the Scopus database, we found that less than 6% of the papers focused on LA have also covered the privacy issue. Finally, through a specific case, we applied data anonymization and learning analytics to demonstrate that both techniques can be integrated, in a reliable and effective way, to support decision making in educational institutions.",TRUE,adj
R230,Computer Engineering,R74473,Ontology of personal learning environments in the development of thesis project,S497627,R109106,Publication Stage,L360353,Final,"The thesis is the final step in academic formation of students. Its development may experience some difficulties that cause delays in delivery times. The internet allows students to access relevant and large amounts of information for use in the development of their theses. The internet also allows students to interact with others in order to create knowledge networks. However, exposure to too much information can produce infoxication. Therefore, there is a need to organise such information and technological resources. Through a personal learning environment (PLE), students can use current technology and online resources to develop their projects. Furthermore, by means of an ontological model, the underlying knowledge in the domain and environment can be represented in a readable format for machines. This paper presents an ontological model called PLET4Thesis, which has been designed in order to organise the process of thesis development using the elements required to create a PLE.",TRUE,adj
R230,Computer Engineering,R74490,Systematic Search Process for Doctoral Theses in Centralized Repositories. A Study Case in the Context of Educational Innovation generated by ICT,S497717,R109116,Publication Stage,L360429,Final,"Educational innovation is a set of ideas, processes, and strategies applied in academic centers to improve teaching and learning processes. The application of Communication and Information Technologies in the field of educational innovation has been fundamental to promote changes that lead to the improvement of administrative and academic processes. However, some studies show the deficient and not very innovative use of technologies in the Latin American region. To determine the existing gap, the authors are executing a project that tries to know the current state of the art on this topic. This study corresponds to the first step of the project. Here, the authors describe the systematic search process designed to find the doctoral theses that have been developed on educational innovation generated by ICT. To meet this objective, the process has three phases: (1) identification of centralized repositories of doctoral theses in Spanish, (2) evaluation of selected repositories according to specific search criteria, and (3) selecting centralized repositories where it is possible to find theses on educational innovation and ICT. The analysis of 5 of the 222 repositories found indicates that each system offers different search characteristics, and the results show that the best option is to combine the sets of results since the theses come from different institutions. Finally, considering that the abilities of users to manage information systems can be diverse, providers and administrators of repositories should enhance their search services in such a way that all users can find and use the resources published.",TRUE,adj
R132,Computer Sciences,R36093,TableSeer: automatic table metadata extraction and searching in digital libraries,S123517,R36094,Method automation,R25382,Automatic,"Tables are ubiquitous in digital libraries. In scientific documents, tables are widely used to present experimental results or statistical data in a condensed fashion. However, current search engines do not support table search. The difficulty of automatically extracting tables from un-tagged documents, the lack of a universal table metadata specification, and the limitation of the existing ranking schemes make the table search problem challenging. In this paper, we describe TableSeer, a search engine for tables. TableSeer crawls digital libraries, detects tables from documents, extracts table metadata, indexes and ranks tables, and provides a user-friendly search interface. We propose an extensive set of medium-independent metadata for tables that scientists and other users can adopt for representing table information. In addition, we devise a novel page box-cutting method to improve the performance of the table detection. Given a query, TableSeer ranks the matched tables using an innovative ranking algorithm - TableRank. TableRank rates each ⟨query, table⟩ pair with a tailored vector space model and a specific term weighting scheme. Overall, TableSeer eliminates the burden of manually extracting table data from digital libraries and enables users to automatically examine tables. We demonstrate the value of TableSeer with empirical studies on scientific documents.",TRUE,adj
R132,Computer Sciences,R129508,Deeper Task-Specificity Improves Joint Entity and Relation Extraction,S515053,R129509,has model,R116621,Deeper,"Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models necessitates deciding which and how many parameters should be task-specific, as opposed to shared between tasks. We investigate this issue for the problem of jointly learning named entity recognition (NER) and relation extraction (RE) and propose a novel neural architecture that allows for deeper task-specificity than does prior work. In particular, we introduce additional task-specific bidirectional RNN layers for both the NER and RE tasks and tune the number of shared and task-specific layers separately for different datasets. We achieve state-of-the-art (SOTA) results for both tasks on the ADE dataset; on the CoNLL04 dataset, we achieve SOTA results on the NER task and competitive results on the RE task while using an order of magnitude fewer trainable parameters than the current SOTA architecture. An ablation study confirms the importance of the additional task-specific layers for achieving these results. Our work suggests that previous solutions to joint NER and RE undervalue task-specificity and demonstrates the importance of correctly balancing the number of shared and task-specific parameters for MTL approaches in general.",TRUE,adj
R417,Cultural History,R139761,The Story of the Markham Car Collection: A Cross-Platform Panoramic Tour of Contested Heritage,S558088,R139763,has stakeholder,R139838,Professional,"In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph’s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum’s veteran and vintage car collection. The production’s usability was investigated involving five experts before it was published online and the general users’ experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.",TRUE,adj
R135,Databases/Information Systems,R6116,A Real-time Heuristic-based Unsupervised Method for Name Disambiguation in Digital Libraries,S6361,R6117,Method,R6112,Heuristic-based,"This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users' interactions in order to include users' feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers' names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.",TRUE,adj
R234,Digital Communications and Networking,R11004,Webly Supervised Joint Embedding for Cross-Modal Image-Text Retrieval,S18306,R12096,Has value,R12138,Visual,"Cross-modal retrieval between visual data and natural language description remains a long-standing challenge in multimedia. While recent image-text retrieval methods offer great promise by learning deep representations aligned across modalities, most of these methods are plagued by the issue of training with small-scale datasets covering a limited number of images with ground-truth sentences. Moreover, it is extremely expensive to create a larger dataset by annotating millions of images with sentences and may lead to a biased model. Inspired by the recent success of webly supervised learning in deep neural networks, we capitalize on readily-available web images with noisy annotations to learn robust image-text joint representation. Specifically, our main idea is to leverage web images and corresponding tags, along with fully annotated datasets, in training for learning the visual-semantic joint embedding. We propose a two-stage approach for the task that can augment a typical supervised pair-wise ranking loss based formulation with weakly-annotated web images to learn a more robust visual-semantic embedding. Experiments on two standard benchmark datasets demonstrate that our method achieves a significant performance gain in image-text retrieval compared to state-of-the-art approaches.",TRUE,adj
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145437,DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae),S624545,R155749,Biogeographical region,R155753,Afrotropical,"The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. 
In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.",TRUE,adj
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146646,Comprehensive evaluation of DNA barcoding for the molecular species identification of forensically important Australian Sarcophagidae (Diptera),S623922,R155647,Biogeographical region,R155656,Australian,"Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81–11.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.",TRUE,adj
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142471,DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits,S624768,R155793,Biogeographical region,R149569,Nearctic,"Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate “species” diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). 
Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.",TRUE,adj
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145506,Identification of Nearctic black flies using DNA barcodes (Diptera: Simuliidae),S624288,R155698,Biogeographical region,R149569,Nearctic,"DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615‐bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83–15.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0–3.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58–6.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.",TRUE,adj
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146643,Revision of Nearctic Dasysyrphus Enderlein (Diptera: Syrphidae),S624023,R155663,Biogeographical region,R149569,Nearctic,"Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to their lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I—COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed.",TRUE,adj
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142535,DNA Barcodes for the Northern European Tachinid Flies (Diptera: Tachinidae),S624683,R155773,Biogeographical region,R155685,Palearctic,"This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera.",TRUE,adj
R24,Ecology and Evolutionary Biology,R54770,Disturbance-mediated competition and the spread of Phragmites australis in a coastal marsh,S174026,R54771,Habitat,L107370,Terrestrial,"In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. 
australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.",TRUE,adj
R24,Ecology and Evolutionary Biology,R54956,"The vulnerability of habitats to plant invasion: disentangling the roles of propagule pressure, time and sampling effort",S175702,R54957,Habitat,L108526,Terrestrial,"Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. 
Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. This information suggests that such habitats may be useful targets for weed surveillance and monitoring.",TRUE,adj
R24,Ecology and Evolutionary Biology,R56734,Hawaiian ant-flower networks. Nectar-thieving ants prefer undefended native over introduced plants with floral defense,S189241,R56735,Habitat,L117867,Terrestrial,"Ants are omnipresent in most terrestrial ecosystems, and plants have responded to their dominance by evolving traits that either facilitate positive interactions with ants or reduce negative ones. Because ants are generally poor pollinators, plants often protect their floral nectar against ants. Ants were historically absent from the geographically isolated Hawaiian archipelago, which harbors one of the most endemic floras in the world. We hypothesized that native Hawaiian plants lack floral features that exclude ants and therefore would be heavily exploited by introduced, invasive ants. To test this hypothesis, ant–flower interactions involving co-occurring native and introduced plants were observed in 10 sites on three Hawaiian Islands. We quantified the residual interaction strength of each pair of ant–plant species as the deviation of the observed interaction frequency from a null-model prediction based on available nectar sugar in a local plant community and local ant activity at sugar baits. As pred...",TRUE,adj
R24,Ecology and Evolutionary Biology,R57010,"Alien flora of Europe: species diversity, temporal trends, geographical patterns and research needs",S192601,R57015,Habitat,L120463,Terrestrial,"The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004–2008), funded by the 6th Framework Programme of the European Union and aimed at “creating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments”. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964–1980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. 
The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). 
Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprising the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. 
This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.",TRUE,adj
R24,Ecology and Evolutionary Biology,R57075,"How well do we understand the impacts of alien species on ecosystem services? A pan-European, cross-taxa assessment",S193326,R57078,Habitat,L121062,Terrestrial,"Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates – in terrestrial, freshwater, and marine environments – on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect “supporting”, “provisioning”, “regulating”, and “cultural” services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.",TRUE,adj
R24,Ecology and Evolutionary Biology,R57596,Post-dispersal losses to seed predators: an experimental comparison of native and exotic old field plants,S198201,R57597,Habitat,L124375,Terrestrial,"Invasions by exotic plants may be more likely if exotics have low rates of attack by natural enemies, including post-dispersal seed predators (granivores). We investigated this idea with a field experiment conducted near Newmarket, Ontario, in which we experimentally excluded vertebrate and terrestrial insect seed predators from seeds of 43 native and exotic old-field plants. Protection from vertebrates significantly increased recovery of seeds; vertebrate exclusion produced higher recovery than controls for 30 of the experimental species, increasing overall seed recovery from 38.2 to 45.6%. Losses to vertebrates varied among species, significantly increasing with seed mass. In contrast, insect exclusion did not significantly improve seed recovery. There was no evidence that aliens benefitted from a reduced rate of post-dispersal seed predation. The impacts of seed predators did not differ significantly between natives and exotics, which instead showed very similar responses to predator exclusion treatments. These results indicate that while vertebrate granivores had important impacts, especially on large-seeded species, exotics did not generally benefit from reduced rates of seed predation. Instead, differences between natives and exotics were small compared with interspecific variation within these groups. Résumé : L'invasion par les plantes adventices est plus plausible si ces plantes ont peu d'ennemis naturels, incluant les prédateurs post-dispersion des graines (granivores). 
Les auteurs ont examiné cette idée lors d'une expérience sur le terrain, conduite près de Newmarket en Ontario, dans laquelle ils ont expérimentalement empêché les prédateurs de graines, vertébrés et insectes terrestres, d'avoir accès aux graines de 43 espèces de plantes indigènes ou exotiques, de vieilles prairies. La protection contre les vertébrés augmente significativement la survie des graines; l'exclusion permet de récupérer plus de graines comparativement aux témoins chez 30 espèces de plantes expérimentales, avec une augmentation générale de récupération allant de 38.2 à 45.6%. Les pertes occasionnées par les vertébrés varient selon les espèces, augmentant significativement avec la grosseur des graines. Au contraire, l'exclusion des insectes n'augmente pas significativement les nombres de graines récupérées. Il n'y a pas de preuve que les adventices auraient bénéficié d'une réduction du taux de prédation post-dispersion des graines. Les impacts des prédateurs de graines ne diffèrent pas significativement entre les espèces indigènes et introduites, qui montrent au contraire des réactions très similaires aux traitements d'exclusion des prédateurs. Ces résultats indiquent que bien que les granivores vertébrés aient des impacts importants, surtout sur les espèces à grosses graines, les plantes introduites ne bénéficient généralement pas de taux réduits de prédation des graines. Au contraire, les différences entre les plantes indigènes et les plantes introduites sont petites comparativement à la variation interspécifique à l'intérieur de chacun de ces groupes. Mots clés : adventices, exotiques, granivores, envahisseurs, vieilles prairies, prédateurs de graines. (Traduit par la Rédaction) Blaney and Kotanen 292",TRUE,adj
R24,Ecology and Evolutionary Biology,R144046,Land Use and Avian Species Diversity Along an Urban Gradient,S576579,R144048,Hypothesis type,R144050,Urban,"I examined the distribution and abundance of bird species across an urban gradient, and concomitant changes in community structure, by censusing summer resident bird populations at six sites in Santa Clara County, California (all former oak woodlands). These sites represented a gradient of urban land use that ranged from relatively undisturbed to highly developed, and included a biological preserve, recreational area, golf course, residential neighborhood, office park, and business district. The composition of the bird community shifted from predominantly native species in the undisturbed area to invasive and exotic species in the business district. Species richness, Shannon diversity, and bird biomass peaked at moderately disturbed sites. One or more species reached maximal densities in each of the sites, and some species were restricted to a given site. The predevelopment bird species (assumed to be those found at the most undisturbed site) dropped out gradually as the sites became more urban. These patterns were significantly related to shifts in habitat structure that occurred along the gradient, as determined by canonical correspondence analysis (CCA) using the environmental variables of percent land covered by pavement, buildings, lawn, grasslands, and trees or shrubs. I compared each formal site to four additional sites with similar levels of development within a two-county area to verify that the bird communities at the formal study sites were representative of their land use category.",TRUE,adj
R24,Ecology and Evolutionary Biology,R55092,Inferring Process from Pattern in Plant Invasions: A Semimechanistic Model Incorporating Propagule Pressure and Environmental Factors,S177222,R55093,Measure of invasion success,L109774,Cover,"Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa’s Agulhas Plain (2,160 km2). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species.",TRUE,adj
R24,Ecology and Evolutionary Biology,R55129,Propagule pressure and resource availability determine plant community invasibility in a temperate forest understorey,S177636,R55130,Measure of invasion success,L110107,Cover,"Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m 2 sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.",TRUE,adj
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717769,R187531,observation type,R187525,academic,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,adj
R194,Engineering,R141133,A High-Power Temperature-Stable Electrostatic RF MEMS Capacitive Switch Based on a Thermal Buckle-Beam Design,S564119,R141135,keywords,L395879,Electrostatic,"This paper presents the design, fabrication and measurements of a novel vertical electrostatic RF MEMS switch which utilizes the lateral thermal buckle-beam actuator design in order to reduce the switch sensitivity to thermal stresses. The effect of biaxial and stress gradients are taken into consideration, and the buckle-beam designs show minimal sensitivity to these stresses. Several switches with 4, 8, and 12 suspension beams are presented. All the switches demonstrate a low sensitivity to temperature, and the variation in the pull-in voltage is ~ -50 mV/°C from 25-125°C. The change in the up-state capacitance for the same temperature range is < ±3%. The switches also exhibit excellent RF and mechanical performances, and a capacitance ratio of ~ 20-23 (Cu = 85-115 fF, Cd = 1.7-2.6 pF) with Q > 150 at 10 GHz in the up-state position is reported. The mechanical resonant frequencies and quality factors are f0 = 60-160 kHz and Qm = 2.3-4.5, respectively. The measured switching and release times are ~ 2-5 μs and ~ 5-6.5 μs, respectively. Power handling measurements show good stability with ~ 4 W of incident power at 10 GHz.",TRUE,adj
R194,Engineering,R135556,Flexible capacitive pressure sensor with sensitivity and linear measuring range enhanced based on porous composite of carbon conductive paste and polydimethylsiloxane,S536347,R135559,keywords,R135589,Flexible,"In recent years, the development of electronic skin and smart wearable body sensors has put forward high requirements for flexible pressure sensors with high sensitivity and large linear measuring range. However, it turns out to be difficult to increase both of them simultaneously. In this paper, a flexible capacitive pressure sensor based on porous carbon conductive paste-PDMS composite is reported, the sensitivity and the linear measuring range of which were developed using multiple methods including adjusting the stiffness of the dielectric layer material, fabricating micro-structure and increasing dielectric permittivity of dielectric layer. The capacitive pressure sensor reported here has a relatively high sensitivity of 1.1 kPa-1 and a large linear measuring range of 10 kPa, making the product of the sensitivity and linear measuring range 11, which is higher than that of most reported capacitive pressure sensors to our best knowledge. The sensor has a detection limit of 4 Pa, response time of 60 ms and great stability. Some potential applications of the sensor were demonstrated, such as arterial pulse wave measuring and breath measuring, which makes it a promising candidate for wearable biomedical devices. In addition, a pressure sensor array based on the material was also fabricated and it could identify objects in the shape of different letters clearly, which shows a promising application in future electronic skins.",TRUE,adj
R194,Engineering,R141656,Natural Biowaste-Cocoon-Derived Granular Activated Carbon-Coated ZnO Nanorods: A Simple Route To Synthesizing a Core–Shell Structure and Its Highly Enhanced UV and Hydrogen Sensing Properties,S567951,R141660,Method of nanomaterial synthesis,L398730,Hydrothermal,"Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs) and is exposed by systematic material analysis. The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W-1, which is superior to that of as-grown ZNRs (0.6 A W-1). Besides this, switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the finest distribution of GAC on ZNRs results in rapid electron transportation between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also well-retained excellent sensitivity, repeatability, and long-term stability. Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive.",TRUE,adj
R33,Epidemiology,R142094,"A National Iranian Cochlear Implant Registry (ICIR): cochlear implanted recipient
observational study",S570896,R142096,has Application Scope,L400753,Clinical,"BACKGROUND AND OBJECTIVE Patients who receive cochlear implants (CIs) constitute a significant population in Iran. This population needs regular monitoring of long-term outcomes, educational placement and quality of life. Currently, there is no national or regional registry on the long-term outcomes of CI users in Iran. The present study aims to introduce the design and implementation of a national patient-outcomes registry on CI recipients for Iran. This Iranian CI registry (ICIR) provides an integrated framework for data collection and sharing, scientific communication and collaboration in CI research. METHODS The national ICIR is a prospective patient-outcomes registry for patients who are implanted in one of the Iranian centers. The registry is based on an integrated database that utilizes a secure web-based platform to collect response data from clinicians and patient's proxy via electronic case report forms (e-CRFs) at predefined intervals. The CI candidates are evaluated with a set of standardized and non-standardized questionnaires prior to initial device activation (as baseline variables) and at three-monthly follow-up intervals up to 24 months and annually thereafter. RESULTS The software application of the ICIR registry is designed in a user-friendly graphical interface with different entry fields. The collected data are categorized into four subsets including personal information, clinical data, surgery data and commission results. The main parameters include audiometric performance of the patient, device use, patient comorbidities, quality of life and health-related utilities, across different types of CI devices from different manufacturers. CONCLUSION The ICIR database could be used by the increasingly growing network of CI centers in Iran. 
Clinicians, academic and industrial researchers as well as healthcare policy makers could use this database to develop more effective CI devices and better management of the recipients as well as to develop national guidelines.",TRUE,adj
R33,Epidemiology,R142068,"Diseases and Health Outcomes Registry Systems in I.R. Iran: Successful Initiative to Improve Public Health Programs, Quality of Care, and Biomedical Research",S570844,R142071,Geographical scope,L400710,National,"Registration systems for diseases and other health outcomes provide an important resource for biomedical research, as well as tools for public health surveillance and improvement of quality of care. The Ministry of Health and Medical Education (MOHME) of Iran launched a national program to establish registration systems for different diseases and health outcomes. Based on the national program, we organized several workshops and training programs and disseminated the concepts and knowledge of the registration systems. Following a call for proposals, we received 100 applications and, after thorough evaluation and corrections by the principal investigators, we approved and granted about 80 registries for three years. Having a strong steering committee, a committed executive and scientific group, establishing national and international collaboration, stating clear objectives, applying feasible software, and considering stable financing were key components for a successful registry and were considered in the evaluation processes. We paid particular attention to non-communicable diseases, which constitute an emerging public health problem. We prioritized the establishment of regional population-based cancer registries (PBCRs) in 10 provinces in collaboration with the International Agency for Research on Cancer. This initiative was successful and registry programs became popular among researchers and research centers and created several national and international collaborations in different areas to answer important public health and clinical questions. In this paper, we report the details of the program and the list of registries that were granted in the first round.",TRUE,adj
R33,Epidemiology,R142094,"A National Iranian Cochlear Implant Registry (ICIR): cochlear implanted recipient
observational study",S570895,R142096,Geographical scope,L400752,Regional,"BACKGROUND AND OBJECTIVE Patients who receive cochlear implants (CIs) constitute a significant population in Iran. This population needs regular monitoring of long-term outcomes, educational placement and quality of life. Currently, there is no national or regional registry on the long-term outcomes of CI users in Iran. The present study aims to introduce the design and implementation of a national patient-outcomes registry on CI recipients for Iran. This Iranian CI registry (ICIR) provides an integrated framework for data collection and sharing, scientific communication and collaboration in CI research. METHODS The national ICIR is a prospective patient-outcomes registry for patients who are implanted in one of the Iranian centers. The registry is based on an integrated database that utilizes a secure web-based platform to collect response data from clinicians and patients' proxies via electronic case report forms (e-CRFs) at predefined intervals. The CI candidates are evaluated with a set of standardized and non-standardized questionnaires prior to initial device activation (as baseline variables), at three-month follow-up intervals up to 24 months, and annually thereafter. RESULTS The software application of the ICIR registry is designed with a user-friendly graphical interface with different entry fields. The collected data are categorized into four subsets including personal information, clinical data, surgery data and commission results. The main parameters include audiometric performance of the patient, device use, patient comorbidities, quality of life and health-related utilities, across different types of CI devices from different manufacturers. CONCLUSION The ICIR database could be used by the increasingly growing network of CI centers in Iran. 
Clinicians, academic and industrial researchers, as well as healthcare policy makers, could use this database to develop more effective CI devices, improve management of the recipients, and develop national guidelines.",TRUE,adj
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492581,R108130,Minerals Mapped/ Identified,L357082,Muscovite,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4–2.5-μm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperion's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. 
Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,adj
R136,Graphics,R6515,Formal Linked Data Visualization Model,S77685,R25679,Domain,R25666,generic,"Recently, the amount of semantic data available on the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows data to be dynamically connected with visualizations. We report on our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview of, visualize, and explore the Data Web and perform detailed analyses on Linked Data.",TRUE,adj
R278,Information Science,R46656,CRFS-based Chinese named entity recognition with improved tag set,S142942,R46657,Language/domain,L87934,Chinese,Chinese named entity recognition is one of the most important tasks in NLP. The paper mainly describes our work on NER tasks. The paper builds a system under the framework of the Conditional Random Fields (CRFs) model. With an improved tag set the system achieves an F-value of 93.49 on the SIGHAN2007 MSRA corpus.,TRUE,adj
R278,Information Science,R68408,"A Systematic Review of Information Literacy Programs in Higher Education: Effects of Face-to-Face, Online, and Blended Formats on Student Skills and Views",S325358,R68411,Has method,R68415,Meta-analysis,"Objective – Evidence from systematic reviews a decade ago suggested that face-to-face and online methods to provide information literacy training in universities were equally effective in terms of skills learnt, but there was a lack of robust comparative research. The objectives of this review were (1) to update these findings with the inclusion of more recent primary research; (2) to further enhance the summary of existing evidence by including studies of blended formats (with components of both online and face-to-face teaching) compared to single format education; and (3) to explore student views on the various formats employed. Methods – Authors searched seven databases along with a range of supplementary search methods to identify comparative research studies, dated January 1995 to October 2016, exploring skill outcomes for students enrolled in higher education programs. There were 33 studies included, of which 19 also contained comparative data on student views. Where feasible, meta-analyses were carried out to provide summary estimates of skills development and a thematic analysis was completed to identify student views across the different formats. Results – A large majority of studies (27 of 33; 82%) found no statistically significant difference between formats in skills outcomes for students. Of 13 studies that could be included in a meta-analysis, the standardized mean difference (SMD) between skill test results for face-to-face versus online formats was -0.01 (95% confidence interval -0.28 to 0.26). Of ten studies comparing blended to single delivery format, seven (70%) found no statistically significant difference between formats, and the remaining studies had mixed outcomes. 
From the limited evidence available across all studies, there is a potential dichotomy between outcomes measured via skill test and assignment (course work) which is worthy of further investigation. The thematic analysis of student views found no preference in relation to format on a range of measures in 14 of 19 studies (74%). The remainder identified that students perceived advantages and disadvantages for each format but had no overall preference. Conclusions – There is compelling evidence that information literacy training is effective and well received across a range of delivery formats. Further research looking at blended versus single format methods, and the time implications for each, as well as comparing assignment to skill test outcomes would be valuable. Future studies should adopt a methodologically robust design (such as the randomized controlled trial) with a large student population and validated outcome measures.",TRUE,adj
R278,Information Science,R4464,"""Yeah, I Guess That's Data"": Data Practices and Conceptions among Humanities Faculty",S4746,R4467,has methodology,R4471,Qualitative,"abstract:The ability to interact with and properly manage data in a research setting has become increasingly important in all disciplines. Libraries are attempting to identify their role in providing data management services. However, humanities faculty's conceptions of data and their data management practices are not well-known. This qualitative study explores the data management practices of humanities faculty at a four-year university and examines their perceptions of the term data.",TRUE,adj
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351830,R76425,Has preprocessing steps,R77054,lemmatised,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,adj
R137681,"Information Systems, Process and Knowledge Management",R166497,Softcite dataset: A dataset of software mentions in biomedical and economic research publications,S663201,R166503,Data domains,R147727,Biomedicine,"Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold‐standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.",TRUE,adj
R137681,"Information Systems, Process and Knowledge Management",R156129,DIGITAL MANUFACTURING: REQUIREMENTS AND CHALLENGES FOR IMPLEMENTING DIGITAL SURROGATES,S627008,R156131,has Temporal Integration,R156136,Real-time,"A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a ""digital twin"" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.",TRUE,adj
R128,Inorganic Chemistry,R160606,Etching Silicon with Aqueous Acidic Ozone Solutions: Reactivity Studies and Surface Investigations,S640753,R160639,Type of etching mixture,R160643,acidic,"Aqueous acidic ozone (O3)-containing solutions are increasingly used for silicon treatment in photovoltaic and semiconductor industries. We studied the behavior of aqueous hydrofluoric acid (HF)-containing solutions (i.e., HF–O3, HF–H2SO4–O3, and HF–HCl–O3 mixtures) toward boron-doped solar-grade (100) silicon wafers. The solubility of O3 and etching rates at 20 °C were investigated. The mixtures were analyzed for the potential oxidizing species by UV–vis and Raman spectroscopy. Concentrations of O3 (aq), O3 (g), and Cl2 (aq) were determined by titrimetric volumetric analysis. F–, Cl–, and SO42– ion contents were determined by ion chromatography. Model experiments were performed to investigate the oxidation of H-terminated silicon surfaces by H2O–O2, H2O–O3, H2O–H2SO4–O3, and H2O–HCl–O3 mixtures. The oxidation was monitored by diffuse reflection infrared Fourier transformation (DRIFT) spectroscopy. The resulting surfaces were examined by scanning electron microscopy (SEM) and X-ray photoelectron spectrosc...",TRUE,adj
R128,Inorganic Chemistry,R160686,Modified TMAH based etchant for improved etching characteristics on Si{1 0 0} wafer,S640850,R160688,Type of etching,R160671,wet,"Wet bulk micromachining is a popular technique for the fabrication of microstructures in research labs as well as in industry. However, increasing the throughput still remains an active area of research, and can be done by increasing the etching rate. Moreover, the release time of a freestanding structure can be reduced if the undercutting rate at convex corners can be improved. In this paper, we investigate a non-conventional etchant in the form of NH2OH added in 5 wt% tetramethylammonium hydroxide (TMAH) to determine its etching characteristics. Our analysis is focused on a Si{1 0 0} wafer as this is the most widely used in the fabrication of planar devices (e.g. complementary metal oxide semiconductors) and microelectromechanical systems (e.g. inertial sensors). We perform a systematic and parametric analysis with concentrations of NH2OH varying from 5% to 20% in steps of 5%, all in 5 wt% TMAH, to obtain the optimum concentration for achieving improved etching characteristics including higher etch rate, undercutting at convex corners, and smooth etched surface morphology. Average surface roughness (Ra), etch depth, and undercutting length are measured using a 3D scanning laser microscope. Surface morphology of the etched Si{1 0 0} surface is examined using a scanning electron microscope. Our investigation has revealed a two-fold increment in the etch rate of a {1 0 0} surface with the addition of NH2OH in the TMAH solution. Additionally, the incorporation of NH2OH significantly improves the etched surface morphology and the undercutting at convex corners, which is highly desirable for the quick release of microstructures from the substrate. 
The results presented in this paper are extremely useful for engineering applications and will open a new direction of research for scientists in both academic and industrial laboratories.",TRUE,adj
R12,Life Sciences,R78055,"Predict new cases of the coronavirus 19; in Michigan, U.S.A. or other countries using Crow-AMSAA method",S353658,R78057,Used models,L251308,Crow-AMSAA ," BACKGROUND Statistical predictions are useful to predict events based on statistical models. The data is useful to determine outcomes based on inputs and calculations. The Crow-AMSAA method will be explored to predict new cases of Coronavirus 19 (COVID19). This method is currently used within engineering reliability design to predict failures and evaluate the reliability growth. The author intends to use this model to predict the COVID19 cases by using daily reported data from Michigan, New York City, U.S.A and other countries. The piecewise Crow-AMSAA (CA) model fits the data very well for the infected cases and deaths at different phases during the start of the COVID19 outbreak. The slope β of the Crow-AMSAA line indicates the speed of the transmission or death rate. The traditional epidemiological model is based on the exponential distribution, but the Crow-AMSAA is a Non-Homogeneous Poisson Process (NHPP) which can be used to model complex problems like COVID19, especially when various mitigation strategies such as social distancing, isolation and lockdowns were implemented by the government at different places.
OBJECTIVE This paper uses the piecewise Crow-AMSAA method to fit the COVID19 confirmed cases in Michigan, New York City, U.S.A and other countries.
METHODS The piecewise Crow-AMSAA method was used to fit the COVID19 confirmed cases.
RESULTS From the Crow-AMSAA analysis above, at the beginning of COVID19 the infectious cases did not follow the Crow-AMSAA prediction line, but once the outbreak started, the confirmed cases do follow the CA line; the slope β value indicates the pace of the transmission rate or death rate in each case. The piecewise Crow-AMSAA describes the different phases of spreading. This indicates the speed of transmission could change according to government interference, social distancing orders or other factors. Comparing the piecewise CA β slopes in China (β: 1.683--0.834--0.092) and in the U.S.A (β: 5.138--10.48--5.259), the infectious rate in the U.S.A is much higher than the infectious rate in China. From the piecewise CA plots and the summary table 1 of the CA slope βs, COVID19 spreading behaves differently at different places and countries where governments implemented different policies to slow down the spreading.
CONCLUSIONS From the analysis of confirmed cases and deaths of COVID19 in Michigan, New York City, U.S.A, China and other countries, the piecewise Crow-AMSAA method can be used to model the spreading of COVID19.
",TRUE,adj
R136138,Medical Informatics and Medical Bioinformatics,R148112,"2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text",S593901,R148114,Data domains,R148115,Clinical,"The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.",TRUE,adj
R67,Medicinal Chemistry and Pharmaceutics,R161051,Formulation of two-layer dissolving polymeric microneedle patches for insulin transdermal delivery in diabetic mice: Two-Layer Polymeric Microneedle Patches for Insulin Transdermal Delivery,S643001,R161053,Microneedles structure,R161054,microneedle,"Dissolving microneedles (MNs) display high efficiency in delivering poorly permeable drugs and vaccines. Here, two-layer dissolving polymeric MN patches composed of gelatin and sodium carboxymethyl cellulose (CMC) were fabricated with a two-step casting and centrifuging process to localize the insulin in the needle and achieve efficient transdermal delivery of insulin. In vitro skin insertion capability was determined by staining with tissue-marking dye after insertion, and the real-time penetration depth was monitored using optical coherence tomography. Confocal microscopy images revealed that the rhodamine 6G and fluorescein isothiocyanate-labeled insulin (insulin-FITC) can gradually diffuse from the puncture sites to deeper tissue. Ex vivo drug-release profiles showed that 50% of the insulin was released and penetrated across the skin after 1 h, and the cumulative permeation reached 80% after 5 h. In vivo and pharmacodynamic studies were then conducted to estimate the feasibility of the administration of insulin-loaded dissolving MN patches on diabetic mice for glucose regulation. The total area above the glucose level versus time curve as an index of hypoglycemic effect was 128.4 ± 28.3 (% h) at 0.25 IU/kg. The relative pharmacologic availability and relative bioavailability (RBA) of insulin from MN patches were 95.6 and 85.7%, respectively. This study verified that the use of gelatin/CMC MN patches for insulin delivery achieved a satisfactory RBA compared to traditional hypodermic injection and presented a promising device to deliver poorly permeable protein drugs for diabetic therapy. © 2016 Wiley Periodicals, Inc. 
J Biomed Mater Res Part A: 105A: 84-93, 2017.",TRUE,adj
R279,Nanoscience and Nanotechnology,R110312,Atomic Layer Deposition of Titanium Oxide on Single-Layer Graphene: An Atomic-Scale Study toward Understanding Nucleation and Growth,S502740,R110315,Material structure,R110302,Amorphous,"Controlled synthesis of a hybrid nanomaterial based on titanium oxide and single-layer graphene (SLG) using atomic layer deposition (ALD) is reported here. The morphology and crystallinity of the oxide layer on SLG can be tuned mainly with the deposition temperature, achieving either a uniform amorphous layer at 60 °C or ∼2 nm individual nanocrystals on the SLG at 200 °C after only 20 ALD cycles. A continuous and uniform amorphous layer formed on the SLG after 180 cycles at 60 °C can be converted to a polycrystalline layer containing domains of anatase TiO2 after a postdeposition annealing at 400 °C under vacuum. Using aberration-corrected transmission electron microscopy (AC-TEM), characterization of the structure and chemistry was performed on an atomic scale and provided insight into understanding the nucleation and growth. AC-TEM imaging and electron energy loss spectroscopy revealed that rocksalt TiO nanocrystals were occasionally formed at the early stage of nucleation after only 20 ALD cycles. Understanding and controlling nucleation and growth of the hybrid nanomaterial are crucial to achieving novel properties and enhanced performance for a wide range of applications that exploit the synergetic functionalities of the ensemble.",TRUE,adj
R279,Nanoscience and Nanotechnology,R151352,Enzymatic glucose biosensor based on ZnO nanorod array grown by hydrothermal decomposition,S607158,R151354,Method of nanomaterial synthesis,L419828,hydrothermal,"We report herein a glucose biosensor based on glucose oxidase (GOx) immobilized on a ZnO nanorod array grown by hydrothermal decomposition. In a phosphate buffer solution with a pH value of 7.4, negatively charged GOx was immobilized on positively charged ZnO nanorods through electrostatic interaction. At an applied potential of +0.8 V versus an Ag/AgCl reference electrode, the ZnO nanorod based biosensor presented a high and reproducible sensitivity of 23.1 μA cm−2 mM−1 with a response time of less than 5 s. The biosensor shows a linear range from 0.01 to 3.45 mM and an experimental limit of detection of 0.01 mM. An apparent Michaelis-Menten constant of 2.9 mM shows a high affinity between glucose and GOx immobilized on ZnO nanorods.",TRUE,adj
R279,Nanoscience and Nanotechnology,R143712,Highly Stretchable Core–Sheath Fibers via Wet-Spinning for Wearable Strain Sensors,S575138,R143715,keywords,L402872,wet-spinning ,"Lightweight, stretchable, and wearable strain sensors have recently been widely studied for the development of health monitoring systems, human-machine interfaces, and wearable devices. Herein, highly stretchable polymer elastomer-wrapped carbon nanocomposite piezoresistive core-sheath fibers are successfully prepared using a facile and scalable one-step coaxial wet-spinning assembly approach. The carbon nanotube-polymeric composite core of the stretchable fiber is surrounded by an insulating sheath, similar to conventional cables, and shows excellent electrical conductivity with a low percolation threshold (0.74 vol %). The core-sheath elastic fibers are used as wearable strain sensors, exhibiting ultra-high stretchability (above 300%), excellent stability (>10 000 cycles), fast response, low hysteresis, and good washability. Furthermore, the piezoresistive core-sheath fiber possesses bending-insensitiveness and negligible torsion-sensitive properties, and the strain sensing performance of piezoresistive fibers maintains a high degree of stability under harsh conditions. On the basis of this high level of performance, the fiber-shaped strain sensor can accurately detect both subtle and large-scale human movements by embedding it in gloves and garments or by directly attaching it to the skin. The current results indicate that the proposed stretchable strain sensor has many potential applications in health monitoring, human-machine interfaces, soft robotics, and wearable electronics.",TRUE,adj
R145261,Natural Language Processing,R162391,Overview of the chemical compound and drug name recognition (CHEMDNER) task,S686173,R171841,Coarse-grained Entity type,R166552,Chemical,"There is an increasing need to facilitate automated access to information relevant for chemical compounds and drugs described in text, including scientific articles, patents or health agency reports. A number of recent efforts have implemented natural language processing (NLP) and text mining technologies for the chemical domain (ChemNLP or chemical text mining). Due to the lack of manually labeled Gold Standard datasets together with comprehensive annotation guidelines, both the implementation as well as the comparative assessment of ChemNLP technologies are opaque. Two key components for most chemical text mining technologies are the indexing of documents with chemicals (chemical document indexing, CDI) and finding the mentions of chemicals in text (chemical entity mention recognition, CEM). These two tasks formed part of the chemical compound and drug named entity recognition (CHEMDNER) task introduced at the fourth BioCreative challenge, a community effort to evaluate biomedical text mining applications. For this task, the CHEMDNER text corpus was constructed, consisting of 10,000 abstracts containing a total of 84,355 mentions of chemical compounds and drugs that have been manually labeled by domain experts following specific annotation guidelines. This corpus covers representative abstracts from major chemistry-related sub-disciplines such as medicinal chemistry, biochemistry, organic chemistry and toxicology. A total of 27 teams – 23 academic and 4 commercial groups, comprised of 87 researchers – submitted results for this task. Of these teams, 26 provided submissions for the CEM subtask and 23 for the CDI subtask. 
Teams were provided with the manual annotations of 7,000 abstracts to implement and train their systems and then had to return predictions for the 3,000 test set abstracts during a short period of time. When comparing exact matches of the automated results against the manually labeled Gold Standard annotations, the best teams reached an F-score of 87.39% in the CEM task and of 88.20% in the CDI task. This can be regarded as a very competitive result when compared to the expected upper boundary, the agreement between two human annotators, at 91%. In general, the technologies used to detect chemicals and drugs by the teams included machine learning methods (particularly CRFs using a considerable range of different features), integration of chemistry-related lexical resources, and manual rules (e.g., to cover abbreviations, chemical formulae or chemical identifiers). By promoting the availability of the software of the participating systems as well as through the release of the CHEMDNER corpus to enable implementation of new tools, this work fosters the development of text mining applications like the automatic extraction of biochemical reactions, toxicological properties of compounds, or the detection of associations between genes or mutations and drugs in the context of pharmacogenomics.",TRUE,adj
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686690,R172005,Coarse-grained Entity type,R166552,Chemical,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,adj
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S687006,R172113,Coarse-grained Entity type,R166552,Chemical,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics in the NLM indexing of the article, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated that 1) current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and that 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,adj
R145261,Natural Language Processing,R171917,Web services-based text-mining demonstrates broad impacts for interoperability and process simplification,S686427,R171919,Coarse-grained Entity type,R166552,Chemical,"The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. 
CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/",TRUE,adj
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S687037,R172126,Coarse-grained Entity types,R166552,Chemical,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics in the NLM indexing of the article, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated that 1) current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and that 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,adj
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S653513,R163658,Concept types,R161597,chemical,"Among the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,adj
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S660241,R165689,Entity types,R165694,Chemical,"Among the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,adj
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686711,R172005,Number of development data mentions,R172008,Chemical,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,adj
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686715,R172005,Number of test data mentions,R172010,Chemical,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,adj
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686717,R172005,Number of training data mentions,R172012,Chemical,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,adj
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S660238,R165689,Data domains,R148115,Clinical,"Among the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,adj
R145261,Natural Language Processing,R156121,The Web as a Knowledge-Base for Answering Complex Questions,S626975,R156123,Question Types,L431510,Complex,"Answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information. Recent work on reading comprehension made headway in answering simple questions, but tackling complex questions is still an ongoing research challenge. Conversely, semantic parsers have been successful at handling compositionality, but only when the information resides in a target knowledge-base. In this paper, we present a novel framework for answering broad and complex questions, assuming answering simple questions is possible using a search engine and a reading comprehension model. We propose to decompose complex questions into a sequence of simple questions, and compute the final answer from the sequence of answers. To illustrate the viability of our approach, we create a new dataset of complex questions, ComplexWebQuestions, and present a model that decomposes questions and interacts with the web to compute an answer. We empirically demonstrate that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.",TRUE,adj
R145261,Natural Language Processing,R76157,SemEval-2020 Task 3: Graded Word Similarity in Context,S349109,R76294,Language,R76279,Croatian,"This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.",TRUE,adj
R145261,Natural Language Processing,R76157,SemEval-2020 Task 3: Graded Word Similarity in Context,S349115,R76301,Language,R76281,Finnish,"This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.",TRUE,adj
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S653035,R163597,Concept types,R163606,Geographical,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,adj
R145261,Natural Language Processing,R147129,A Hierarchical Attention Retrieval Model for Healthcare Question Answering,S589385,R147131,Question Types,L410220,Non-factoid,"The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledgebases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.",TRUE,adj
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S587223,R69292,Concept types,R146669,Other,"This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,adj
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S652180,R163408,Event / Relation Types,R163434,Pathological,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,adj
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S660996,R165824,Event types,R163434,Pathological,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,adj
R145261,Natural Language Processing,R156119,Question Answering Benchmarks for Wikidata,S626960,R156120,Question Types,L431500,Simple,"Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.",TRUE,adj
R145261,Natural Language Processing,R76157,SemEval-2020 Task 3: Graded Word Similarity in Context,S349112,R76298,Language,R76280,Slovene,"This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.",TRUE,adj
R96,Nutritional Epidemiology,R75682,Association between dietary patterns and overweight risk among Malaysian adults: evidence from nationally representative surveys,S346305,R75684,Study design,L248128,Cross-sectional,"Abstract Objective: To investigate the association between dietary patterns (DP) and overweight risk in the Malaysian Adult Nutrition Surveys (MANS) of 2003 and 2014. Design: DP were derived from the MANS FFQ using principal component analysis. The cross-sectional association of the derived DP with prevalence of overweight was analysed. Setting: Malaysia. Participants: Nationally representative sample of Malaysian adults from MANS (2003, n 6928; 2014, n 3000). Results: Three major DP were identified for both years. These were ‘Traditional’ (fish, eggs, local cakes), ‘Western’ (fast foods, meat, carbonated beverages) and ‘Mixed’ (ready-to-eat cereals, bread, vegetables). A fourth DP was generated in 2003, ‘Flatbread & Beverages’ (flatbread, creamer, malted beverages), and 2014, ‘Noodles & Meat’ (noodles, meat, eggs). These DP accounted for 25·6 and 26·6 % of DP variations in 2003 and 2014, respectively. For both years, Traditional DP was significantly associated with rural households, lower income, men and Malay ethnicity, while Western DP was associated with younger age and higher income. Mixed DP was positively associated with women and higher income. None of the DP showed positive association with overweight risk, except for reduced adjusted odds of overweight with adherence to Traditional DP in 2003. Conclusions: Overweight could not be attributed to adherence to a single dietary pattern among Malaysian adults. This may be due to the constantly morphing dietary landscape in Malaysia, especially in urban areas, given the ease of availability and relative affordability of multi-ethnic and international foods. Timely surveys are recommended to monitor implications of these changes.",TRUE,adj
R96,Nutritional Epidemiology,R75685,Dietary patterns and cardiometabolic risk factors among adolescents: systematic review and meta-analysis,S346304,R75687,Study design,L248127,Meta-analysis,"Abstract This study systematised and synthesised the results of observational studies that were aimed at supporting the association between dietary patterns and cardiometabolic risk (CMR) factors among adolescents. Relevant scientific articles were searched in PUBMED, EMBASE, SCIENCE DIRECT, LILACS, WEB OF SCIENCE and SCOPUS. Observational studies that included the measurement of any CMR factor in healthy adolescents and dietary patterns were included. The search strategy retained nineteen articles for qualitative analysis. Among retained articles, the effects of dietary pattern on the means of BMI (n 18), waist circumference (WC) (n 9), systolic blood pressure (n 7), diastolic blood pressure (n 6), blood glucose (n 5) and lipid profile (n 5) were examined. Systematised evidence showed that an unhealthy dietary pattern appears to be associated with poor mean values of CMR factors among adolescents. However, evidence of a protective effect of healthier dietary patterns in this group remains unclear. Considering the number of studies with available information, a meta-analysis of anthropometric measures showed that dietary patterns characterised by the highest intake of unhealthy foods resulted in a higher mean BMI (0·57 kg/m²; 95 % CI 0·51, 0·63) and WC (0·57 cm; 95 % CI 0·47, 0·67) compared with low intake of unhealthy foods. Controversially, patterns characterised by a low intake of healthy foods were associated with a lower mean BMI (−0·41 kg/m²; 95 % CI −0·46,−0·36) and WC (−0·43 cm; 95 % CI −0·52,−0·33). An unhealthy dietary pattern may influence markers of CMR among adolescents, but considering the small number and limitations of the studies included, further studies are warranted to strengthen the evidence of this relation.",TRUE,adj
R129,Organic Chemistry,R154543,Effect of Pt treated fullerene/TiO2 on the photocatalytic degradation of MO under visible light,S618619,R154545,Incident light,R154526,Visible,"Platinum treated fullerene/TiO2 composites (Pt-fullerene/TiO2) were prepared using a sol–gel method. The composite obtained was characterized by FT-IR, BET surface area measurements, X-ray diffraction, energy dispersive X-ray analysis, transmission electron microscopy (TEM) and UV-vis analysis. A methyl orange (MO) solution under visible light irradiation was used to determine the photocatalytic activity. Excellent photocatalytic degradation of a MO solution was observed using the Pt-TiO2, fullerene-TiO2 and Pt-fullerene/TiO2 composites under visible light. An increase in photocatalytic activity was observed and Pt-fullerene/TiO2 has the best photocatalytic activity, which may be attributable to increase of the photo-absorption effect by the fullerene and the cooperative effect of the Pt.",TRUE,adj
R130,Physical Chemistry,R135710,Continuous Symmetry Breaking Induced by Ion Pairing Effect in Heptamethine Cyanine Dyes: Beyond the Cyanine Limit,S536891,R135712,Counterion interaction in Toluene,L378445,Associated,"The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.",TRUE,adj
R185,Plasma and Beam Physics,R145188,Stark-profile calculations for Lyman-series lines of one-electron ions in dense plasmas,S581240,R145230,paper:category,L406185,Theoretical,"The frequency distributions of the first six Lyman lines of hydrogen-like carbon, oxygen, neon, magnesium, aluminum, and silicon ions broadened by the local fields of both ions and electrons are calculated for dense plasmas. The electron collisions are treated by an impact theory allowing (approximately) for level splittings caused by the ion fields, finite duration of the collisions, and screening of the electron fields. Ion effects are calculated in the quasistatic, linear Stark-effect approximation, using distribution functions of Hooper and Tighe which include correlation and shielding effects. Theoretical uncertainties from the various approximations are estimated, and the scaling of the profiles with density, temperature and nuclear charge is discussed. A correction for the effects caused by low frequency field fluctuations is suggested.",TRUE,adj
R185,Plasma and Beam Physics,R145200,Line shapes of lithium-like ions emitted from plasmas,S581276,R145234,paper:category,L406213,Theoretical,"The calculation of the spectral line broadening of lithium-like ions is presented. The motivation for these calculations is to extend present theoretical calculations to more complex atomic structures and provide further diagnostic possibilities. The profiles of Li I, Ti XX and Br XXXIII are shown as a representative sampling of the possible effects which can occur. The calculations are performed for all level 2 to level 3 and 4 transitions, with dipole-forbidden and overlapping components fully taken into account.",TRUE,adj
R185,Plasma and Beam Physics,R145213,Electron impact broadening of spectral lines in Be-like ions: quantum calculations,S581321,R145239,paper:category,L406248,Theoretical,"We present in this paper quantum mechanical calculations for the electron impact Stark linewidths of the 2s3s–2s3p transitions for the four beryllium-like ions from N IV to Ne VII. Calculations are made in the frame of the impact approximation and intermediate coupling, taking into account fine-structure effects. A comparison between our calculations, experimental and other theoretical results, shows a good agreement. This is the first time that such a good agreement is found between quantum and experimental linewidths of highly charged ions.",TRUE,adj
R185,Plasma and Beam Physics,R145216,Relativistic quantum mechanical calculations of electron-impact broadening for spectral lines in Be-like ions,S581330,R145240,paper:category,L406255,Theoretical,"Aims. We present relativistic quantum mechanical calculations of electron-impact broadening of the singlet and triplet transition 2s3s ← 2s3p in four Be-like ions from NIV to NeVII. Methods. In our theoretical calculations, the K-matrix and related symmetry information determined by the colliding systems are generated by the DARC codes. Results. A careful comparison between our calculations and experimental results shows good agreement. Our calculated widths of spectral lines also agree with earlier theoretical results. Our investigations provide new methods of calculating electron-impact broadening parameters for plasma diagnostics.",TRUE,adj
R131,Polymer Chemistry,R161549,Mechanical Recycling of Packaging Plastics: A Review,S645120,R161553,Method,R161554,mechanical,"The current global plastics economy is highly linear, with the exceptional performance and low carbon footprint of polymeric materials at odds with dramatic increases in plastic waste. Transitioning to a circular economy that retains plastic in its highest value condition is essential to reduce environmental impacts, promoting reduction, reuse, and recycling. Mechanical recycling is an essential tool in an environmentally and economically sustainable economy of plastics, but current mechanical recycling processes are limited by cost, degradation of mechanical properties, and inconsistent quality products. This review covers the current methods and challenges for the mechanical recycling of the five main packaging plastics: poly(ethylene terephthalate), polyethylene, polypropylene, polystyrene, and poly(vinyl chloride) through the lens of a circular economy. Their reprocessing induced degradation mechanisms are introduced and strategies to improve their recycling are discussed. Additionally, this review briefly examines approaches to improve polymer blending in mixed plastic waste streams and applications of lower quality recyclate.",TRUE,adj
R11,Science,R25591,Usage and Perceptions of Agile Software Development in an Industrial Context: An Exploratory Study,S77220,R25592,Agile Method,R25590,Agile,"Agile development methodologies have been gaining acceptance in the mainstream software development community. While there are numerous studies of agile development in academic and educational settings, there has been little detailed reporting of the usage, penetration and success of agile methodologies in traditional, professional software development organizations. We report on the results of an empirical study conducted at Microsoft to learn about agile development and its perception by people in development, testing, and management. We found that one-third of the study respondents use agile methodologies to varying degrees, and most view it favorably due to improved communication between team members, quick releases and the increased flexibility of agile designs. The scrum variant of agile methodologies is by far the most popular at Microsoft. Our findings also indicate that developers are most worried about scaling agile to larger projects (greater than twenty members), attending too many meetings and coordinating agile and non-agile teams.",TRUE,adj
R11,Science,R25595,Agile systems development and stakeholder satisfaction: a South African empirical study,S77239,R25596,Agile Method,R25590,Agile,"The high rate of systems development (SD) failure is often attributed to the complexity of traditional SD methodologies (e.g. Waterfall) and their inability to cope with changes brought about by today's dynamic and evolving business environment. Agile methodologies (AM) have emerged to challenge traditional SD and overcome their limitations. Yet empirical research into AM is sparse. This paper develops and tests a research model that hypothesizes the effects of five characteristics of agile systems development (iterative development; continuous integration; test-driven design; feedback; and collective ownership) on two dependent stakeholder satisfaction measures, namely stakeholder satisfaction with the development process and with the development outcome. An empirical study of 59 South African development projects (using self-reported data) provided support for all hypothesized relationships and generally supports the efficacy of AM. Iteration and integration together with collective ownership have the strongest effects on the dependent satisfaction measures.",TRUE,adj
R11,Science,R25599,Effects of agile practices on social factors,S77259,R25600,Agile Method,R25590,Agile,"Programmers are living in an age of accelerated change. State of the art technology that was employed to facilitate projects a few years ago is typically obsolete today. Presently, there are requirements for higher quality software with less tolerance for errors, produced in compressed timelines with fewer people. Therefore, project success is more elusive than ever and is contingent upon many key aspects. One of the most crucial aspects is social factors. These social factors, such as knowledge sharing, motivation, and customer collaboration, can be addressed through agile practices. This paper will demonstrate two successful industrial software projects which are different in all aspects; however, both still apply agile practices to address social factors. The readers will see how agile practices in both projects were adapted to fit each unique team environment. The paper will also provide lessons learned and recommendations based on retrospective reviews and observations. These recommendations can lead to an improved chance of success in a software development project.",TRUE,adj
R11,Science,R25617,Understanding post-adoptive agile usage: An exploratory cross-case analysis,S77334,R25618,Agile Method,R25590,Agile,"The widespread adoption of agile methodologies raises the question of their continued and effective usage in organizations. An agile usage model consisting of innovation, sociological, technological, team, and organizational factors is used to inform an analysis of post-adoptive usage of agile practices in two major organizations. Analysis of the two case studies found that a methodology champion and top management support were the most important factors influencing continued usage, while innovation factors such as compatibility seemed less influential. Both horizontal and vertical usage was found to have significant impact on the effectiveness of agile usage.",TRUE,adj
R11,Science,R25619,The Impact of Organizational Culture on Agile Method Use,S77345,R25620,Agile Method,R25590,Agile,Agile method proponents believe that organizational culture has an effect on the extent to which an agile method is used. Research into the relationship between organizational culture and information systems development methodology deployment has been explored by others using the Competing Values Framework (CVF). However this relationship has not been explored with respect to the agile development methodologies. Based on a multi-case study of nine projects we show that specific organizational culture factors correlate with effective use of an agile method. Our results contribute to the literature on organizational culture and system development methodology use.,TRUE,adj
R11,Science,R25627,Experience Report: The Social Nature of Agile Teams,S77382,R25628,Agile Method,R25590,Agile,"Agile software development is often, but not always, associated with the term 'project chemistry', or the positive team climate that can contribute to high performance. A qualitative study involving 22 participants in agile teams sought to explore this connection, and answer the question: what aspects of agile software development are related to team cohesion? The following is a discussion of participant experiences as seen through a socio-psychological lens. It draws from social-identity theory and socio-psychological literature to explain, not only how, but why agile methodologies support teamwork and collective progress. Agile practices are shown to produce a socio-psychological environment of high-performance, with many of the practical benefits of agile practices being supported and mediated by social and personal concerns.",TRUE,adj
R11,Science,R26438,Chitosan as Tear Substitute: A Wetting Agent Endowed with Antimicrobial Efficacy,S83099,R26439,Advantages,L52483,antibacterial,"A cationic biopolymer, chitosan, is proposed for use in artificial tear formulations. It is endowed with good wetting properties as well as an antibacterial effect that are desirable in cases of dry eye, which is often complicated by secondary infections. Solutions containing 0.5% w/v of a low molecular weight (M(w)) chitosan (160 kDa) were assessed for antibacterial efficacy against E. coli and S. aureus by using the usual broth-dilution technique. The in vitro evaluation showed that concentrations of chitosan as low as 0.0375% still exert a bacteriostatic effect against E. coli. Minimal inhibitory concentration (MIC) values of chitosan were calculated to be as low as 0.375 mg/ml for E. coli and 0.15 mg/ml for S. aureus. Gamma scintigraphic studies demonstrated that chitosan formulations remain on the precorneal surface as long as commonly used commercial artificial tears (Protagent collyrium and Protagent-SE unit-dose) having a 5-fold higher viscosity.",TRUE,adj
R11,Science,R25384,A tactic-centric approach for automating traceability of quality concerns,S76052,R25385,Automation,R25382,Automatic,"The software architectures of business, mission, or safety critical systems must be carefully designed to balance an exacting set of quality concerns describing characteristics such as security, reliability, and performance. Unfortunately, software architectures tend to degrade over time as maintainers modify the system without understanding the underlying architectural decisions. Although this problem can be mitigated by manually tracing architectural decisions into the code, the cost and effort required to do this can be prohibitively expensive. In this paper we therefore present a novel approach for automating the construction of traceability links for architectural tactics. Our approach utilizes machine learning methods and lightweight structural analysis to detect tactic-related classes. The detected tactic-related classes are then mapped to a Tactic Traceability Information Model. We train our trace algorithm using code extracted from fifteen performance-centric and safety-critical open source software systems and then evaluate it against the Apache Hadoop framework. Our results show that automatically generated traceability links can support software maintenance activities while helping to preserve architectural qualities.",TRUE,adj
R11,Science,R151135,The design of a dynamic emergency response management information system,S626109,R156016,paper:Study Type,L430868,Conceptual,"ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a ""Dynamic Emergency Response Management Information System"" (DERMIS) by identifying design premises resulting from the use of the ""Emergency Management Information System and Reference Index"" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. 
The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers.",TRUE,adj
R11,Science,R26582,Design and analysis of a fast local clustering service for wireless sensor networks,S83975,R26666,Alg. Complexity,R26618,Constant,"We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sized clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m ≥ 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within at most m units. By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of the wireless radio-model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering.",TRUE,adj
R11,Science,R26672,A probabilistic clustering algorithm in wireless sensor networks,S85032,R26673,Alg. Complexity,R26618,Constant,"A wireless sensor network consists of nodes that can communicate with each other via wireless links. One way to support efficient communication between sensors is to organize the network into several groups, called clusters, with each cluster electing one node as the head of cluster. The paper describes a constant time clustering algorithm that can be applied on wireless sensor networks. This approach is an extension to the Younis and Fahmy method (1). The simulation results show that the extension can generate a small number of cluster heads in relatively few rounds, especially in sparse networks.",TRUE,adj
R11,Science,R26244,A branch-and-cut algorithm for a vendor-managed inventory-routing problem,S82075,R26245,Demand,R26169,Deterministic,"We consider a distribution problem in which a product has to be shipped from a supplier to several retailers over a given time horizon. Each retailer defines a maximum inventory level. The supplier monitors the inventory of each retailer and determines its replenishment policy, guaranteeing that no stockout occurs at the retailer (vendor-managed inventory policy). Every time a retailer is visited, the quantity delivered by the supplier is such that the maximum inventory level is reached (deterministic order-up-to level policy). Shipments from the supplier to the retailers are performed by a vehicle of given capacity. The problem is to determine for each discrete time instant the quantity to ship to each retailer and the vehicle route. We present a mixed-integer linear programming model and derive new additional valid inequalities used to strengthen the linear relaxation of the model. We implement a branch-and-cut algorithm to solve the model optimally. We then compare the optimal solution of the problem with the optimal solution of two problems obtained by relaxing in different ways the deterministic order-up-to level policy. Computational results are presented on a set of randomly generated problem instances.",TRUE,adj
R11,Science,R26269,One Warehouse Multiple Retailer Systems with Vehicle Routing Costs,S82205,R26270,Demand,R26169,Deterministic,"We consider distribution systems with a depot and many geographically dispersed retailers each of which faces external demands occurring at constant, deterministic but retailer specific rates. All stock enters the system through the depot from where it is distributed to the retailers by a fleet of capacitated vehicles combining deliveries into efficient routes. Inventories are kept at the retailers but not at the depot. We wish to determine feasible replenishment strategies i.e., inventory rules and routing patterns minimising infinite horizon long-run average transportation and inventory costs. We restrict ourselves to a class of strategies in which a collection of regions sets of retailers is specified which cover all outlets: if an outlet belongs to several regions, a specific fraction of its sales/operations is assigned to each of these regions. Each time one of the retailers in a given region receives a delivery, this delivery is made by a vehicle who visits all other outlets in the region as well in an efficient route. We describe a class of low complexity heuristics and show under mild probabilistic assumptions that the generated solutions are asymptotically optimal within the above class of strategies. We also show that lower and upper bounds on the system-wide costs may be computed and that these bounds are asymptotically tight under the same assumptions. A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.",TRUE,adj
R11,Science,R26274,Two-echelon distribution systems with vehicle routing costs and central inventory,S82237,R26275,Demand,R26169,Deterministic,"We consider distribution systems with a single depot and many retailers each of which faces external demands for a single item that occurs at a specific deterministic demand rate. All stock enters the systems through the depot where it can be stored and then picked up and distributed to the retailers by a fleet of vehicles, combining deliveries into efficient routes. We extend earlier methods for obtaining low complexity lower bounds and heuristics for systems without central stock. We show under mild probabilistic assumptions that the generated solutions and bounds come asymptotically within a few percentage points of optimality (within the considered class of strategies). A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.",TRUE,adj
R11,Science,R26300,Integrating Routing and Inventory Decisions in One-Warehouse Multiretailer Multiproduct Distribution Systems,S82372,R26301,Demand,R26169,Deterministic,"We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. Results of a computational study on randomly generated single-product problems are also presented.",TRUE,adj
R11,Science,R26631,Low energy adaptive clustering hierarchy with deterministic cluster-head selection,S83755,R26632,Protocol,R26169,Deterministic,"This paper focuses on reducing the power consumption of wireless microsensor networks. Therefore, a communication protocol named LEACH (low-energy adaptive clustering hierarchy) is modified. We extend LEACH's stochastic cluster-head selection algorithm by a deterministic component. Depending on the network configuration an increase of network lifetime by about 30% can be accomplished. Furthermore, we present a new approach to define lifetime of microsensor networks using three new metrics FND (First Node Dies), HNA (Half of the Nodes Alive), and LND (Last Node Dies).",TRUE,adj
R11,Science,R26708,A dynamic clustering and energy efficient routing technique for sensor networks,S85282,R26709,Dynamism,R26699,Dynamic,"In the development of various large-scale sensor systems, a particularly challenging problem is how to dynamically organize the sensors into a wireless communication network and route sensed information from the field sensors to a remote base station. This paper presents a new energy-efficient dynamic clustering technique for large-scale sensor networks. By monitoring the received signal power from its neighboring nodes, each node estimates the number of active nodes in real time and computes its optimal probability of becoming a cluster head, so that the amount of energy spent in both intra- and inter-cluster communications can be minimized. Based on the clustered architecture, this paper also proposes a simple multihop routing algorithm that is designed to be both energy-efficient and power-aware, so as to prolong the network lifetime. The new clustering and routing algorithms scale well and converge fast for large-scale dynamic sensor networks, as shown by our extensive simulation results.",TRUE,adj
R11,Science,R29751,An Empirical Study on the Environmental Kuznets Curve for China’s Carbon Emissions: Based on Provincial Panel Data,S98712,R29752,EKC Turnaround point(s),R29744,Eastern,"Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990–2007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China’s carbon emissions. The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.",TRUE,adj
R11,Science,R28169,Empty container repositioning in liner shipping,S92072,R28170,Container flow,R28146,Empty,"The efficient and effective management of empty containers is an important problem in the shipping industry. Not only does it have an economic effect, but it also has an environmental and sustainability impact, since the reduction of empty container movements will reduce fuel consumption and reduce congestion and emissions. The purposes of this paper are: to identify critical factors that affect empty container movements; to quantify the scale of empty container repositioning in major shipping routes; and to evaluate and contrast different strategies that shipping lines, and container operators, could adopt to reduce their empty container repositioning costs. The critical factors that affect empty container repositioning are identified through a review of the literature and observations of industrial practice. Taking three major routes (Trans-Pacific, Trans-Atlantic, Europe–Asia) as examples, with the assumption that trade demands could be balanced among the whole network regardless of the identities of individual shipping lines, the most optimistic estimation of empty container movements can be calculated. This quantifies the scale of the empty repositioning problem. Depending on whether shipping lines are coordinating the container flows over different routes and whether they are willing to share container fleets, four strategies for empty container repositioning are presented. Mathematical programming is then applied to evaluate and contrast the performance of these strategies in three major routes. A preliminary version was presented at the IAME Annual Conference, Dalian, China, 2–4 April 2008.",TRUE,adj
R11,Science,R28193,A Two-Stage Stochastic Network Model and Solution Methods for the Dynamic Empty Container Allocation Problem,S92178,R28194,Container flow,R28146,Empty,"Containerized liner trades have been growing steadily since the globalization of world economies intensified in the early 1990s. However, these trades are typically imbalanced in terms of the numbers of inbound and outbound containers. As a result, the relocation of empty containers has become one of the major problems faced by liner operators. In this paper, we consider the dynamic empty container allocation problem where we need to reposition empty containers and to determine the number of leased containers needed to meet customers' demand over time. We formulate this problem as a two-stage stochastic network: in stage one, the parameters such as supplies, demands, and ship capacities for empty containers are deterministic; whereas in stage two, these parameters are random variables. We need to make decisions in stage one such that the total of the stage one cost and the expected stage two cost is minimized. By taking advantage of the network structure, we show how a stochastic quasi-gradient method and a stochastic hybrid approximation procedure can be applied to solve the problem. In addition, we propose some new variations of these methods that seem to work faster in practice. We conduct numerical tests to evaluate the value of the two-stage stochastic model over a rolling horizon environment and to investigate the behavior of the solution methods with different implementations.",TRUE,adj
R11,Science,R28200,Empty container reposition planning for intra-Asia liner shipping,S92204,R28201,Container flow,R28146,Empty,"This paper addresses empty container reposition planning by plainly considering safety stock management and geographical regions. This plan could avoid a drawback in practice which collects mass empty containers at a port then repositions most empty containers at a time. Empty containers occupy slots on the vessel and the liner shipping company loses the chance to yield freight revenue. The problem is drawn up as a two-stage problem. The upper problem is identified to estimate the empty container stock at each port, and the lower problem models the empty container reposition planning with the shipping service network as the Transportation Problem by Linear Programming. We looked at case studies of the Taiwan Liner Shipping Company to show the application of the proposed model. The results show the model provides optimization techniques to minimize the cost of empty container reposition and to provide evidence to adjust the strategy of restructuring the shipping service network.",TRUE,adj
R11,Science,R26582,Design and analysis of a fast local clustering service for wireless sensor networks,S83965,R26666,Cluster Properties Cluster size,L52929,Equal,"We present a fast local clustering service, FLOC, that partitions a multi-hop wireless network into nonoverlapping and approximately equal-sized clusters. Each cluster has a clusterhead such that all nodes within unit distance of the clusterhead belong to the cluster but no node beyond distance m from the clusterhead belongs to the cluster. By asserting m ≥ 2, FLOC achieves locality: effects of cluster formation and faults/changes at any part of the network are contained within at most m units. By taking unit distance to be the reliable communication radius and m to be the maximum communication radius, FLOC exploits the double-band nature of the wireless radio model and achieves clustering in constant time regardless of the network size. Through simulations and experiments with actual deployments, we analyze the tradeoffs between clustering time and the quality of clustering, and suggest suitable parameters for FLOC to achieve a fast completion time without compromising the quality of the resulting clustering.",TRUE,adj
R11,Science,R32233,Fungicidal Activity of Artemisia herba alba Asso (Asteraceae),S109607,R32234,Plant material status,R32230,Fresh,"The antifungal activity of Artemisia herba alba was found to be associated with two major volatile compounds isolated from the fresh leaves of the plant. Carvone and piperitone were isolated and identified by GC/MS, GC/IR, and NMR spectroscopy. Antifungal activity was measured against Penicillium citrinum (ATCC 10499) and Mucor rouxii (ATCC 24905). The antifungal activity (IC50) of the purified compounds was estimated to be 5 μg/ml and 2 μg/ml against Penicillium citrinum, and 7 μg/ml and 1.5 μg/ml against Mucor rouxii, for carvone and piperitone, respectively.",TRUE,adj
R11,Science,R31524,Energy efficiency estimation based on data fusion strategy: Case study of ethylene product industry,S105653,R31525,Types,R31521,Fuzzy,"Data fusion is an emerging technology to fuse data from multiple data or information of the environment through measurement and detection to make a more accurate and reliable estimation or decision. In this Article, energy consumption data are collected from ethylene plants with the high temperature steam cracking process technology. An integrated framework of the energy efficiency estimation is proposed on the basis of data fusion strategy. A Hierarchical Variable Variance Fusion (HVVF) algorithm and a Fuzzy Analytic Hierarchy Process (FAHP) method are proposed to estimate energy efficiencies of ethylene equipments. For different equipment scales with the same process technology, the HVVF algorithm is used to estimate energy efficiency ranks among different equipments. For different technologies based on HVVF results, the FAHP method based on the approximate fuzzy eigenvector is used to get energy efficiency indices (EEI) of total ethylene industries. The comparisons are used to assess energy utilization...",TRUE,adj
R11,Science,R31536,Application of fuzzy logic for state estimation of a microbial fermentation with dual inhibition and variable product kinetics. Food and Bioproducts Processing,S105684,R31537,Types,R31521,Fuzzy,"Fuzzy logic has been applied to a batch microbial fermentation described by a model with two adjustable parameters which associate product formation with the increasing and/or stationary phases of cell growth. The fermentation is inhibited by its product and, beyond a critical concentration, also by the substrate. To mimic an industrial condition, Gaussian noise was added and the resulting performance was simulated by fuzzy estimation systems. Simple rules with a few membership functions were able to portray bioreactor performance and the feedback interactions between cell growth and the concentrations of substrate and product. Through careful choices of the membership functions and the fuzzy logic, accuracies better than previously reported for ideal fermentations could be obtained, suggesting the suitability of fuzzy estimations for on-line applications.",TRUE,adj
R11,Science,R27870,Stereo Processing by Semiglobal Matching and Mutual Information,S90819,R27871,Method,R27864,Global,"This paper describes the Semi-Global Matching (SGM) stereo method. It uses a pixelwise, Mutual Information based matching cost for compensating radiometric differences of input images. Pixelwise matching is supported by a smoothness constraint that is usually expressed as a global cost function. SGM performs a fast approximation by pathwise optimizations from all directions. The discussion also addresses occlusion detection, subpixel refinement and multi-baseline matching. Additionally, postprocessing steps for removing outliers, recovering from specific problems of structured environments and the interpolation of gaps are presented. Finally, strategies for processing almost arbitrarily large images and fusion of disparity images using orthographic projection are proposed. A comparison on standard stereo images shows that SGM is among the currently top-ranked algorithms and is best, if subpixel accuracy is considered. The complexity is linear to the number of pixels and disparity range, which results in a runtime of just 1-2 s on typical test images. An in-depth evaluation of the Mutual Information based matching cost demonstrates a tolerance against a wide range of radiometric transformations. Finally, examples of reconstructions from huge aerial frame and pushbroom images demonstrate that the presented ideas are working well on practical problems.",TRUE,adj
R11,Science,R27876,"Stereo matching with color-weighted correlation, hierarchical belief propagation, and occlusion handling",S90836,R27877,Method,R27864,Global,"In this paper, we formulate a stereo matching algorithm with careful handling of disparity, discontinuity, and occlusion. The algorithm works with a global matching stereo model based on an energy-minimization framework. The global energy contains two terms, the data term and the smoothness term. The data term is first approximated by a color-weighted correlation, then refined in occluded and low-texture areas in a repeated application of a hierarchical loopy belief propagation algorithm. The experimental results are evaluated on the Middlebury data sets, showing that our algorithm is the top performer among all the algorithms listed there.",TRUE,adj
R11,Science,R27936,Multiresolution energy minimisation framework for stereo matching,S91083,R27937,Method,R27864,Global,"Global optimisation algorithms for stereo dense depth map estimation have demonstrated how to outperform other stereo algorithms such as local methods or dynamic programming. The energy minimisation framework, using Markov random fields model and solved using graph cuts or belief propagation, has especially obtained good results. The main drawback of these methods is that, although they achieve accurate reconstruction, they are not suited for real-time applications. Subsampling the input images does not reduce the complexity of the problem because it also reduces the resolution of the output in the disparity space. Nonetheless, some real-time applications such as navigation would tolerate the reduction of the depth map resolutions (width and height) while maintaining the resolution in the disparity space (number of labels). In this study a new multiresolution energy minimisation framework for real-time robotics applications is proposed where a global optimisation algorithm is applied. A reduction by a factor R of the final depth map's resolution is considered and a speed of up to 50 times has been achieved. Using high-resolution stereo pair input images guarantees that a high resolution on the disparity dimension is preserved. The proposed framework has shown how to obtain real-time performance while keeping accurate results in the Middlebury test data set.",TRUE,adj
R11,Science,R27960,Efficient Disparity Estimation Using Hierarchical Bilateral Disparity Structure Based Graph Cut Algorithm With a Foreground Boundary Refinement Mechanism,S91162,R27961,Method,R27864,Global,"The disparity estimation problem is commonly solved using graph cut (GC) methods, in which the disparity assignment problem is transformed to one of minimizing global energy function. Although such an approach yields an accurate disparity map, the computational cost is relatively high. Accordingly, this paper proposes a hierarchical bilateral disparity structure (HBDS) algorithm in which the efficiency of the GC method is improved without any loss in the disparity estimation performance by dividing all the disparity levels within the stereo image hierarchically into a series of bilateral disparity structures of increasing fineness. To address the well-known foreground fattening effect, a disparity refinement process is proposed comprising a fattening foreground region detection procedure followed by a disparity recovery process. The efficiency and accuracy of the HBDS-based GC algorithm are compared with those of the conventional GC method using benchmark stereo images selected from the Middlebury dataset. In addition, the general applicability of the proposed approach is demonstrated using several real-world stereo images.",TRUE,adj
R11,Science,R27971,Efficient GPU-Based Graph Cuts for Stereo Matching,S91195,R27972,Method,R27864,Global,"Although graph cuts (GC) is popularly used in many computer vision problems, slow execution time due to its high complexity hinders wide usage. Manycore solution using Graphics Processing Unit (GPU) may solve this problem. However, conventional GC implementation does not fully exploit GPU's computing power. To address this issue, a new GC algorithm which is suitable for GPU environment is presented in this paper. First, we present a novel graph construction method that accelerates the convergence speed of GC. Next, a repetitive block-based push and relabel method is used to increase the data transfer efficiency. Finally, we propose a low-overhead global relabeling algorithm to increase the GPU occupancy ratio. The experiments on Middlebury stereo dataset shows that 5.2X speedup can be achieved over the baseline implementation, with identical GPU platform and parameters.",TRUE,adj
R11,Science,R26558,An application-specific protocol architecture for wireless microsensor networks,S83684,R26617,Load balancing,L52829,Good,"Networking together hundreds or thousands of cheap microsensor nodes allows users to accurately monitor a remote environment by intelligently combining the data from the individual nodes. These networks require robust wireless communication protocols that are energy efficient and provide low latency. We develop and analyze low-energy adaptive clustering hierarchy (LEACH), a protocol architecture for microsensor networks that combines the ideas of energy-efficient cluster-based routing and media access together with application-specific data aggregation to achieve good performance in terms of system lifetime, latency, and application-perceived quality. LEACH includes a new, distributed cluster formation technique that enables self-organization of large numbers of nodes, algorithms for adapting clusters and rotating cluster head positions to evenly distribute the energy load among all the nodes, and techniques to enable distributed signal processing to save communication resources. Our results show that LEACH can improve system lifetime by an order of magnitude compared with general-purpose multihop approaches.",TRUE,adj
R11,Science,R26634,SEP: A Stable Election Protocol for clustered heterogeneous wireless sensor networks,S83774,R26635,Node type,R26151,heterogeneous,"We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources—this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (we refer to as stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields longer stability region for higher values of extra energy brought by more powerful nodes.",TRUE,adj
R11,Science,R32057,Instance level transfer learning for cross lingual opinion analysis,S108868,R32058,Computational cost,R32039,High,"This paper presents two instance-level transfer learning based algorithms for cross lingual opinion analysis by transferring useful translated opinion examples from other languages as the supplementary training data for improving the opinion classifier in target language. Starting from the union of small training data in target language and large translated examples in other languages, the Transfer AdaBoost algorithm is applied to iteratively reduce the influence of low quality translated examples. Alternatively, starting only from the training data in target language, the Transfer Self-training algorithm is designed to iteratively select high quality translated examples to enrich the training data set. These two algorithms are applied to sentence- and document-level cross lingual opinion analysis tasks, respectively. The evaluations show that these algorithms effectively improve the opinion analysis by exploiting small target language training data and large cross lingual training data.",TRUE,adj
R11,Science,R33963,Adaptive Data Hiding in Edge Areas of Images With Spatial LSB Domain Systems,S117756,R33964,Invisibility,L71108,High,"This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in the edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that of the pixels located in smooth areas. The range of difference values is adaptively divided into lower level, middle level, and higher level. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method. However, the value k is adaptive and is decided by the level which the difference value belongs to. In order to remain at the same level where the difference value of two consecutive pixels belongs, before and after embedding, a delicate readjusting phase is used. When compared to the past study of Wu et al.'s PVD and LSB replacement method, our experimental results show that our proposed approach provides both larger embedding capacity and higher image quality.",TRUE,adj
R11,Science,R33969,Edge adaptive image steganography based on LSB matching revisited,S117804,R33970,Invisibility,L71147,High,"The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.",TRUE,adj
R11,Science,R33963,Adaptive Data Hiding in Edge Areas of Images With Spatial LSB Domain Systems,S117755,R33964,Payload Capacity,L71107,High,"This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in the edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that of the pixels located in smooth areas. The range of difference values is adaptively divided into lower level, middle level, and higher level. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method. However, the value k is adaptive and is decided by the level which the difference value belongs to. In order to remain at the same level where the difference value of two consecutive pixels belongs, before and after embedding, a delicate readjusting phase is used. When compared to the past study of Wu et al.'s PVD and LSB replacement method, our experimental results show that our proposed approach provides both larger embedding capacity and higher image quality.",TRUE,adj
R11,Science,R33969,Edge adaptive image steganography based on LSB matching revisited,S117803,R33970,Payload Capacity,L71146,High,"The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.",TRUE,adj
R11,Science,R33957,Reversible data embedding using a difference expansion,S117706,R33958,Robustness against image manipulation,L71067,High,"Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.",TRUE,adj
R11,Science,R33963,Adaptive Data Hiding in Edge Areas of Images With Spatial LSB Domain Systems,S117752,R33964,Robustness against image manipulation,L71104,High,"This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in the edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that of the pixels located in smooth areas. The range of difference values is adaptively divided into lower level, middle level, and higher level. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method. However, the value k is adaptive and is decided by the level which the difference value belongs to. In order to remain at the same level where the difference value of two consecutive pixels belongs, before and after embedding, a delicate readjusting phase is used. When compared to the past study of Wu et al.'s PVD and LSB replacement method, our experimental results show that our proposed approach provides both larger embedding capacity and higher image quality.",TRUE,adj
R11,Science,R33969,Edge adaptive image steganography based on LSB matching revisited,S117800,R33970,Robustness against image manipulation,L71143,High,"The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.",TRUE,adj
R11,Science,R33957,Reversible data embedding using a difference expansion,S117708,R33958,Robustness against statistical attacks,L71069,High,"Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.",TRUE,adj
R11,Science,R33963,Adaptive Data Hiding in Edge Areas of Images With Spatial LSB Domain Systems,S117754,R33964,Robustness against statistical attacks,L71106,High,"This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in the edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that of the pixels located in smooth areas. The range of difference values is adaptively divided into lower level, middle level, and higher level. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method. However, the value k is adaptive and is decided by the level which the difference value belongs to. In order to remain at the same level where the difference value of two consecutive pixels belongs, before and after embedding, a delicate readjusting phase is used. When compared to the past study of Wu et al.'s PVD and LSB replacement method, our experimental results show that our proposed approach provides both larger embedding capacity and higher image quality.",TRUE,adj
R11,Science,R33967,Reversible image watermarking using interpolation technique,S117787,R33968,Robustness against statistical attacks,L71133,High,"This paper presents a novel reversible watermarking scheme. The proposed scheme uses an interpolation technique to generate residual values, named interpolation errors. By additionally applying additive expansion to these interpolation errors, we achieve a highly efficient reversible watermarking scheme which can guarantee high image quality without sacrificing embedding capacity. The experimental results show the proposed reversible scheme provides a higher capacity and achieves better image quality for watermarked images. The computational cost of the proposed scheme is small.",TRUE,adj
R11,Science,R33969,Edge adaptive image steganography based on LSB matching revisited,S117801,R33970,"Tolerance to RS
Steganalysis",L71144,High,"The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.",TRUE,adj
R11,Science,R33969,Edge adaptive image steganography based on LSB matching revisited,S117798,R33970,"Utilization of edge
areas",L71141,High,"The least-significant-bit (LSB)-based approach is a popular type of steganographic algorithms in the spatial domain. However, we find that in most existing approaches, the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator without considering the relationship between the image content itself and the size of the secret message. Thus the smooth/flat regions in the cover images will inevitably be contaminated after data hiding even at a low embedding rate, and this will lead to poor visual quality and low security based on our analysis and extensive experiments, especially for those images with many smooth regions. In this paper, we expand the LSB matching revisited image steganography and propose an edge adaptive scheme which can select the embedding regions according to the size of secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used while keeping the other smoother regions as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. The experimental results evaluated on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme can enhance the security significantly compared with typical LSB-based approaches as well as their edge adaptive ones, such as pixel-value-differencing-based approaches, while preserving higher visual quality of stego images at the same time.",TRUE,adj
R11,Science,R27831,Individual Skill Progression on a Virtual Reality Simulator for Shoulder Arthroscopy A 3-Year Follow-up Study,S90723,R27832,Evaluator,R27733,Independent,"Background Previous studies have demonstrated a correlation between surgical experience and performance on a virtual reality arthroscopy simulator but only provided single time point evaluations. Additional longitudinal studies are necessary to confirm the validity of virtual reality simulation before these teaching aids can be more fully recommended for surgical education. Hypothesis Subjects will show improved performance on simulator retesting several years after an initial baseline evaluation, commensurate with their advanced surgical experience. Study Design Controlled laboratory study. Methods After gaining further arthroscopic experience, 10 orthopaedic residents underwent retesting 3 years after initial evaluation on a Procedicus virtual reality arthroscopy simulator. Using a paired t test, simulator parameters were compared in each subject before and after additional arthroscopic experience. Subjects were evaluated for time to completion, number of probe collisions with the tissues, average probe velocity, and distance traveled with the tip of the simulated probe compared to an optimal computer-determined distance. In addition, to evaluate consistency of simulator performance, results were compared to historical controls of equal experience. Results Subjects improved significantly (P < .02 for all) in the 4 simulator parameters: completion time (−51%), probe collisions (−29%), average velocity (+122%), and distance traveled (−32%). With the exception of probe velocity, there were no significant differences between the performance of this group and that of a historical group with equal experience, indicating that groups with similar arthroscopic experience consistently demonstrate equivalent scores on the simulator.
Conclusion Subjects significantly improved their performance on simulator retesting 3 years after initial evaluation. Additionally, across independent groups with equivalent surgical experience, similar performance can be expected on simulator parameters; thus it may eventually be possible to establish simulator benchmarks to indicate likely arthroscopic skill. Clinical Relevance These results further validate the use of surgical simulation as an important tool for the evaluation of surgical skills.",TRUE,adj
R11,Science,R25746,Mining High Utility Itemsets in Large High Dimensional Data,S78232,R25747,Algorithm name,L49002,Inter-transaction,"Existing algorithms for utility mining are inadequate on datasets with high dimensions or long patterns. This paper proposes a hybrid method, which is composed of a row enumeration algorithm (i.e., inter-transaction) and a column enumeration algorithm (i.e., two-phase), to discover high utility itemsets from two directions: Two-phase seeks short high utility itemsets from the bottom, while inter-transaction seeks long high utility itemsets from the top. In addition, optimization technique is adopted to improve the performance of computing the intersection of transactions. Experiments on synthetic data show that the hybrid method achieves high performance in large high dimensional datasets.",TRUE,adj
R11,Science,R27851,How Far Can We Go with Local Optimization in Real-Time Stereo Matching,S90774,R27852,Method,R27845,Local,"Applications such as robot navigation and augmented reality require high-accuracy dense disparity maps in real-time and online. Due to time constraint, most realtime stereo applications rely on local winner-take-all optimization in the disparity computation process. These local approaches are generally outperformed by offline global optimization based algorithms. However, recent research shows that, through carefully selecting and aggregating the matching costs of neighboring pixels, the disparity maps produced by a local approach can be more accurate than those generated by many global optimization techniques. We are therefore motivated to investigate whether these cost aggregation approaches can be adopted in real-time stereo applications and, if so, how well they perform under the real-time constraint. The evaluation is conducted on a real-time stereo platform, which utilizes the processing power of programmable graphics hardware. Several recent cost aggregation approaches are also implemented and optimized for graphics hardware so that real-time speed can be achieved. The performances of these aggregation approaches in terms of both processing speed and result quality are reported.",TRUE,adj
R11,Science,R27857,Adaptive support-weight approach for correspondence search,S90788,R27858,Method,R27845,Local,We present a new window-based method for correspondence search using varying support-weights. We adjust the support-weights of the pixels in a given support window based on color similarity and geometric proximity to reduce the image ambiguity. Our method outperforms other local methods on standard stereo benchmarks.,TRUE,adj
R11,Science,R27913,Vision based autonomous vehicle navigation with self-organizing map feature matching technique,S90985,R27914,Method,R27845,Local,"Vision is becoming more and more common in applications such as localization, autonomous navigation, path finding and many other computer vision applications. This paper presents an improved technique for feature matching in the stereo images captured by the autonomous vehicle. The Scale Invariant Feature Transform (SIFT) algorithm is used to extract distinctive invariant features from images but this algorithm has a high complexity and a long computational time. In order to reduce the computation time, this paper proposes a SIFT improvement technique based on a Self-Organizing Map (SOM) to perform the matching procedure more efficiently for feature matching problems. Experimental results on real stereo images show that the proposed algorithm performs feature group matching with lower computation time than the original SIFT algorithm. The results showing improvement over the original SIFT are validated through matching examples between different pairs of stereo images. The proposed algorithm can be applied to stereo vision based autonomous vehicle navigation for obstacle avoidance, as well as many other feature matching and computer vision applications.",TRUE,adj
R11,Science,R27916,A local iterative refinement method for adaptive support-weight stereo matching,S91002,R27917,Method,R27845,Local,"A new stereo matching algorithm is introduced that performs iterative refinement on the results of adaptive support-weight stereo matching. During each iteration of disparity refinement, adaptive support-weights are used by the algorithm to penalize disparity differences within local windows. Analytical results show that the addition of iterative refinement to adaptive support-weight stereo matching does not significantly increase complexity. In addition, this new algorithm does not rely on image segmentation or plane fitting, which are used by the majority of the most accurate stereo matching algorithms. As a result, this algorithm has lower complexity, is more suitable for parallel implementation, and does not force locally planar surfaces within the scene. When compared to other algorithms that do not rely on image segmentation or plane fitting, results show that the new stereo matching algorithm is one of the most accurate listed on the Middlebury performance benchmark.",TRUE,adj
R11,Science,R27947,A non-local cost aggregation method for stereo matching,S91116,R27948,Method,R27845,Local,"Matching cost aggregation is one of the oldest and still popular methods for stereo correspondence. While effective and efficient, cost aggregation methods typically aggregate the matching cost by summing/averaging over a user-specified, local support region. This is obviously only locally-optimal, and the computational complexity of the full-kernel implementation usually depends on the region size. In this paper, the cost aggregation problem is re-examined and a non-local solution is proposed. The matching cost values are aggregated adaptively based on pixel similarity on a tree structure derived from the stereo image pair to preserve depth edges. The nodes of this tree are all the image pixels, and the edges are all the edges between the nearest neighboring pixels. The similarity between any two pixels is decided by their shortest distance on the tree. The proposed method is non-local as every node receives supports from all other nodes on the tree. As can be expected, the proposed non-local solution outperforms all local cost aggregation methods on the standard (Middlebury) benchmark. Besides, it has great advantage in extremely low computational complexity: only a total of 2 addition/subtraction operations and 3 multiplication operations are required for each pixel at each disparity level. It is very close to the complexity of unnormalized box filtering using integral image which requires 6 addition/subtraction operations. Unnormalized box filter is the fastest local cost aggregation method but blurs across depth edges. The proposed method was tested on a MacBook Air laptop computer with a 1.8 GHz Intel Core i7 CPU and 4 GB memory. The average runtime on the Middlebury data sets is about 90 milliseconds, and is only about 1.25× slower than unnormalized box filter. 
A non-local disparity refinement method is also proposed based on the non-local cost aggregation method.",TRUE,adj
R11,Science,R28004,Local Disparity Estimation With Three-Moded Cross Census and Advanced Support Weight,S91331,R28005,Method,R27845,Local,"The classical local disparity methods use simple and efficient structure to reduce the computation complexity. To increase the accuracy of the disparity map, new local methods utilize additional processing steps such as iteration, segmentation, calibration and propagation, similar to global methods. In this paper, we present an efficient one-pass local method with no iteration. The proposed method is also extended to video disparity estimation by using motion information as well as imposing spatial temporal consistency. In local method, the accuracy of stereo matching depends on precise similarity measure and proper support window. For the accuracy of similarity measure, we propose a novel three-moded cross census transform with a noise buffer, which increases the robustness to image noise in flat areas. The proposed similarity measure can be used in the same form in both stereo images and videos. We further improve the reliability of the aggregation by adopting the advanced support weight and incorporating motion flow to achieve better depth map near moving edges in video scene. The experimental results show that the proposed method is the best performing local method on the Middlebury stereo benchmark test and outperforms the other state-of-the-art methods on video disparity evaluation.",TRUE,adj
R11,Science,R28008,Matching Cost Filtering for Dense Stereo Correspondence,S91350,R28009,Method,R27845,Local,"Dense stereo correspondence enabling reconstruction of depth information in a scene is of great importance in the field of computer vision. Recently, some local solutions based on matching cost filtering with an edge-preserving filter have been proved to be capable of achieving more accuracy than global approaches. Unfortunately, the computational complexity of these algorithms is quadratically related to the window size used to aggregate the matching costs. The recent trend has been to pursue higher accuracy with greater efficiency in execution. Therefore, this paper proposes a new cost-aggregation module to compute the matching responses for all the image pixels at a set of sampling points generated by a hierarchical clustering algorithm. The complexity of this implementation is linear both in the number of image pixels and the number of clusters. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art local methods in terms of both accuracy and speed. Moreover, performance tests indicate that parameters such as the height of the hierarchical binary tree and the spatial and range standard deviations have a significant influence on time consumption and the accuracy of disparity maps.",TRUE,adj
R11,Science,R28016,Domain Transformation-Based Efficient Cost Aggregation for Local Stereo Matching,S91382,R28017,Method,R27845,Local,"Binocular stereo matching is one of the most important algorithms in the field of computer vision. Adaptive support-weight approaches, the current state-of-the-art local methods, produce results comparable to those generated by global methods. However, excessive time consumption is the main problem of these algorithms since the computational complexity is proportionally related to the support window size. In this paper, we present a novel cost aggregation method inspired by domain transformation, a recently proposed dimensionality reduction technique. This transformation enables the aggregation of 2-D cost data to be performed using a sequence of 1-D filters, which lowers computation and memory costs compared to conventional 2-D filters. Experiments show that the proposed method outperforms the state-of-the-art local methods in terms of computational performance, since its computational complexity is independent of the input parameters. Furthermore, according to the experimental results with the Middlebury dataset and real-world images, our algorithm is currently one of the most accurate and efficient local algorithms.",TRUE,adj
R11,Science,R28044,A modified census transform based on the neighborhood information for stereo matching algorithm,S91516,R28045,Method,R27845,Local,"Census transform is a non-parametric local transform. Its weakness is that the results relied on the center pixel too much. This paper proposes a modified Census transform based on the neighborhood information for stereo matching. By improving the classic Census transform, the new technique utilizes more bits to represent the differences between the pixel and its neighborhood information. The result image of the modified Census transform has more detailed information at depth discontinuity. After stereo correspondence, sub-pixel interpolation and the disparity refinement, a better dense disparity map can be obtained. The experiments present that the proposed algorithm has simple mechanism and strong robustness. It can improve the accuracy of matching and is applicable to hardware systems.",TRUE,adj
R11,Science,R28059,Fast and Accurate Stereo Vision System on FPGA,S91589,R28060,Method,R27845,Local,"In this article, we present a fast and high quality stereo matching algorithm on FPGA using cost aggregation (CA) and fast locally consistent (FLC) dense stereo. In many software programs, global matching algorithms are used in order to obtain accurate disparity maps. Although their error rates are considerably low, their processing speeds are far from that required for real-time processing because of their complex processing sequences. In order to realize real-time processing, many hardware systems have been proposed to date. They have achieved considerably high processing speeds; however, their error rates are not as good as those of software programs, because simple local matching algorithms have been widely used in those systems. In our system, sophisticated local matching algorithms (CA and FLC) that are suitable for FPGA implementation are used to achieve low error rate while maintaining the high processing speed. We evaluate the performance of our circuit on Xilinx Vertex-6 FPGAs. Its error rate is comparable to that of top-level software algorithms, and its processing speed is nearly 2 clock cycles per pixel, which reaches 507.9 fps for 640 × 480 pixel images.",TRUE,adj
R11,Science,R28083,Efficient edge-awareness propagation via single-map filtering for edge-preserving stereo matching,S91700,R28084,Method,R27845,Local,"In this paper, we propose an efficient framework for edge-preserving stereo matching. Local methods for stereo matching are more suitable than global methods for real-time applications. Moreover, we can obtain accurate depth maps by using edge-preserving filter for the cost aggregation process in local stereo matching. The computational cost is high, since we must perform the filter for every number of disparity ranges if the order of the edge-preserving filter is constant time. Therefore, we propose an efficient iterative framework which propagates edge-awareness by using single time edge preserving filtering. In our framework, box filtering is used for the cost aggregation, and then the edge-preserving filtering is once used for refinement of the obtained depth map from the box aggregation. After that, we iteratively estimate a new depth map by local stereo matching which utilizes the previous result of the depth map for feedback of the matching cost. Note that the kernel size of the box filter is varied as coarse-to-fine manner at each iteration. Experimental results show that small and large areas of incorrect regions are gradually corrected. Finally, the accuracy of the depth map estimated by our framework is comparable to the state-of-the-art of stereo matching methods with global optimization methods. Moreover, the computational time of our method is faster than the optimization based method.",TRUE,adj
R11,Science,R28097,A fast trilateral filter-based adaptive support weight method for stereo matching,S91748,R28098,Method,R27845,Local,"Adaptive support weight (ASW) methods represent the state of the art in local stereo matching, while the bilateral filter-based ASW method achieves outstanding performance. However, this method fails to resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. In this paper, we introduce a novel trilateral filter (TF)-based ASW method that remedies such ambiguities by considering the possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model. We also present a recursive TF-based ASW method whose computational complexity is O(N) for the cost aggregation step, and O(NLog2(N)) for boundary detection, where N denotes the input image size. This complexity is thus independent of the support window size. The recursive TF-based method is a nonlocal cost aggregation strategy. The experimental evaluation on the Middlebury benchmark shows that the proposed method, whose average error rate is 4.95%, outperforms other local methods in terms of accuracy. Equally, the average runtime of the proposed TF-based cost aggregation is roughly 260 ms on a 3.4-GHz Intel Core i7 CPU, which is comparable with state-of-the-art efficiency.",TRUE,adj
R11,Science,R33957,Reversible data embedding using a difference expansion,S117710,R33958,Invisibility,L71071,Low,"Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.",TRUE,adj
R11,Science,R33957,Reversible data embedding using a difference expansion,S117709,R33958,"Payload
Capacity",L71070,Low,"Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.",TRUE,adj
R11,Science,R33957,Reversible data embedding using a difference expansion,S117707,R33958,"Tolerance to RS
Steganalysis",L71068,Low,"Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.",TRUE,adj
R11,Science,R33963,Adaptive Data Hiding in Edge Areas of Images With Spatial LSB Domain Systems,S117753,R33964,"Tolerance to RS
Steganalysis",L71105,Low,"This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in the edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that of the pixels located in smooth areas. The range of difference values is adaptively divided into lower level, middle level, and higher level. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method. However, the value k is adaptive and is decided by the level which the difference value belongs to. In order to remain at the same level where the difference value of two consecutive pixels belongs, before and after embedding, a delicate readjusting phase is used. When compared to the past study of Wu et al.'s PVD and LSB replacement method, our experimental results show that our proposed approach provides both larger embedding capacity and higher image quality.",TRUE,adj
R11,Science,R33957,Reversible data embedding using a difference expansion,S117704,R33958,"Utilization of edge
areas",L71065,Low,"Reversible data embedding has drawn lots of interest recently. Being reversible, the original digital content can be completely restored. We present a novel reversible data-embedding method for digital images. We explore the redundancy in digital images to achieve very high embedding capacity, and keep the distortion low.",TRUE,adj
R11,Science,R33963,Adaptive Data Hiding in Edge Areas of Images With Spatial LSB Domain Systems,S117750,R33964,"Utilization of edge
areas",L71102,Low,"This paper proposes a new adaptive least-significant-bit (LSB) steganographic method using pixel-value differencing (PVD) that provides a larger embedding capacity and imperceptible stegoimages. The method exploits the difference value of two consecutive pixels to estimate how many secret bits will be embedded into the two pixels. Pixels located in the edge areas are embedded by a k-bit LSB substitution method with a larger value of k than that of the pixels located in smooth areas. The range of difference values is adaptively divided into lower level, middle level, and higher level. For any pair of consecutive pixels, both pixels are embedded by the k-bit LSB substitution method. However, the value k is adaptive and is decided by the level which the difference value belongs to. In order to remain at the same level where the difference value of two consecutive pixels belongs, before and after embedding, a delicate readjusting phase is used. When compared to the past study of Wu et al.'s PVD and LSB replacement method, our experimental results show that our proposed approach provides both larger embedding capacity and higher image quality.",TRUE,adj
R11,Science,R26300,Integrating Routing and Inventory Decisions in One-Warehouse Multiretailer Multiproduct Distribution Systems,S82366,R26301,Fleet size,R26149,multiple,"We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. Results of a computational study on randomly generated single-product problems are also presented.",TRUE,adj
R11,Science,R26321,Dynamic Programming Approximations for a Stochastic Inventory Routing Problem,S82489,R26322,Fleet size,R26149,multiple,"This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.",TRUE,adj
R11,Science,R28304,Liner shipping fleet deployment with cargo transshipment and demand uncertainty,S92665,R28305,oute,R26149,multiple,"This paper addresses a novel liner shipping fleet deployment problem characterized by cargo transshipment, multiple container routing options and uncertain demand, with the objective of maximizing the expected profit. This problem is formulated as a stochastic program and solved by the sample average approximation method. In this technique the objective function of the stochastic program is approximated by a sample average estimate derived from a random sample, and then the resulting deterministic program is solved. This process is repeated with different samples to obtain a good candidate solution along with the statistical estimate of its optimality gap. We apply the proposed model to a case study inspired from real-world problems faced by a major liner shipping company. Results show that the case is efficiently solved to 1% of relative optimality gap at 95% confidence level.",TRUE,adj
R11,Science,R28307,Schedule Design and Container Routing in Liner Shipping,S92677,R28308,oute,R26149,multiple,"A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. The proposed solution method was applied to optimize the Asia–Europe–Oceania liner shipping services of a global liner company.",TRUE,adj
R11,Science,R26181,Dynamic allocations for multi-product distribution,S81739,R26182,outing,R26149,multiple,"Consider the problem of allocating multiple products by a distributor with limited capacity (truck size), who has a fixed sequence of customers (retailers) whose demands are unknown. Each time the distributor visits a customer, he gets information about the realization of the demand for this customer, but he does not yet know the demands of the following customers. The decision faced by the distributor is how much to allocate to each customer given that the penalties for not satisfying demand are not identical. In addition, we optimally solve the problem of loading the truck with the multiple products, given the limited storage capacity. This framework can also be used for the general problem of seat allocation in the airline industry. As with the truck in the distribution problem, the airplane has limited capacity. A critical decision is how to allocate the available seats between early and late reservations (sequence of customers), for the different fare classes (multiple products), where the revenues from discount (early) and regular (late) passengers are different.",TRUE,adj
R11,Science,R26300,Integrating Routing and Inventory Decisions in One-Warehouse Multiretailer Multiproduct Distribution Systems,S82370,R26301,Routing,R26149,multiple,"We consider distribution systems with a central warehouse and many retailers that stock a number of different products. Deterministic demand occurs at the retailers for each product. The warehouse acts as a break-bulk center and does not keep any inventory. The products are delivered from the warehouse to the retailers by vehicles that combine the deliveries to several retailers into efficient vehicle routes. The objective is to determine replenishment policies that specify the delivery quantities and the vehicle routes used for the delivery, so as to minimize the long-run average inventory and transportation costs. A new heuristic that develops a stationary nested joint replenishment policy for the problem is presented in this paper. Unlike existing methods, the proposed heuristic is capable of solving problems involving distribution systems with multiple products. Results of a computational study on randomly generated single-product problems are also presented.",TRUE,adj
R11,Science,R26321,Dynamic Programming Approximations for a Stochastic Inventory Routing Problem,S82494,R26322,Routing,R26149,multiple,"This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.",TRUE,adj
R11,Science,R27129,Effects of Exchange Rate Volatility on Trade: Some Further Evidence (Effets de l'instabilite des taux de change sur le commerce mondial: nouvelles constatations) (Efectos de la inestabilidad de los tipos de cambio en el comercio internacional: Alguna evidencia adicional),S87245,R27130,Nominal or real exchange rate used,R27124,Nominal,"A recent survey of the empirical studies examining the effects of exchange rate volatility on international trade concluded that ""the large majority of empirical studies... are unable to establish a systematically significant link between measured exchange rate variability and the volume of international trade, whether on an aggregated or on a bilateral basis"" (International Monetary Fund, Exchange Rate Volatility and World Trade, Washington, July 1984, p. 36). A recent paper by M.A. Akhtar and R.S. Hilton (""Exchange Rate Uncertainty and International Trade,"" Federal Reserve Bank of New York, May 1984), in contrast, suggests that exchange rate volatility, as measured by the standard deviation of indices of nominal effective exchange rates, has had significant adverse effects on the trade in manufactures of the United States and the Federal Republic of Germany. The purpose of the present study is to test the robustness of Akhtar and Hilton's empirical results, with their basic theoretical framework taken as given. The study extends their analysis to include France, Japan, and the United Kingdom; it then examines the robustness of the results with respect to changes in the choice of sample period, volatility measure, and estimation techniques. The main conclusion of the analysis is that the methodology of Akhtar and Hilton fails to establish a systematically significant link between exchange rate volatility and the volume of international trade. 
This is not to say that significant adverse effects cannot be detected in individual cases, but rather that, viewed in the large, the results tend to be insignificant or unstable. Specifically, the results suggest that straightforward application of Akhtar and Hilton's methodology to three additional countries (France, Japan, and the United Kingdom) yields mixed results; that their methodology seems to be flawed in several respects, and that correction for such flaws has the effect of weakening their conclusions; that the estimates are quite sensitive to fairly minor variations in methodology; and that ""revised"" estimates for the five countries do not, for the most part, support the hypothesis that exchange rate volatility has had a systematically adverse effect on trade.",TRUE,adj
R11,Science,R27220,On the Trade Impact of Nominal Exchange Rate Volatility,S87623,R27221,Nominal or real exchange rate used,R27124,Nominal,"What is the effect of nominal exchange rate variability on trade? I argue that the methods conventionally used to answer this perennial question are plagued by a variety of sources of systematic bias. I propose a novel approach that simultaneously addresses all of these biases, and present new estimates from a broad sample of countries from 1970 to 1997. The answer to the question is: Not much.",TRUE,adj
R11,Science,R27753,International Evaluation of a Localized Geography Educational Software,S90360,R27754,Result,R27727,Positive,"A report on the implementation and evaluation of an intelligent learning system: the multimedia geography tutor and game software titled Lainos World SM was localized into English, French, Spanish, German, Portuguese, Russian and Simplified Chinese. Thereafter, multilingual online surveys were set up, to which high school students were globally invited via mails to schools, targeted adverts and recruitment on Facebook, Google, etc. 1125 respondents from selected nations completed both the initial and final surveys. The effect of the software on students’ geographical knowledge was analyzed through pre- and post-test achievement scores. In general, the mean scores were higher after exposure to the educational software for fifteen days, and it was established that the score differences were statistically significant. This positive effect and other qualitative data show that the localized software, from students’ perspective, is a widely acceptable and effective educational tool for learning geography in an interactive and gaming environment.",TRUE,adj
R11,Science,R27781,Gameplaying for maths learning: cooperative or not?,S90488,R27782,Result,R27727,Positive,"This study investigated the effects of gameplaying on fifth-graders’ maths performance and attitudes. One hundred twenty five fifth graders were recruited and assigned to a cooperative Teams-Games-Tournament (TGT), interpersonal competitive or no gameplaying condition. A state standards-based maths exam and an inventory on attitudes towards maths were used for the pretest and posttest. The students’ gender, socio-economic status and prior maths ability were examined as the moderating variables and covariate. Multivariate analysis of covariance (MANCOVA) indicated that gameplaying was more effective than drills in promoting maths performance, and cooperative gameplaying was most effective for promoting positive maths attitudes regardless of students’ individual differences.",TRUE,adj
R11,Science,R27800,"Principles underlying the design of “The Number Race”, an adaptive computer game for remediation of dyscalculia",S90590,R27801,Result,R27727,Positive,"Abstract Background Adaptive game software has been successful in remediation of dyslexia. Here we describe the cognitive and algorithmic principles underlying the development of similar software for dyscalculia. Our software is based on current understanding of the cerebral representation of number and the hypotheses that dyscalculia is due to a ""core deficit"" in number sense or in the link between number sense and symbolic number representations. Methods ""The Number Race"" software trains children on an entertaining numerical comparison task, by presenting problems adapted to the performance level of the individual child. We report full mathematical specifications of the algorithm used, which relies on an internal model of the child's knowledge in a multidimensional ""learning space"" consisting of three difficulty dimensions: numerical distance, response deadline, and conceptual complexity (from non-symbolic numerosity processing to increasingly complex symbolic operations). Results The performance of the software was evaluated both by mathematical simulations and by five weeks of use by nine children with mathematical learning difficulties. The results indicate that the software adapts well to varying levels of initial knowledge and learning speeds. Feedback from children, parents and teachers was positive. A companion article [1] describes the evolution of number sense and arithmetic scores before and after training. Conclusion The software, open-source and freely available online, is designed for learning disabled children aged 5–8, and may also be useful for general instruction of normal preschool children. The learning algorithm reported is highly general, and may be applied in other domains.",TRUE,adj
R11,Science,R27804,Outdoor natural science learning with an RFID-supported immersive ubiquitous learning environment,S90602,R27805,Result,R27727,Positive,"Despite their successful use in many conscientious studies involving outdoor learning applications, mobile learning systems still have certain limitations. For instance, because students cannot obtain real-time, context-aware content in outdoor locations such as historical sites, endangered animal habitats, and geological landscapes, they are unable to search, collect, share, and edit information by using information technology. To address such concerns, this work proposes an environment of ubiquitous learning with educational resources (EULER) based on radio frequency identification (RFID), augmented reality (AR), the Internet, ubiquitous computing, embedded systems, and database technologies. EULER helps teachers deliver lessons on site and cultivate student competency in adopting information technology to improve learning. To evaluate its effectiveness, we used the proposed EULER for natural science learning at the Guandu Nature Park in Taiwan. The participants were elementary school teachers and students. The analytical results revealed that the proposed EULER improves student learning. Moreover, the largely positive feedback from a post-study survey confirms the effectiveness of EULER in supporting outdoor learning and its ability to attract the interest of students.",TRUE,adj
R11,Science,R27764,"Mobile game-based learning in secondary education: engagement, motivation and learning in a mobile city game",S90406,R27765,Method,R27763,Quasi-experimental,"Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research.",TRUE,adj
R11,Science,R26554,Energy-efficient communication protocol for wireless microsensor networks,S83669,R26614,CH election,R26611,Random,"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.",TRUE,adj
R11,Science,R26570,PEGASIS: power efficient gathering in sensor informa- tion systems,S83722,R26626,CH election,R26611,Random,"Sensor webs consisting of nodes with limited battery power and wireless communications are deployed to collect useful information from the field. Gathering sensed information in an energy efficient manner is critical to operate the sensor network for a long period of time. In W. Heinzelman et al. (Proc. Hawaii Conf. on System Sci., 2000), a data collection problem is defined where, in a round of communication, each sensor node has a packet to be sent to the distant base station. If each node transmits its sensed data directly to the base station then it will deplete its power quickly. The LEACH protocol presented by W. Heinzelman et al. is an elegant solution where clusters are formed to fuse data before transmitting to the base station. By randomizing the cluster heads chosen to transmit to the base station, LEACH achieves a factor of 8 improvement compared to direct transmissions, as measured in terms of when nodes die. In this paper, we propose PEGASIS (power-efficient gathering in sensor information systems), a near optimal chain-based protocol that is an improvement over LEACH. In PEGASIS, each node communicates only with a close neighbor and takes turns transmitting to the base station, thus reducing the amount of energy spent per round. Simulation results show that PEGASIS performs better than LEACH by about 100 to 300% when 1%, 20%, 50%, and 100% of nodes die for different network sizes and topologies.",TRUE,adj
R11,Science,R26637,A two-levels hierarchy for low-energy adaptive clustering hierarchy (TL-LEACH),S83796,R26638,CH election,R26611,Random,"Wireless sensor networks with thousands of tiny sensor nodes are expected to find wide applicability and increasing deployment in coming years, as they enable reliable monitoring and analysis of the environment. In this paper we propose a modification to a well-known protocol for sensor networks called Low Energy Adaptive Clustering Hierarchy (LEACH). This last is designed for sensor networks where end- user wants to remotely monitor the environment. In such situation, the data from the individual nodes must be sent to a central base station, often located far from the sensor network, through which the end-user can access the data. In this context our contribution is represented by building a two-level hierarchy to realize a protocol that saves better the energy consumption. Our TL-LEACH uses random rotation of local cluster base stations (primary cluster-heads and secondary cluster-heads). In this way we build, where it is possible, a two-level hierarchy. This permits to better distribute the energy load among the sensors in the network especially when the density of network is higher. TL- LEACH uses localized coordination to enable scalability and robustness. We evaluated the performances of our protocol with NS-2 and we observed that our protocol outperforms the LEACH in terms of energy consumption and lifetime of the network.",TRUE,adj
R11,Science,R26554,Energy-efficient communication protocol for wireless microsensor networks,S83901,R26657,Clustering Process CH Election,R26611,Random,"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.",TRUE,adj
R11,Science,R26602,WSN16-5: Distributed Formation of Overlapping Multi-hop Clusters in Wireless Sensor Networks,S83989,R26667,Clustering Process CH Election,R26611,Random,"Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA; a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters.",TRUE,adj
R11,Science,R26664,An energy efficient hierarchical clustering algorithm for wireless sensor networks,S83960,R26665,Clustering Process CH Election,R26611,Random,"A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center.",TRUE,adj
R11,Science,R26562,TEEN: a routing protocol for enhanced efficiency in wireless sensor networks,S83698,R26621,Nature,R26620,Reactive,"Wireless sensor networks are expected to find wide applicability and increasing deployment in the near future. In this paper, we propose a formal classification of sensor networks, based on their mode of functioning, as proactive and reactive networks. Reactive networks, as opposed to passive data collecting proactive networks, respond immediately to changes in the relevant parameters of interest. We also introduce a new energy efficient protocol, TEEN (Threshold sensitive Energy Efficient sensor Network protocol) for reactive networks. We evaluate the performance of our protocol for a simple temperature sensing application. In terms of energy efficiency, our protocol has been observed to outperform existing conventional sensor network protocols.",TRUE,adj
R11,Science,R25374,DSL-based support for semi-automated architectural component model abstraction throughout the software lifecycle,S76013,R25375,Automation,R25365,Semi-Automatic,"In this paper we present an approach for supporting the semi-automated abstraction of architectural models throughout the software lifecycle. It addresses the problem that the design and the implementation of a software system often drift apart as software systems evolve, leading to architectural knowledge evaporation. Our approach provides concepts and tool support for the semi-automatic abstraction of architectural knowledge from implemented systems and keeping the abstracted architectural knowledge up-to-date. In particular, we propose architecture abstraction concepts that are supported through a domain-specific language (DSL). Our main focus is on providing architectural abstraction specifications in the DSL that only need to be changed, if the architecture changes, but can tolerate non-architectural changes in the underlying source code. The DSL and its tools support abstracting the source code into UML component models for describing the architecture. Once the software architect has defined an architectural abstraction in the DSL, we can automatically generate UML component models from the source code and check whether the architectural design constraints are fulfilled by the models. Our approach supports full traceability between source code elements and architectural abstractions, and allows software architects to compare different versions of the generated UML component model with each other. We evaluate our research results by studying the evolution of architectural abstractions in different consecutive versions and the execution times for five existing open source systems.",TRUE,adj
R11,Science,R26311,Heavy Traffic Analysis of the Dynamic Stochastic Inventory-Routing Problem,S82419,R26312,Fleet size,R26177,single,"We analyze three queueing control problems that model a dynamic stochastic distribution system, where a single capacitated vehicle serves a finite number of retailers in a make-to-stock fashion. The objective in each of these vehicle routing and inventory problems is to minimize the long run average inventory (holding and backordering) and transportation cost. In all three problems, the controller dynamically specifies whether a vehicle at the warehouse should idle or embark with a full load. In the first problem, the vehicle must travel along a prespecified (TSP) tour of all retailers, and the controller dynamically decides how many units to deliver to each retailer. In the second problem, the vehicle delivers an entire load to one retailer (direct shipping) and the controller decides which retailer to visit next. The third problem allows the additional dynamic choice between the TSP and direct shipping options. Motivated by existing heavy traffic limit theorems, we make a time scale decomposition assumption that allows us to approximate these queueing control problems by diffusion control problems, which are explicitly solved in the fixed route problems, and numerically solved in the dynamic routing case. Simulation experiments confirm that the heavy traffic approximations are quite accurate over a broad range of problem parameters. Our results lead to some new observations about the behavior of this complex system.",TRUE,adj
R11,Science,R26192,Deliveries in an inventory/routing problem using stochastic dynamic programming,S81790,R26193,Demand,R26147,Stochastic,"An industrial gases tanker vehicle visitsn customers on a tour, with a possible ( n + 1)st customer added at the end. The amount of needed product at each customer is a known random process, typically a Wiener process. The objective is to adjust dynamically the amount of product provided on scene to each customer so as to minimize total expected costs, comprising costs of earliness, lateness, product shortfall, and returning to the depot nonempty. Earliness costs are computed by invocation of an annualized incremental cost argument. Amounts of product delivered to each customer are not known until the driver is on scene at the customer location, at which point the customer is either restocked to capacity or left with some residual empty capacity, the policy determined by stochastic dynamic programming. The methodology has applications beyond industrial gases.",TRUE,adj
R11,Science,R26279,A Markov Decision Model and Decomposition Heuristic for Dynamic Vehicle Dispatching,S82253,R26280,Demand,R26147,Stochastic,"We describe a dynamic and stochastic vehicle dispatching problem called the delivery dispatching problem. This problem is modeled as a Markov decision process. Because exact solution of this model is impractical, we adopt a heuristic approach for handling the problem. The heuristic is based in part on a decomposition of the problem by customer, where customer subproblems generate penalty functions that are applied in a master dispatching problem. We describe how to compute bounds on the algorithm's performance, and apply it to several examples with good results.",TRUE,adj
R11,Science,R26311,Heavy Traffic Analysis of the Dynamic Stochastic Inventory-Routing Problem,S82426,R26312,Demand,R26147,Stochastic,"We analyze three queueing control problems that model a dynamic stochastic distribution system, where a single capacitated vehicle serves a finite number of retailers in a make-to-stock fashion. The objective in each of these vehicle routing and inventory problems is to minimize the long run average inventory (holding and backordering) and transportation cost. In all three problems, the controller dynamically specifies whether a vehicle at the warehouse should idle or embark with a full load. In the first problem, the vehicle must travel along a prespecified (TSP) tour of all retailers, and the controller dynamically decides how many units to deliver to each retailer. In the second problem, the vehicle delivers an entire load to one retailer (direct shipping) and the controller decides which retailer to visit next. The third problem allows the additional dynamic choice between the TSP and direct shipping options. Motivated by existing heavy traffic limit theorems, we make a time scale decomposition assumption that allows us to approximate these queueing control problems by diffusion control problems, which are explicitly solved in the fixed route problems, and numerically solved in the dynamic routing case. Simulation experiments confirm that the heavy traffic approximations are quite accurate over a broad range of problem parameters. Our results lead to some new observations about the behavior of this complex system.",TRUE,adj
R11,Science,R26319,A Price-Directed Approach to Stochastic Inventory/Routing,S82478,R26320,Demand,R26147,Stochastic,"We consider a new approach to stochastic inventory/routing that approximates the future costs of current actions using optimal dual prices of a linear program. We obtain two such linear programs by formulating the control problem as a Markov decision process and then replacing the optimal value function with the sum of single-customer inventory value functions. The resulting approximation yields statewise lower bounds on optimal infinite-horizon discounted costs. We present a linear program that takes into account inventory dynamics and economics in allocating transportation costs for stochastic inventory routing. On test instances we find that these allocations do not introduce any error in the value function approximations relative to the best approximations that can be achieved without them. Also, unlike other approaches, we do not restrict the set of allowable vehicle itineraries in any way. Instead, we develop an efficient algorithm to both generate and eliminate itineraries during solution of the linear programs and control policy. In simulation experiments, the price-directed policy outperforms other policies from the literature.",TRUE,adj
R11,Science,R26321,Dynamic Programming Approximations for a Stochastic Inventory Routing Problem,S82496,R26322,Demand,R26147,Stochastic,"This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.",TRUE,adj
R11,Science,R26343,Scenario Tree-Based Heuristics for Stochastic Inventory-Routing Problems,S82619,R26344,Demand,R26147,Stochastic,"In vendor-managed inventory replenishment, the vendor decides when to make deliveries to customers, how much to deliver, and how to combine shipments using the available vehicles. This gives rise to the inventory-routing problem in which the goal is to coordinate inventory replenishment and transportation to minimize costs. The problem tackled in this paper is the stochastic inventory-routing problem, where stochastic demands are specified through general discrete distributions. The problem is formulated as a discounted infinite-horizon Markov decision problem. Heuristics based on finite scenario trees are developed. Computational results confirm the efficiency of these heuristics.",TRUE,adj
R11,Science,R28193,A Two-Stage Stochastic Network Model and Solution Methods for the Dynamic Empty Container Allocation Problem,S92182,R28194,Market,R26147,Stochastic,"Containerized liner trades have been growing steadily since the globalization of world economies intensified in the early 1990s. However, these trades are typically imbalanced in terms of the numbers of inbound and outbound containers. As a result, the relocation of empty containers has become one of the major problems faced by liner operators. In this paper, we consider the dynamic empty container allocation problem where we need to reposition empty containers and to determine the number of leased containers needed to meet customers' demand over time. We formulate this problem as a two-stage stochastic network: in stage one, the parameters such as supplies, demands, and ship capacities for empty containers are deterministic; whereas in stage two, these parameters are random variables. We need to make decisions in stage one such that the total of the stage one cost and the expected stage two cost is minimized. By taking advantage of the network structure, we show how a stochastic quasi-gradient method and a stochastic hybrid approximation procedure can be applied to solve the problem. In addition, we propose some new variations of these methods that seem to work faster in practice. We conduct numerical tests to evaluate the value of the two-stage stochastic model over a rolling horizon environment and to investigate the behavior of the solution methods with different implementations.",TRUE,adj
R11,Science,R28304,Liner shipping fleet deployment with cargo transshipment and demand uncertainty,S92669,R28305,Market,R26147,Stochastic,"This paper addresses a novel liner shipping fleet deployment problem characterized by cargo transshipment, multiple container routing options and uncertain demand, with the objective of maximizing the expected profit. This problem is formulated as a stochastic program and solved by the sample average approximation method. In this technique the objective function of the stochastic program is approximated by a sample average estimate derived from a random sample, and then the resulting deterministic program is solved. This process is repeated with different samples to obtain a good candidate solution along with the statistical estimate of its optimality gap. We apply the proposed model to a case study inspired from real-world problems faced by a major liner shipping company. Results show that the case is efficiently solved to 1% of relative optimality gap at 95% confidence level.",TRUE,adj
R11,Science,R30629,Enhanced Pictorial Structures for precise eye localization under incontrolled conditions,S102141,R30630,Challenges,R30628,Uncontrolled,"In this paper, we present an enhanced pictorial structure (PS) model for precise eye localization, a fundamental problem involved in many face processing tasks. PS is a computationally efficient framework for part-based object modelling. For face images taken under uncontrolled conditions, however, the traditional PS model is not flexible enough for handling the complicated appearance and structural variations. To extend PS, we 1) propose a discriminative PS model for a more accurate part localization when appearance changes seriously, 2) introduce a series of global constraints to improve the robustness against scale, rotation and translation, and 3) adopt a heuristic prediction method to address the difficulty of eye localization with partial occlusion. Experimental results on the challenging LFW (Labeled Face in the Wild) database show that our model can locate eyes accurately and efficiently under a broad range of uncontrolled variations involving poses, expressions, lightings, camera qualities, occlusions, etc.",TRUE,adj
R11,Science,R26640,An energy-efficient unequal clustering mechanism for wireless sensor networks,S85441,R26736,Cluster Properties Cluster size,R26730,Unequal,"Clustering provides an effective way for prolonging the lifetime of a wireless sensor network. Current clustering algorithms usually utilize two techniques, selecting cluster heads with more residual energy and rotating cluster heads periodically, to distribute the energy consumption among nodes in each cluster and extend the network lifetime. However, they rarely consider the hot spots problem in multihop wireless sensor networks. When cluster heads cooperate with each other to forward their data to the base station, the cluster heads closer to the base station are burdened with heavy relay traffic and tend to die early, leaving areas of the network uncovered and causing network partition. To address the problem, we propose an energy-efficient unequal clustering (EEUC) mechanism for periodical data gathering in wireless sensor networks. It partitions the nodes into clusters of unequal size, and clusters closer to the base station have smaller sizes than those farther away from the base station. Thus cluster heads closer to the base station can preserve some energy for the inter-cluster data forwarding. We also propose an energy-aware multihop routing protocol for the inter-cluster communication. Simulation results show that our unequal clustering mechanism balances the energy consumption well among all sensor nodes and achieves an obvious improvement on the network lifetime",TRUE,adj
R11,Science,R26739,PRODUCE: A Probability-Driven Unequal Clustering Mechanism for Wireless Sensor Networks,S85471,R26740,Cluster Properties Cluster size,R26730,Unequal,"There has been proliferation of research on seeking for distributing the energy consumption among nodes in each cluster and between cluster heads to extend the network lifetime. However, they hardly consider the hot spots problem caused by heavy relay traffic forwarded. In this paper, we propose a distributed and randomized clustering algorithm that consists of unequal sized clusters. The cluster heads closer to the base station may focus more on inter-cluster communication while distant cluster heads concentrate more on intra-cluster communication. As a result, it nearly guarantees no communication in the network gets excessively long communication distance that significantly attenuates signal strength. Simulation results show that our algorithm achieves abundant improvement in terms of the coverage time and network lifetime, especially when the density of distributed nodes is high.",TRUE,adj
R11,Science,R26742,An energy-efficient distributed unequal clustering protocol for wireless sensor networks,S85498,R26743,Cluster Properties Cluster size,R26730,Unequal,"Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called “hot spot” or “energy hole” problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.",TRUE,adj
R11,Science,R26754,An Energy-Aware Distributed Unequal Clustering Protocol for Wireless Sensor Networks,S85597,R26755,Cluster Properties Cluster size,R26730,Unequal,"Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called “hot spot” or “energy hole” problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.",TRUE,adj
R11,Science,R26763,UHEED - An Unequal Clustering Algorithm for Wireless Sensor Networks,S85673,R26764,Cluster Properties Cluster size,R26730,Unequal,"Prolonging the lifetime of wireless sensor networks has always been a determining factor when designing and deploying such networks. Clustering is one technique that can be used to extend the lifetime of sensor networks by grouping sensors together. However, there exists the hot spot problem which causes an unbalanced energy consumption in equally formed clusters. In this paper, we propose UHEED, an unequal clustering algorithm which mitigates this problem and which leads to a more uniform residual energy in the network and improves the network lifetime. Furthermore, from the simulation results presented, we were able to deduce the most appropriate unequal cluster size to be used.",TRUE,adj
R11,Science,R26766,Multihop Routing Protocol with Unequal Clustering for Wireless Sensor Networks,S85697,R26767,Cluster Properties Cluster size,R26730,Unequal,"In order to prolong the lifetime of wireless sensor networks, this paper presents a multihop routing protocol with unequal clustering (MRPUC). On the one hand, cluster heads deliver the data to the base station with relay to reduce energy consumption. On the other hand, MRPUC uses many measures to balance the energy of nodes. First, it selects the nodes with more residual energy as cluster heads, and clusters closer to the base station have smaller sizes to preserve some energy during intra-cluster communication for inter-cluster packets forwarding. Second, when regular nodes join clusters, they consider not only the distance to cluster heads but also the residual energy of cluster heads. Third, cluster heads choose those nodes as relay nodes, which have minimum energy consumption for forwarding and maximum residual energy to avoid dying earlier. Simulation results show that MRPUC performs much better than similar protocols.",TRUE,adj
R11,Science,R26773,An energy aware fuzzy unequal clustering algorithm for wireless sensor networks,S85744,R26774,Cluster Properties Cluster size,R26730,Unequal,"In order to gather information more efficiently, wireless sensor networks (WSNs) are partitioned into clusters. The most of the proposed clustering algorithms do not consider the location of the base station. This situation causes hot spots problem in multi-hop WSNs. Unequal clustering mechanisms, which are designed by considering the base station location, solve this problem. In this paper, we introduce a fuzzy unequal clustering algorithm (EAUCF) which aims to prolong the lifetime of WSNs. EAUCF adjusts the cluster-head radius considering the residual energy and the distance to the base station parameters of the sensor nodes. This helps decreasing the intra-cluster work of the sensor nodes which are closer to the base station or have lower battery level. We utilize fuzzy logic for handling the uncertainties in cluster-head radius estimation. We compare our algorithm with some popular algorithms in literature, namely LEACH, CHEF and EEUC, according to First Node Dies (FND), Half of the Nodes Alive (HNA) and energy-efficiency metrics. Our simulation results show that EAUCF performs better than the other algorithms in most of the cases. Therefore, EAUCF is a stable and energy-efficient clustering algorithm to be utilized in any real time WSN application.",TRUE,adj
R11,Science,R29751,An Empirical Study on the Environmental Kuznets Curve for China’s Carbon Emissions: Based on Provincial Panel Data,S98714,R29752,EKC Turnaround point(s),R29746,Western,"Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990–2007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China’s carbon emissions. The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.",TRUE,adj
R11,Science,R31244,Transitioning from Low-Income Growth to High-Income Growth: Is There a Middle Income Trap?,S104837,R31261,approach,R31258,Relative,"Is there a “middle-income trap”? Theory suggests that the determinants of growth at low and high income levels may be different. If countries struggle to transition from growth strategies that are effective at low income levels to growth strategies that are effective at high income levels, they may stagnate at some middle income level; this phenomenon can be thought of as a “middle-income trap.” Defining income levels based on per capita gross domestic product relative to the United States, we do not find evidence for (unusual) stagnation at any particular middle income level. However, we do find evidence that the determinants of growth at low and high income levels differ. These findings suggest a mixed conclusion: middle-income countries may need to change growth strategies in order to transition smoothly to high income growth strategies, but this can be done smoothly and does not imply the existence of a middle-income trap.",TRUE,adj
R11,Science,R28522,RIGHT HEPATOLOBECTOMY FOR PRIMARY MESENCHYMOMA OF THE LIVER*,S93533,R28523,Site,L57391,Right,SUMMARYA case report of a large primary malignant mesenchymoma of the liver is presented. This tumor was successfully removed with normal liver tissue surrounding the tumor by right hepatolobectomy. The pathologic characteristics and clinical behavior of tumors falling into this general category are,TRUE,adj
R11,Science,R26679,Distributed clustering with directional antennas for wireless sensor networks,S85077,R26680,Method,R26615,Centralized,"This paper proposes a decentralized algorithm for organizing an ad hoc sensor network into clusters with directional antennas. The proposed autonomous clustering scheme aims to reduce the sensing redundancy and maintain sufficient sensing coverage and network connectivity in sensor networks. With directional antennas, random waiting timers, and local criterions, cluster performance may be substantially improved and sensing redundancy can be drastically suppressed. The simulation results show that the proposed scheme achieves connected coverage and provides efficient network topology management.",TRUE,adj
R11,Science,R26704,A centralized energy-efficient routing protocol for wireless sensor networks,S85248,R26705,Method,R26615,Centralized,"Wireless sensor networks consist of small battery powered devices with limited energy resources. Once deployed, the small sensor nodes are usually inaccessible to the user, and thus replacement of the energy source is not feasible. Hence, energy efficiency is a key design issue that needs to be enhanced in order to improve the life span of the network. Several network layer protocols have been proposed to improve the effective lifetime of a network with a limited energy supply. In this article we propose a centralized routing protocol called base-station controlled dynamic clustering protocol (BCDCP), which distributes the energy dissipation evenly among all sensor nodes to improve network lifetime and average energy savings. The performance of BCDCP is then compared to clustering-based schemes such as low-energy adaptive clustering hierarchy (LEACH), LEACH-centralized (LEACH-C), and power-efficient gathering in sensor information systems (PEGASIS). Simulation results show that BCDCP reduces overall energy consumption and improves network lifetime over its comparatives.",TRUE,adj
R11,Science,R26554,Energy-efficient communication protocol for wireless microsensor networks,S83668,R26614,Inter-cluster topology,R26219,direct,"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.",TRUE,adj
R11,Science,R26272,On the Effectiveness of Direct Shipping Strategy for the One-Warehouse Multi-Retailer R-Systems,S82220,R26273,Routing,R26219,direct,"We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of one depot and many geographically dispersed retailers. All stock enters the system through the depot and is distributed to the retailers by vehicles of limited constant capacity. We assume that each one of the retailers faces a constant, retailer specific, demand rate and that inventory is charged only at the retailers but not at the depot. We provide a lower bound on the long run average cost over all inventory-routing strategies. We use this lower bound to show that the effectiveness of direct shipping over all inventory-routing strategies is at least 94% whenever the Economic Lot Size of each of the retailers is at least 71% of vehicle capacity. The effectiveness deteriorates as the Economic Lot Sizes become smaller. These results are important because they provide useful guidelines as to when to embark into the much more difficult task of finding cost-effective routes. Additional advantages of direct shipping are lower in-transit inventory and ease of coordination.",TRUE,adj
R11,Science,R26297,Fully Loaded Direct Shipping Strategy in One Warehouse/NRetailer Systems without Central Inventories,S82354,R26298,Routing,R26219,direct,"In this paper, we consider one warehouse/multiple retailer systems with transportation costs. The planning horizon is infinite and the warehouse keeps no central inventory. It is shown that the fully loaded direct shipping strategy is optimal among all possible shipping/allocation strategies if the truck capacity is smaller than a certain quantity, and a bound is provided for the general case.",TRUE,adj
R11,Science,R26313,The Stochastic Inventory Routing Problem with Direct Deliveries,S82442,R26314,Routing,R26219,direct,"Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries.",TRUE,adj
R11,Science,R26554,Energy-efficient communication protocol for wireless microsensor networks,S83899,R26657,Method,R26609,Distributed,"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.",TRUE,adj
R11,Science,R26566,A clustering scheme for hierarchical control in multi-hop wireless networks,S85205,R26700,Method,R26609,Distributed,"In this paper we present a clustering scheme to create a hierarchical control structure for multi-hop wireless networks. A cluster is defined as a subset of vertices, whose induced graph is connected. In addition, a cluster is required to obey certain constraints that are useful for management and scalability of the hierarchy. All these constraints cannot be met simultaneously for general graphs, but we show how such a clustering can be obtained for wireless network topologies. Finally, we present an efficient distributed implementation of our clustering algorithm for a set of wireless nodes to create the set of desired clusters.",TRUE,adj
R11,Science,R26586,"Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach",S85009,R26670,Method,R26609,Distributed,"Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.",TRUE,adj
R11,Science,R26602,WSN16-5: Distributed Formation of Overlapping Multi-hop Clusters in Wireless Sensor Networks,S83987,R26667,Method,R26609,Distributed,"Clustering is a standard approach for achieving efficient and scalable performance in wireless sensor networks. Most of the published clustering algorithms strive to generate the minimum number of disjoint clusters. However, we argue that guaranteeing some degree of overlap among clusters can facilitate many applications, like inter-cluster routing, topology discovery and node localization, recovery from cluster head failure, etc. We formulate the overlapping multi-hop clustering problem as an extension to the k-dominating set problem. Then we propose MOCA; a randomized distributed multi-hop clustering algorithm for organizing the sensors into overlapping clusters. We validate MOCA in a simulated environment and analyze the effect of different parameters, e.g. node density and network connectivity, on its performance. The simulation results demonstrate that MOCA is scalable, introduces low overhead and produces approximately equal-sized clusters.",TRUE,adj
R11,Science,R26634,SEP: A Stable Election Protocol for clustered heterogeneous wireless sensor networks,S83778,R26635,Method,R26609,Distributed,"We study the impact of heterogeneity of nodes, in terms of their energy, in wireless sensor networks that are hierarchically clustered. In these networks some of the nodes become cluster heads, aggregate the data of their cluster members and transmit it to the sink. We assume that a percentage of the population of sensor nodes is equipped with additional energy resources—this is a source of heterogeneity which may result from the initial setting or as the operation of the network evolves. We also assume that the sensors are randomly (uniformly) distributed and are not mobile, the coordinates of the sink and the dimensions of the sensor field are known. We show that the behavior of such sensor networks becomes very unstable once the first node dies, especially in the presence of node heterogeneity. Classical clustering protocols assume that all the nodes are equipped with the same amount of energy and as a result, they can not take full advantage of the presence of node heterogeneity. We propose SEP, a heterogeneous-aware protocol to prolong the time interval before the death of the first node (we refer to as stability period), which is crucial for many applications where the feedback from the sensor network must be reliable. SEP is based on weighted election probabilities of each node to become cluster head according to the remaining energy in each node. We show by simulation that SEP always prolongs the stability period compared to (and that the average throughput is greater than) the one obtained using current clustering protocols. We conclude by studying the sensitivity of our SEP protocol to heterogeneity parameters capturing energy imbalance in the network. We found that SEP yields longer stability region for higher values of extra energy brought by more powerful nodes.",TRUE,adj
R11,Science,R26646,A clustering method for energy efficient routing in wireless sensor networks,S83849,R26647,Method,R26609,Distributed,"Low-Energy Adaptive Clustering Hierarchy (LEACH) is one of the most popular distributed cluster-based routing protocols in wireless sensor networks. Clustering algorithm of the LEACH is simple but offers no guarantee about even distribution of cluster heads over the network. And it assumes that each cluster head transmits data to sink over a single hop. In this paper, we propose a new method for selecting cluster heads to evenly distribute cluster heads. It avoids creating redundant cluster heads within a small geographical range. Simulation results show that our scheme reduces energy dissipation and prolongs network lifetime as compared with LEACH.",TRUE,adj
R11,Science,R26652,Distance based thresholds for cluster head selection in wireless sensor networks,S83883,R26653,Method,R26609,Distributed,"Central to the cluster-based routing protocols is the cluster head (CH) selection procedure that allows even distribution of energy consumption among the sensors, and therefore prolonging the lifespan of a sensor network. We propose a distributed CH selection algorithm that takes into account the distances from sensors to a base station that optimally balances the energy consumption among the sensors. NS-2 simulations show that our proposed scheme outperforms existing algorithms in terms of the average node lifespan and the time to first node death.",TRUE,adj
R11,Science,R26664,An energy efficient hierarchical clustering algorithm for wireless sensor networks,S83961,R26665,Method,R26609,Distributed,"A wireless network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The data collected by each sensor is communicated through the network to a single processing center that uses all reported data to determine characteristics of the environment or detect an event. The communication or message passing process must be designed to conserve the limited energy resources of the sensors. Clustering sensors into groups, so that sensors communicate information only to clusterheads and then the clusterheads communicate the aggregated information to the processing center, may save energy. In this paper, we propose a distributed, randomized clustering algorithm to organize the sensors in a wireless sensor network into clusters. We then extend this algorithm to generate a hierarchy of clusterheads and observe that the energy savings increase with the number of levels in the hierarchy. Results in stochastic geometry are used to derive solutions for the values of parameters of our algorithm that minimize the total energy spent in the network when all sensors report data through the clusterheads to the processing center.",TRUE,adj
R11,Science,R26687,TASC: topology adaptive spatial clustering for sensor networks,S85137,R26688,Method,R26609,Distributed,"The ability to extract topological regularity out of large randomly deployed sensor networks holds the promise to maximally leverage correlation for data aggregation and also to assist with sensor localization and hierarchy creation. This paper focuses on extracting such regular structures from physical topology through the development of a distributed clustering scheme. The topology adaptive spatial clustering (TASC) algorithm presented here is a distributed algorithm that partitions the network into a set of locally isotropic, non-overlapping clusters without prior knowledge of the number of clusters, cluster size and node coordinates. This is achieved by deriving a set of weights that encode distance measurements, connectivity and density information within the locality of each node. The derived weights form the terrain for holding a coordinated leader election in which each node selects the node closer to the center of mass of its neighborhood to become its leader. The clustering algorithm also employs a dynamic density reachability criterion that groups nodes according to their neighborhood's density properties. Our simulation results show that the proposed algorithm can trace locally isotropic structures in non-isotropic network and cluster the network with respect to local density attributes. We also found out that TASC exhibits consistent behavior in the presence of moderate measurement noise levels",TRUE,adj
R11,Science,R26718,Topology-controlled adaptive clustering for uniformity and increased lifetime in wireless sensor networks,S85354,R26719,Method,R26609,Distributed,"Owing to the dynamic nature of sensor network applications the adoption of adaptive cluster-based topologies has many untapped desirable benefits for the wireless sensor networks. In this study, the authors explore such possibility and present an adaptive clustering algorithm to increase the network's lifetime while maintaining the required network connectivity. The proposed scheme features capability of cluster heads to adjust their power level to achieve optimal degree and maintain this value throughout the network operation. Under the proposed method a topology control allows an optimal degree, which results in a better distributed sensors and well-balanced clustering system enhancing networks' lifetime. The simulation results show that the proposed clustering algorithm maintains the required degree for inter-cluster connectivity on many more rounds compared with hybrid energy-efficient distributed clustering (HEED), energy-efficient clustering scheme (EECS), low-energy adaptive clustering hierarchy (LEACH) and energy-based LEACH.",TRUE,adj
R11,Science,R26742,An energy-efficient distributed unequal clustering protocol for wireless sensor networks,S85505,R26743,Method,R26609,Distributed,"Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called “hot spot” or “energy hole” problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes.Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.",TRUE,adj
R11,Science,R26748,An Energy-Efficient Clustering Solution for Wireless Sensor Networks,S85554,R26749,Method,R26609,Distributed,"Hot spots in a wireless sensor network emerge as locations under heavy traffic load. Nodes in such areas quickly deplete energy resources, leading to disruption in network services. This problem is common for data collection scenarios in which Cluster Heads (CH) have a heavy burden of gathering and relaying information. The relay load on CHs especially intensifies as the distance to the sink decreases. To balance the traffic load and the energy consumption in the network, the CH role should be rotated among all nodes and the cluster sizes should be carefully determined at different parts of the network. This paper proposes a distributed clustering algorithm, Energy-efficient Clustering (EC), that determines suitable cluster sizes depending on the hop distance to the data sink, while achieving approximate equalization of node lifetimes and reduced energy consumption levels. We additionally propose a simple energy-efficient multihop data collection protocol to evaluate the effectiveness of EC and calculate the end-to-end energy consumption of this protocol; yet EC is suitable for any data collection protocol that focuses on energy conservation. Performance results demonstrate that EC extends network lifetime and achieves energy equalization more effectively than two well-known clustering algorithms, HEED and UCR.",TRUE,adj
R11,Science,R26754,An Energy-Aware Distributed Unequal Clustering Protocol for Wireless Sensor Networks,S85604,R26755,Method,R26609,Distributed,"Due to the imbalance of energy consumption of nodes in wireless sensor networks (WSNs), some local nodes die prematurely, which causes the network partitions and then shortens the lifetime of the network. The phenomenon is called “hot spot” or “energy hole” problem. For this problem, an energy-aware distributed unequal clustering protocol (EADUC) in multihop heterogeneous WSNs is proposed. Compared with the previous protocols, the cluster heads obtained by EADUC can achieve balanced energy, good distribution, and seamless coverage for all the nodes. Moreover, the complexity of time and control message is low. Simulation experiments show that EADUC can prolong the lifetime of the network significantly.",TRUE,adj
R11,Science,R32197,Chemical Composition of the Essential Oil ofArtemisia herba-albaAsso Grown in Algeria,S109504,R32198,Plant material status,R32191,Dry,"Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identifed, 33 of them being reported for the frst time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and β-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.",TRUE,adj
R11,Science,R32377,IMPACT OF SEASON AND HARVEST FREQUENCY ON BIOMASS AND ESSENTIAL OIL YIELDS OF ARTEMISIA HERBA-ALBA CULTIVATED IN SOUTHERN TUNISIA,S109953,R32378,Plant material status,R32191,Dry,"SUMMARYArtemisia herba-alba Asso has been successfully cultivated in the Tunisian arid zone. However, information regarding the effects of the harvest frequency on its biomass and essential oil yields is very limited. In this study, the effects of three different frequencies of harvesting the upper half of the A. herba-alba plant tuft were compared. The harvest treatments were: harvesting the same individual plants at the flowering stage annually; harvesting the same individual plants at the full vegetative growth stage annually and harvesting the same individual plants every six months. Statistical analyses indicated that all properties studied were affected by the harvest frequency. Essential oil yield, depended both on the dry biomass and its essential oil content, and was significantly higher from plants harvested annually at the flowering stage than the other two treatments. The composition of the β- and α-thujone-rich oils did not vary throughout the experimental period.",TRUE,adj
R11,Science,R26302,Probabilistic Analyses and Algorithms for Three-Level Distribution Systems,S82386,R26303,Inventory,R26163,Fixed,"We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of a single outside vendor, a fixed number of warehouses and many geographically dispersed retailers. Each retailer faces a constant, retailer specific, demand rate and inventory holding cost is charged at the retailers and the warehouses. We show that, in an effective strategy which minimizes the asymptotic long run average cost, each warehouse receives fully loaded trucks from the vendor but never holds inventory. That is, each warehouse serves only as a coordinator of the frequency, time and sizes of deliveries to the retailers. This insight is used to construct an inventory control policy and vehicle routing strategy for multi-echelon distribution systems. Computational results are also reported.",TRUE,adj
R11,Science,R25701,GrOWL: A Tool for Visualization and Editing of OWL Ontologies,S77845,R25702,System,L48719,GrOWL ,"In an effort to optimize visualization and editing of OWL ontologies we have developed GrOWL: a browser and visual editor for OWL that accurately visualizes the underlying DL semantics of OWL ontologies while avoiding the difficulties of the verbose OWL syntax. In this paper, we discuss GrOWL visualization model and the essential visualization techniques implemented in GrOWL.",TRUE,adj
R11,Science,R28842,Multi-Objective Approaches to Optimal Testing Resource Allocation in Modular Software Systems,S95185,R28843,Algorithm(s),R28840,HaD-MOEA,"Software testing is an important issue in software engineering. As software systems become increasingly large and complex, the problem of how to optimally allocate the limited testing resource during the testing phase has become more important, and difficult. Traditional Optimal Testing Resource Allocation Problems (OTRAPs) involve seeking an optimal allocation of a limited amount of testing resource to a number of activities with respect to some objectives (e.g., reliability, or cost). We suggest solving OTRAPs with Multi-Objective Evolutionary Algorithms (MOEAs). Specifically, we formulate OTRAPs as two types of multi-objective problems. First, we consider the reliability of the system and the testing cost as two objectives. Second, the total testing resource consumed is also taken into account as the third objective. The advantages of MOEAs over state-of-the-art single objective approaches to OTRAPs will be shown through empirical studies. Our study has revealed that a well-known MOEA, namely Nondominated Sorting Genetic Algorithm II (NSGA-II), performs well on the first problem formulation, but fails on the second one. Hence, a Harmonic Distance Based Multi-Objective Evolutionary Algorithm (HaD-MOEA) is proposed and evaluated in this paper. Comprehensive experimental studies on both parallel-series, and star-structure modular software systems have shown the superiority of HaD-MOEA over NSGA-II for OTRAPs.",TRUE,adj
R11,Science,R29725,On the Relationship Between CO 2 Emissions and Economic Growth: The Mauritian Experience,S98638,R29726,Shape of EKC,R29366,increasing,"This paper analyses the relationship between GDP and carbon dioxide emissions for Mauritius and vice-versa in a historical perspective. Using rigorous econometrics analysis, our results suggest that the carbon dioxide emission trajectory is closely related to the GDP time path. We show that emissions elasticity on income has been increasing over time. By estimating the EKC for the period 1975-2009, we were unable to prove the existence of a reasonable turning point and thus no EKC “U” shape was obtained. Our results suggest that Mauritius could not curb its carbon dioxide emissions in the last three decades. Thus, as hypothesized, the cost of degradation associated with GDP grows over time and it suggests that the economic and human activities are having increasingly negative environmental impacts on the country as cpmpared to their economic prosperity.",TRUE,adj
R11,Science,R29843,"An econometric study of carbon dioxide (CO2) emissions, energy consumption, and economic growth of Pakistan",S99032,R29844,Shape of EKC,R29366,increasing,"Purpose – The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capital carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator.Design/methodology/approach – The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like ADF Unit Root Johansen Co‐integration VECM and Granger causality tests.Findings – The Granger causality test shows that there is a long term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, i...",TRUE,adj
R11,Science,R28558,Undifferentiated Sarcoma of the Liver in a 21-year-old Woman: Case Report,S93853,R28559,Site,L57610,Left,"A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a film, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor, found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation.",TRUE,adj
R11,Science,R28567,"Embryonal sarcoma of the liver in an adult treated with preoperative chemotherapy, radiation therapy, and hepatic lobectomy",S93960,R28568,Site,L57687,Left,"A rare case of embryonal sarcoma of the liver in a 28‐year‐old man is reported. The patient was treated preoperatively with a combination of chemotherapy and radiation therapy. Complete surgical resection, 4.5 months after diagnosis, consisted of a left hepatic lobectomy. No viable tumor was found in the operative specimen. The patient was disease‐free 20 months postoperatively.",TRUE,adj
R11,Science,R28576,Undifferentiated embryonal sarcoma of the liver,S94083,R28578,Site,L57780,Left,"IMAGING FINDINGS Case 1: Initial abdominal ultrasound scan demonstrated a large heterogeneous, echogenic mass within the liver displaying poor blood flow (Figure 1). A contrast-enhanced CT scan of the chest, abdomen and pelvis was then performed, revealing a well-defined, hypodense mass in the right lobe of the liver (Figure 2) measuring approximately 11.3 cm AP x 9.8 cm transverse x 9.2 cm in the sagittal plane. An arterial phase CT scan showed a hypodense mass with a hyperdense rim (Figure 3A) and a delayed venous phase scan showed the low-density mass with areas of increased density displaying the solid nature of the lesion (Figure 3A). These findings combined with biopsy confirmed undifferentiated embryonal sarcoma (UES). Case 2: An abdominal ultrasound scan initially revealed a large heterogeneous lesion in the center of the liver with a small amount of blood flow (Figure 4). Inconclusive ultrasound results warranted a CT scan of the chest, abdomen and pelvis with contrast, which showed a heterogeneous low-density lesion within the right lobe of the liver that extended to the left lobe (Figure 5). The mass measured approximately 12.3 AP x 12.3 transverse x 10.7 in the sagittal plane. Arterial-phase CT showed a well-defined hypodense mass with vessels coursing throughout (Figure 6A). Delayed venous phase demonstrated the solid consistency of the mass by showing continued filling in of the mass (Figure 6B). A PET scan was done to evaluate the extent of the disease. FDG-avid tissue was documented in the large lobulated hepatic mass (Figure 7A,7B).",TRUE,adj
R11,Science,R28599,Undifferentiated (embryonal) sarcoma of liver in adult: a case report,S94368,R28600,Site,L57997,Left,"We report a case of undifferentiated (embryonal) sarcoma of the liver (UESL), which showed cystic formation in a 20-year-old man with no prior history of any hepatitis or liver cirrhosis. He was admitted with abdominal pain and a palpable epigastric mass. The physical examination findings were unremarkable except for a tenderness mass and the results of routine laboratory studies were all within normal limits. Abdominal ultrasound and computed tomography (CT) both showed a cystic mass in the left hepatic lobe. Subsequently, the patient underwent a tumor excision and another two times of hepatectomy because of tumor recurrence. Immunohistochemical study results showed that the tumor cells were positive for vimentin, alpha-1-antichymotrypsin (AACT) and desmin staining, and negative for alpha-fetoprotein (AFP), and eosinophilic hyaline globules in the cytoplasm of some giant cells were strongly positive for periodic acid-Schiff (PAS) staining. The pathological diagnosis was UESL. The patient is still alive with no tumor recurrence for four months.",TRUE,adj
R141823,Semantic Web,R172802,Building accurate semantic taxonomies from monolingual MRDs,S689393,R172804,ontology learning approach,L463591,automatic,"This paper presents a method that conbines a set of unsupervised algorithms in order to accurately build large taxonomies from any machine-readable dictionary (MRD). Our aim is to profit from conventional MRDs, with no explicit semantic coding. We propose a system that 1) performs fully automatic extraction of taxonomic links from MRD entries and 2) ranks the extracted relations in a way that selective manual refinement is allowed. Tested accuracy can reach around 100% depending on the degree of coverage selected, showing that taxonomy building is not limited to structured dictionaries such as LDOCE.",TRUE,adj
R141823,Semantic Web,R172818,Automated Learning of Social Ontologies,S689444,R172820,ontology learning approach,L463626,automatic,"Learned social ontologies can be viewed as products of a social fermentation process, i.e. a process between users who belong in communities of common interests (CoI), in open, collaborative, and communicative environments. In such a setting, social fermentation ensures the automatic encapsulation of agreement and trust of shared knowledge that participating stakeholders provide during an ontology learning task. This chapter discusses the requirements for the automated learning of social ontologies and presents a working method and results of preliminary work. Furthermore, due to its importance for the exploitation of the learned ontologies, it introduces a model for representing the interlinking of agreement, trust and the learned domain conceptualizations that are extracted from social content. The motivation behind this work is an effort towards supporting the design of methods for learning ontologies from social content i.e. methods that aim to learn not only domain conceptualizations but also the degree that agents (software and human) may trust these conceptualizations or not.",TRUE,adj
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572374,R142489,Application Domain,R142490,Medical,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,adj
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601096,R149949,Application Domain,R149641,Medical,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,adj
R370,"Work, Economy and Organizations",R4308,Challenge: Processing web texts for classifying job offers,S4462,R4319,method,L3129,explicit-rules,"Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.",TRUE,adj
R370,"Work, Economy and Organizations",R4372,Challenge: Processing web texts for classifying job offers,S4577,R4383,method,L3181,explicit-rules,"Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.",TRUE,adj
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558966,R139995,proposes,R140014,smart city characteristic,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,adjective phrase
R135,Databases/Information Systems,R6100,A fast method based on multiple clustering for name disambiguation in bibliographic citations,S6299,R6101,Performance metric,R6064,Pairwise F1,"Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse‐to‐fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state‐of‐the‐art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.",TRUE,adjective phrase
R135,Databases/Information Systems,R6116,A Real-time Heuristic-based Unsupervised Method for Name Disambiguation in Digital Libraries,S6362,R6117,Method,R6113,Unsupervised and Adaptive,"This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an ecient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users’ interactions in order to include users’ feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, aliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers’ names considered to be highly ambiguous produced high precision and recall results, and decisively armed the viability of our algorithm.",TRUE,adjective phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574474,R143488,Outcomes,R143475, Snow water equivalent,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,adjective phrase
R194,Engineering,R143687,Design and Development of a Flexible Strain Sensor for Textile Structures Based on a Conductive Polymer Composite,S574967,R143689,keywords,L402731,carbon black,"The aim of this work is to develop a smart flexible sensor adapted to textile structures, able to measure their strain deformations. The sensors are “smart” because of their capacity to adapt to the specific mechanical properties of textile structures that are lightweight, highly flexible, stretchable, elastic, etc. Because of these properties, textile structures are continuously in movement and easily deformed, even under very low stresses. It is therefore important that the integration of a sensor does not modify their general behavior. The material used for the sensor is based on a thermoplastic elastomer (Evoprene)/carbon black nanoparticle composite, and presents general mechanical properties strongly compatible with the textile substrate. Two preparation techniques are investigated: the conventional melt-mixing process, and the solvent process which is found to be more adapted for this particular application. The preparation procedure is fully described, namely the optimization of the process in terms of filler concentration in which the percolation theory aspects have to be considered. The sensor is then integrated on a thin, lightweight Nylon fabric, and the electromechanical characterization is performed to demonstrate the adaptability and the correct functioning of the sensor as a strain gauge on the fabric. A normalized relative resistance is defined in order to characterize the electrical response of the sensor. Finally, the influence of environmental factors, such as temperature and atmospheric humidity, on the sensor performance is investigated. The results show that the sensor's electrical resistance is particularly affected by humidity. This behavior is discussed in terms of the sensitivity of the carbon black filler particles to the presence of water.",TRUE,adjective phrase
R194,Engineering,R139273,A Highly Sensitive Nonenzymatic Glucose Biosensor Based on the Regulatory Effect of Glucose on Electrochemical Behaviors of Colloidal Silver Nanoparticles on MoS2,S555078,R139276,keywords,L390465,colloidal silver nanoparticle,"A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. Additionally, the excellent sensitivity (9044.6 μA·mM−1·cm−2), low detection limit (0.03 μM), appropriate linear range of 0.1–1000 μM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.",TRUE,adjective phrase
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662963,R166456,data source,R164295,PubMed Central,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,adjective phrase
R145261,Natural Language Processing,R163224,An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature,S651015,R163226,Other resources,R163264,SNOMED CT,"The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html",TRUE,adjective phrase
R129,Organic Chemistry,R154543,Effect of Pt treated fullerene/TiO2 on the photocatalytic degradation of MO under visible light,S618623,R154545,Degraded substance,R154549,methyl orange,"Platinum treated fullerene/TiO2 composites (Pt-fullerene/TiO2) were prepared using a sol–gel method. The composite obtained was characterized by FT-IR, BET surface area measurements, X-ray diffraction, energy dispersive X-ray analysis, transmission electron microscopy (TEM) and UV-vis analysis. A methyl orange (MO) solution under visible light irradiation was used to determine the photocatalytic activity. Excellent photocatalytic degradation of a MO solution was observed using the Pt-TiO2, fullerene-TiO2 and Pt-fullerene/TiO2 composites under visible light. An increase in photocatalytic activity was observed and Pt-fullerene/TiO2 has the best photocatalytic activity, which may be attributable to increase of the photo-absorption effect by the fullerene and the cooperative effect of the Pt.",TRUE,adjective phrase
R11,Science,R26330,On the Interactions Between Routing and Inventory-Management Policies in a One-WarehouseN-Retailer Distribution System,S82544,R26331,approach,R26329,change-revert heuristic,"This paper examines the interactions between routing and inventory-management decisions in a two-level supply chain consisting of a cross-docking warehouse and N retailers. Retailer demand is normally distributed and independent across retailers and over time. Travel times are fixed between pairs of system sites. Every m time periods, system inventory is replenished at the warehouse, whereupon an uncapacitated vehicle departs on a route that visits each retailer once and only once, allocating all of its inventory based on the status of inventory at the retailers who have not yet received allocations. The retailers experience newsvendor-type inventory-holding and backorder-penalty costs each period; the vehicle experiences in-transit inventory-holding costs each period. Our goal is to determine a combined system inventory-replenishment, routing, and inventory-allocation policy that minimizes the total expected cost/period of the system over an infinite time horizon. Our analysis begins by examining the determination of the optimal static route, i.e., the best route if the vehicle must travel the same route every replenishment-allocation cycle. Here we demonstrate that the optimal static route is not the shortest-total-distance (TSP) route, but depends on the variance of customer demands, and, if in-transit inventory-holding costs are charged, also on mean customer demands. We then examine dynamic-routing policies, i.e., policies that can change the route from one system-replenishment-allocation cycle to another, based on the status of the retailers' inventories. 
Here we argue that in the absence of transportation-related cost, the optimal dynamic-routing policy should be viewed as balancing management's ability to respond to system uncertainties (by changing routes) against system uncertainties that are induced by changing routes. We then examine the performance of a change-revert heuristic policy. Although its routing decisions are not fully dynamic, but determined and fixed for a given cycle at the time of each system replenishment, simulation tests with N = 2 and N = 6 retailers indicate that its use can substantially reduce system inventory-related costs even if most of the time the chosen route is the optimal static route.",TRUE,adjective phrase
R11,Science,R32077,Development of a real-time learning scheduler using reinforcement learning concepts,S108928,R32078,Criterion A,R32069,Not available,"A scheme for the scheduling of flexible manufacturing systems (FMS) has been developed which divides the scheduling function (built upon a generic controller architecture) into four different steps: candidate rule selection, transient phenomena analysis, multicriteria compromise analysis, and learning. This scheme is based on a hybrid architecture which utilizes neural networks, simulation, genetic algorithms, and induction mechanism. This paper investigates the candidate rule selection process, which selects a small list of scheduling rules from a larger list of such rules. This candidate rule selector is developed by using the integration of dynamic programming and neural networks. The system achieves real-time learning using this approach. In addition, since an expert scheduler is not available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is as measured by the resulting performance criteria). The approach is discussed and further research issues are presented.",TRUE,adjective phrase
R11,Science,R32077,Development of a real-time learning scheduler using reinforcement learning concepts,S108925,R32078,job complexity and routing flexibility,R32069,Not available,"A scheme for the scheduling of flexible manufacturing systems (FMS) has been developed which divides the scheduling function (built upon a generic controller architecture) into four different steps: candidate rule selection, transient phenomena analysis, multicriteria compromise analysis, and learning. This scheme is based on a hybrid architecture which utilizes neural networks, simulation, genetic algorithms, and induction mechanism. This paper investigates the candidate rule selection process, which selects a small list of scheduling rules from a larger list of such rules. This candidate rule selector is developed by using the integration of dynamic programming and neural networks. The system achieves real-time learning using this approach. In addition, since an expert scheduler is not available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is as measured by the resulting performance criteria). The approach is discussed and further research issues are presented.",TRUE,adjective phrase
R11,Science,R32077,Development of a real-time learning scheduler using reinforcement learning concepts,S108926,R32078,Resources and their constraints,L65458,Not available,"A scheme for the scheduling of flexible manufacturing systems (FMS) has been developed which divides the scheduling function (built upon a generic controller architecture) into four different steps: candidate rule selection, transient phenomena analysis, multicriteria compromise analysis, and learning. This scheme is based on a hybrid architecture which utilizes neural networks, simulation, genetic algorithms, and induction mechanism. This paper investigates the candidate rule selection process, which selects a small list of scheduling rules from a larger list of such rules. This candidate rule selector is developed by using the integration of dynamic programming and neural networks. The system achieves real-time learning using this approach. In addition, since an expert scheduler is not available, it utilizes reinforcement signals from the environment (a measure of how desirable the achieved state is as measured by the resulting performance criteria). The approach is discussed and further research issues are presented.",TRUE,adjective phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338130,R71590,Data,R71618,several months or more,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. 
Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,adjective phrase
,electrical engineering,R145522,Remarkable Improvement in Foldability of Poly‐Si Thin‐Film Transistor on Polyimide Substrate Using Blue Laser Crystallization of Amorphous Si and Comparison with Conventional Poly‐Si Thin‐Film Transistor Used for Foldable Displays,S582943,R145526,keywords,L407151,Amorphous Si,"Highly robust poly‐Si thin‐film transistor (TFT) on polyimide (PI) substrate using blue laser annealing (BLA) of amorphous silicon (a‐Si) for lateral crystallization is demonstrated. Its foldability is compared with the conventional excimer laser annealing (ELA) poly‐Si TFT on PI used for foldable displays exhibiting field‐effect mobility of 85 cm2 (V s)−1. The BLA poly‐Si TFT on PI exhibits the field‐effect mobility, threshold voltage (VTH), and subthreshold swing of 153 cm2 (V s)−1, −2.7 V, and 0.2 V dec−1, respectively. Most important finding is the excellent foldability of BLA TFT compared with the ELA poly‐Si TFTs on PI substrates. The VTH shift of BLA poly‐Si TFT is ≈0.1 V, which is much smaller than that (≈2 V) of ELA TFT on PI upon 30 000 cycle folding. The defects are generated at the grain boundary region of ELA poly‐Si during folding. However, BLA poly‐Si has no protrusion in the poly‐Si channel and thus no defect generation during folding. This leads to excellent foldability of BLA poly‐Si on PI substrate.",TRUE,adjective phrase
,electrical engineering,R110372,Fabrication of a Monolithic Implantable Neural Interface from Cubic Silicon Carbide,S503033,R110377,Film structure,L363497,Single crystalline,"One of the main issues with micron-sized intracortical neural interfaces (INIs) is their long-term reliability, with one major factor stemming from the material failure caused by the heterogeneous integration of multiple materials used to realize the implant. Single crystalline cubic silicon carbide (3C-SiC) is a semiconductor material that has been long recognized for its mechanical robustness and chemical inertness. It has the benefit of demonstrated biocompatibility, which makes it a promising candidate for chronically-stable, implantable INIs. Here, we report on the fabrication and initial electrochemical characterization of a nearly monolithic, Michigan-style 3C-SiC microelectrode array (MEA) probe. The probe consists of a single 5 mm-long shank with 16 electrode sites. An ~8 µm-thick p-type 3C-SiC epilayer was grown on a silicon-on-insulator (SOI) wafer, which was followed by a ~2 µm-thick epilayer of heavily n-type (n+) 3C-SiC in order to form conductive traces and the electrode sites. Diodes formed between the p and n+ layers provided substrate isolation between the channels. A thin layer of amorphous silicon carbide (a-SiC) was deposited via plasma-enhanced chemical vapor deposition (PECVD) to insulate the surface of the probe from the external environment. Forming the probes on a SOI wafer supported the ease of probe removal from the handle wafer by simple immersion in HF, thus aiding in the manufacturability of the probes. Free-standing probes and planar single-ended test microelectrodes were fabricated from the same 3C-SiC epiwafers. Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) were performed on test microelectrodes with an area of 491 µm2 in phosphate buffered saline (PBS) solution. 
The measurements showed an impedance magnitude of 165 kΩ ± 14.7 kΩ (mean ± standard deviation) at 1 kHz, anodic charge storage capacity (CSC) of 15.4 ± 1.46 mC/cm2, and a cathodic CSC of 15.2 ± 1.03 mC/cm2. Current-voltage tests were conducted to characterize the p-n diode, n-p-n junction isolation, and leakage currents. The turn-on voltage was determined to be on the order of ~1.4 V and the leakage current was less than 8 μArms. This all-SiC neural probe realizes nearly monolithic integration of device components to provide a likely neurocompatible INI that should mitigate long-term reliability issues associated with chronic implantation.",TRUE,adjective phrase
R20,Anatomy,R110614,Hypercholesterolemia in pregnant mice increases the susceptibility to atherosclerosis in adult life,S504044,R110616,atherosclerosis incidence,L364128,90%,"Purpose To determine the effects of hypercholesterolemia in pregnant mice on the susceptibility to atherosclerosis in adult life through a new animal modeling approach. Methods Male offspring from apoE−/− mice fed with regular (R) or high (H) cholesterol chow during pregnancy were randomly subjected to regular (Groups R–R and H–R, n = 10) or high cholesterol diet (Groups R–H and H–H, n = 10) for 14 weeks. Plasma lipid profiles were determined in all rats. The abdominal aorta was examined for the severity of atherosclerotic lesions in offspring. Results Lipids significantly increased while high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol decreased in mothers fed high cholesterol chow after delivery compared with before pregnancy (p < 0.01). Groups R–H and H–R indicated dyslipidemia and significant atherosclerotic lesions. Group H–H demonstrated the highest lipids, lowest high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol, highest incidence (90%), plaque area to luminal area ratio (0.78 ± 0.02) and intima to media ratio (1.57 ± 0.05). Conclusion Hypercholesterolemia in pregnant mice may increase susceptibility to atherosclerosis in their adult offspring.",TRUE,count/measurement
R114008,Applied Physics,R137416,Flux of OH and O radicals onto a surface by an atmospheric-pressure helium plasma jet measured by laser-induced fluorescence,S543692,R137418,Excitation_frequency,L382847,10 kHz,"The atmospheric-pressure helium plasma jet is of emerging interest as a cutting-edge biomedical device for cancer treatment, wound healing and sterilization. Reactive oxygen species such as OH and O radicals are considered to be major factors in the application of biological plasma. In this study, density distribution, temporal behaviour and flux of OH and O radicals on a surface are measured using laser-induced fluorescence. A helium plasma jet is generated by applying pulsed high voltage of 8 kV with 10 kHz using a quartz tube with an inner diameter of 4 mm. To evaluate the relation between the surface condition and active species production, three surfaces are used: dry, wet and rat skin. When the helium flow rate is 1.5 l min−1, radial distribution of OH density on the rat skin surface shows a maximum density of 1.2 × 1013 cm−3 at the centre of the plasma-mediated area, while O atom density shows a maximum of 1.0 × 1015 cm−3 at 2.0 mm radius from the centre of the plasma-mediated area. Their densities in the effluent of the plasma jet are almost constant during the intervals of the discharge pulses because their lifetimes are longer than the pulse interval. Their density distribution depends on the helium flow rate and the surface humidity. With these results, OH and O production mechanisms in the plasma jet and their flux onto the surface are discussed.",TRUE,count/measurement
R114008,Applied Physics,R137447,Spectroscopic Investigation of a Microwave-Generated Atmospheric Pressure Plasma Torch,S543902,R137449,Excitation_frequency,L383015,2.45 GHz,"The investigated new microwave plasma torch is based on an axially symmetric resonator. Microwaves of a frequency of 2.45 GHz are resonantly fed into this cavity resulting in a sufficiently high electric field to ignite plasma without any additional igniters as well as to maintain stable plasma operation. Optical emission spectroscopy was carried out to characterize a humid air plasma. OH‐bands were used to determine the gas rotational temperature Trot while the electron temperature was estimated by a Boltzmann plot of oxygen lines. Maximum temperatures of Trot of about 3600 K and electron temperatures of 5800 K could be measured. The electron density ne was estimated to ne ≈ 3 · 1020m–3 by using Saha's equation. Parametric studies in dependence of the gas flow and the supplied microwave power revealed that the maximum temperatures are independent of these parameters. However, the volume of the plasma increases with increasing microwave power and with a decrease of the gas flow. Considerations using collision frequencies, energy transfer times and power coupling provide an explanation of the observed phenomena: The optimal microwave heating is reached for electron‐neutral collision frequencies νen being near to the angular frequency of the wave ω (© 2012 WILEY‐VCH Verlag GmbH & Co. KGaA, Weinheim)",TRUE,count/measurement
R114008,Applied Physics,R137450,Modeling of microwave-induced plasma in argon at atmospheric pressure,S543920,R137452,Excitation_frequency,L383029,2.45 GHz,"A two-dimensional model of microwave-induced plasma (field frequency 2.45 GHz) in argon at atmospheric pressure is presented. The model describes in a self-consistent manner the gas flow and heat transfer, the in-coupling of the microwave energy into the plasma, and the reaction kinetics relevant to high-pressure argon plasma including the contribution of molecular ion species. The model provides the gas and electron temperature distributions, the electron, ion, and excited state number densities, and the power deposited into the plasma for given gas flow rate and temperature at the inlet, and input power of the incoming TEM microwave. For flow rate and absorbed microwave power typical for analytical applications (200-400 ml/min and 20 W), the plasma is far from thermodynamic equilibrium. The gas temperature reaches values above 2000 K in the plasma region, while the electron temperature is about 1 eV. The electron density reaches a maximum value of about 4 × 10(21) m(-3). The balance of the charged particles is essentially controlled by the kinetics of the molecular ions. For temperatures above 1200 K, quasineutrality of the plasma is provided by the atomic ions, and below 1200 K the molecular ion density exceeds the atomic ion density and a contraction of the discharge is observed. Comparison with experimental data is presented which demonstrates good quantitative and qualitative agreement.",TRUE,count/measurement
R114008,Applied Physics,R137453,Integrated Microwave Atmospheric Plasma Source (IMAPlaS): thermal and spectroscopic properties and antimicrobial effect onB. atrophaeusspores,S543938,R137455,Excitation_frequency,L383043,2.45 GHz,"The Integrated Microwave Atmospheric Plasma Source (IMAPlaS) operating with a microwave resonator at 2.45 GHz driven by a solid-state transistor oscillator generates a core plasma of high temperature (T > 1000 K), therefore producing reactive species such as NO very effectively. The effluent of the plasma source is much colder, which enables direct treatment of thermolabile materials or even living tissue. In this study the source was operated with argon, helium and nitrogen with gas flow rates between 0.3 and 1.0 slm. Depending on working gas and distance, axial gas temperatures between 30 and 250 °C were determined in front of the nozzle. Reactive species were identified by emission spectroscopy in the spectral range from vacuum ultraviolet to near infrared. The irradiance in the ultraviolet range was also measured. Using B. atrophaeus spores to test antimicrobial efficiency, we determined log10-reduction rates of up to a factor of 4.",TRUE,count/measurement
R114008,Applied Physics,R137419,The influence of the geometry and electrical characteristics on the formation of the atmospheric pressure plasma jet,S543708,R137421,Excitation_frequency,L382859,30 kHz,"An extensive electrical study was performed on a coaxial geometry atmospheric pressure plasma jet source in helium, driven by 30 kHz sine voltage. Two modes of operation were observed, a highly reproducible low-power mode that features the emission of one plasma bullet per voltage period and an erratic high-power mode in which micro-discharges appear around the grounded electrode. The minimum of power transfer efficiency corresponds to the transition between the two modes. Effective capacitance was identified as a varying property influenced by the discharge and the dissipated power. The charge carried by plasma bullets was found to be a small fraction of charge produced in the source irrespective of input power and configuration of the grounded electrode. The biggest part of the produced charge stays localized in the plasma source and below the grounded electrode, in the range 1.2–3.3 nC for ground length of 3–8 mm.",TRUE,count/measurement
R133,Artificial Intelligence,R142143,A Computer-Aided Diagnosis System Using Artificial Intelligence for the Diagnosis and Characterization of Thyroid Nodules on Ultrasound: Initial Clinical Assessment,S571096,R142145,Specificity CAD-System ,L400847,74.60%,"BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (κ = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (κ = 0.239). 
CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.",TRUE,count/measurement
R133,Artificial Intelligence,R142143,A Computer-Aided Diagnosis System Using Artificial Intelligence for the Diagnosis and Characterization of Thyroid Nodules on Ultrasound: Initial Clinical Assessment,S571095,R142145,Sensitivity Radiologist,L400846,88.40%,"BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (κ = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (κ = 0.239). 
CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.",TRUE,count/measurement
R133,Artificial Intelligence,R142143,A Computer-Aided Diagnosis System Using Artificial Intelligence for the Diagnosis and Characterization of Thyroid Nodules on Ultrasound: Initial Clinical Assessment,S571094,R142145,Sensitivity CAD-System,L400845,90.70%,"BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (κ = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (κ = 0.239). 
CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.",TRUE,count/measurement
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329498,R69391,Data,R69397,93.71%,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,count/measurement
R133,Artificial Intelligence,R142143,A Computer-Aided Diagnosis System Using Artificial Intelligence for the Diagnosis and Characterization of Thyroid Nodules on Ultrasound: Initial Clinical Assessment,S571097,R142145,Specificity Radiologist,L400848,94.90%,"BACKGROUND An initial clinical assessment is described of a new, commercially available, computer-aided diagnosis (CAD) system using artificial intelligence (AI) for thyroid ultrasound, and its performance is evaluated in the diagnosis of malignant thyroid nodules and categorization of nodule characteristics. METHODS Patients with thyroid nodules with decisive diagnosis, whether benign or malignant, were consecutively enrolled from November 2015 to February 2016. An experienced radiologist reviewed the ultrasound image characteristics of the thyroid nodules, while another radiologist assessed the same thyroid nodules using the CAD system, providing ultrasound characteristics and a diagnosis of whether nodules were benign or malignant. The diagnostic performance and agreement of US characteristics between the experienced radiologist and the CAD system were compared. RESULTS In total, 102 thyroid nodules from 89 patients were included; 59 (57.8%) were benign and 43 (42.2%) were malignant. The CAD system showed a similar sensitivity as the experienced radiologist (90.7% vs. 88.4%, p > 0.99), but a lower specificity and a lower area under the receiver operating characteristic (AUROC) curve (specificity: 74.6% vs. 94.9%, p = 0.002; AUROC: 0.83 vs. 0.92, p = 0.021). Classifications of the ultrasound characteristics (composition, orientation, echogenicity, and spongiform) between radiologist and CAD system were in substantial agreement (κ = 0.659, 0.740, 0.733, and 0.658, respectively), while the margin showed a fair agreement (κ = 0.239). 
CONCLUSION The sensitivity of the CAD system using AI for malignant thyroid nodules was as good as that of the experienced radiologist, while specificity and accuracy were lower than those of the experienced radiologist. The CAD system showed an acceptable agreement with the experienced radiologist for characterization of thyroid nodules.",TRUE,count/measurement
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329495,R69391,Data,R69394,15000 sarcastic and 25000 non-sarcastic messages,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,count/measurement
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S348035,R75787,Subtask 1,R76013,27 submissions,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,count/measurement
R133,Artificial Intelligence,R141030,*SEM 2013 shared task: Semantic Textual Similarity,S581380,R145246,description,L406281,"CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets.","In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",TRUE,count/measurement
R133,Artificial Intelligence,R6649,ERSS 2005: Coreference-Based Summarization Reloaded,S8561,R6650,implementation,R6652,ERSS 2005,"We present ERSS 2005, our entry to this year’s DUC competition. With only slight modifications from last year’s version to accommodate the more complex context information present in DUC 2005, we achieved a similar performance to last year’s entry, ranking roughly in the upper third when examining the ROUGE-1 and Basic Element score. We also participated in the additional manual evaluation based on the new Pyramid method and performed further evaluations based on the Basic Elements method and the automatic generation of Pyramids. Interestingly, the ranking of our system differs greatly between the different measures; we attempt to analyse this effect based on correlations between the different results using the Spearman coefficient.",TRUE,count/measurement
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329540,R69419,Data,R69433,F1 score equal to 45.9%,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,count/measurement
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329516,R69391,Material,R69415,SemEval 2015 Task 11,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,count/measurement
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329497,R69391,Data,R69396,superior sarcasm-classification accuracy of 97.87% for the Twitter dataset,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,count/measurement
R14,Biochemistry,R109331,Potential inhibitors of coronavirus 3-chymotrypsin-like protease (3CLpro): an in silico screening of alkaloids and terpenoids from African medicinal plants,S498915,R109333,Bioactive compounds,R109339,"10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid","Abstract The novel coronavirus disease 2019 (COVID-19) caused by SARS-COV-2 has raised myriad of global concerns. There is currently no FDA approved antiviral strategy to alleviate the disease burden. The conserved 3-chymotrypsin-like protease (3CLpro), which controls coronavirus replication is a promising drug target for combating the coronavirus infection. This study screens some African plants derived alkaloids and terpenoids as potential inhibitors of coronavirus 3CLpro using in silico approach. Bioactive alkaloids (62) and terpenoids (100) of plants native to Africa were docked to the 3CLpro of the novel SARS-CoV-2. The top twenty alkaloids and terpenoids with high binding affinities to the SARS-CoV-2 3CLpro were further docked to the 3CLpro of SARS-CoV and MERS-CoV. The docking scores were compared with 3CLpro-referenced inhibitors (Lopinavir and Ritonavir). The top docked compounds were further subjected to ADEM/Tox and Lipinski filtering analyses for drug-likeness prediction analysis. This ligand-protein interaction study revealed that more than half of the top twenty alkaloids and terpenoids interacted favourably with the coronaviruses 3CLpro, and had binding affinities that surpassed that of lopinavir and ritonavir. Also, a highly defined hit-list of seven compounds (10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid) were identified. 
Furthermore, four non-toxic, druggable plant derived alkaloids (10-Hydroxyusambarensine, and Cryptoquindoline) and terpenoids (6-Oxoisoiguesterin and 22-Hydroxyhopan-3-one), that bind to the receptor-binding site and catalytic dyad of SARS-CoV-2 3CLpro were identified from the predictive ADME/tox and Lipinski filter analysis. However, further experimental analyses are required for developing these possible leads into natural anti-COVID-19 therapeutic agents for combating the pandemic. Communicated by Ramaswamy H. Sarma",TRUE,count/measurement
R104,Bioinformatics,R138756,Learning Spatial–Spectral–Temporal EEG Features With Recurrent 3D Convolutional Neural Networks for Cross-Task Mental Workload Assessment,S551438,R138761,Study cohort,L387949,20 subjects,"Mental workload assessment is essential for maintaining human health and preventing accidents. Most research on this issue is limited to a single task. However, cross-task assessment is indispensable for extending a pre-trained model to new workload conditions. Because brain dynamics are complex across different tasks, it is difficult to propose efficient human-designed features based on prior knowledge. Therefore, this paper proposes a concatenated structure of deep recurrent and 3D convolutional neural networks (R3DCNNs) to learn EEG features across different tasks without prior knowledge. First, this paper adds frequency and time dimensions to EEG topographic maps based on a Morlet wavelet transformation. Then, R3DCNN is proposed to simultaneously learn EEG features from the spatial, spectral, and temporal dimensions. The proposed model is validated based on the EEG signals collected from 20 subjects. This paper employs a binary classification of low and high mental workload across spatial n-back and arithmetic tasks. The results show that the R3DCNN achieves an average accuracy of 88.9%, which is a significant increase compared with that of the state-of-the-art methods. In addition, the visualization of the convolutional layers demonstrates that the deep neural network can extract detailed features. These results indicate that R3DCNN is capable of identifying the mental workload levels for cross-task conditions.",TRUE,count/measurement
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5641,R5119,Data,R5123,376 LOINC codes,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,count/measurement
R104,Bioinformatics,R138690,Deep learning based automatic diagnoses of attention deficit hyperactive disorder,S551139,R138692,Used models,R138693,3D CNN,"In this paper, we aim to develop a deep learning based automatic Attention Deficit Hyperactive Disorder (ADHD) diagnosis algorithm using resting state functional magnetic resonance imaging (rs-fMRI) scans. However, relative to millions of parameters in deep neural networks (DNN), the number of fMRI samples is still limited to learn discriminative features from the raw data. In light of this, we first encode our prior knowledge on 3D features voxel-wisely, including Regional Homogeneity (ReHo), fractional Amplitude of Low Frequency Fluctuations (fALFF) and Voxel-Mirrored Homotopic Connectivity (VMHC), and take these 3D images as the input to the DNN. Inspired by the way that radiologists examine brain images, we further investigate a novel 3D convolutional neural network (CNN) architecture to learn 3D local patterns which may boost the diagnosis accuracy. Investigation on the hold-out testing data of the ADHD-200 Global competition demonstrates that the proposed 3D CNN approach yields superior performances when compared to the reported classifiers in the literature, even with less training samples.",TRUE,count/measurement
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5643,R5119,Data,R5125,407 local terms,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,count/measurement
R104,Bioinformatics,R150537,LINNAEUS: A species name identification system for biomedical literature,S603593,R150539,Results,L417833,94% recall and 97% precision at the mention level,"Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.",TRUE,count/measurement
R104,Bioinformatics,R150537,LINNAEUS: A species name identification system for biomedical literature,S603594,R150539,Results,L417834,98% recall and 90% precision at the document level,"Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.",TRUE,count/measurement
R16,Biophysics,R74944,Differential Interaction of Antimicrobial Peptides with Lipid Structures Studied by Coarse-Grained Molecular Dynamics Simulations,S499465,R74946,Lead compound,L361439,Aurein 1.2,"In this work, we investigated the differential interaction of amphiphilic antimicrobial peptides with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid structures by means of extensive molecular dynamics simulations. By using a coarse-grained (CG) model within the MARTINI force field, we simulated the peptide–lipid system from three different initial configurations: (a) peptides in water in the presence of a pre-equilibrated lipid bilayer; (b) peptides inside the hydrophobic core of the membrane; and (c) random configurations that allow self-assembled molecular structures. This last approach allowed us to sample the structural space of the systems and consider cooperative effects. The peptides used in our simulations are aurein 1.2 and maculatin 1.1, two well-known antimicrobial peptides from the Australian tree frogs, and molecules that present different membrane-perturbing behaviors. Our results showed differential behaviors for each type of peptide seen in a different organization that could guide a molecular interpretation of the experimental data. While both peptides are capable of forming membrane aggregates, the aurein 1.2 ones have a pore-like structure and exhibit a higher level of organization than those conformed by maculatin 1.1. Furthermore, maculatin 1.1 has a strong tendency to form clusters and induce curvature at low peptide–lipid ratios. The exploration of the possible lipid–peptide structures, as the one carried out here, could be a good tool for recognizing specific configurations that should be further studied with more sophisticated methodologies.",TRUE,count/measurement
R16,Biophysics,R75519,"Direct Visualization of Membrane Leakage Induced by the Antibiotic Peptides: Maculatin, Citropin, and Aurein",S499460,R75521,Lead compound,L361434,Aurein 1.2,"Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes. Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. 
As observed for maculatin, the lytic action of Maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.",TRUE,count/measurement
R16,Biophysics,R75540,Differential Stability of Aurein 1.2 Pores in Model Membranes of Two Probiotic Strains,S499459,R75546,Lead compound,L361433,Aurein 1.2,"Aurein 1.2 is an antimicrobial peptide from the skin secretion of an Australian frog. In previous experimental work, we reported a differential action of aurein 1.2 on two probiotic strains Lactobacillus delbrueckii subsp. Bulgaricus (CIDCA331) and Lactobacillus delbrueckii subsp. Lactis (CIDCA133). The differences found were attributed to the bilayer compositions. Cell cultures and CIDCA331-derived liposomes showed higher susceptibility than the ones derived from the CIDCA133 strain, leading to content leakage and structural disruption. Here, we used Molecular Dynamics simulations to explore these systems at atomistic level. We hypothesize that if the antimicrobial peptides organized themselves to form a pore, it will be more stable in membranes that emulate the CIDCA331 strain than in those of the CIDCA133 strain. To test this hypothesis, we simulated pre-assembled aurein 1.2 pores embedded into bilayer models that emulate the two probiotic strains. It was found that the general behavior of the systems depends on the composition of the membrane rather than the pre-assemble system characteristics. Overall, it was observed that aurein 1.2 pores are more stable in the CIDCA331 model membranes. This fact coincides with the high susceptibility of this strain against antimicrobial peptide. In contrast, in the case of the CIDCA133 model membranes, peptides migrate to the water-lipid interphase, the pore shrinks and the transport of water through the pore is reduced. The tendency of glycolipids to make hydrogen bonds with peptides destabilize the pore structures. This feature is observed to a lesser extent in CIDCA 331 due to the presence of anionic lipids. Glycolipid transverse diffusion (flip-flop) between monolayers occurs in the pore surface region in all the cases considered. 
These findings expand our understanding of the antimicrobial peptide resistance properties of probiotic strains.",TRUE,count/measurement
R16,Biophysics,R75547,"Could Cardiolipin Protect Membranes against the Action of Certain Antimicrobial Peptides? Aurein 1.2, a Case Study",S499458,R75551,Lead compound,L361432,Aurein 1.2,"The activity of a host of antimicrobial peptides has been examined against a range of lipid bilayers mimicking bacterial and eukaryotic membranes. Despite this, the molecular mechanisms and the nature of the physicochemical properties underlying the peptide–lipid interactions that lead to membrane disruption are yet to be fully elucidated. In this study, the interaction of the short antimicrobial peptide aurein 1.2 was examined in the presence of an anionic cardiolipin-containing lipid bilayer using molecular dynamics simulations. Aurein 1.2 is known to interact strongly with anionic lipid membranes. In the simulations, the binding of aurein 1.2 was associated with buckling of the lipid bilayer, the degree of which varied with the peptide concentration. The simulations suggest that the intrinsic properties of cardiolipin, especially the fact that it promotes negative membrane curvature, may help protect membranes against the action of peptides such as aurein 1.2 by counteracting the tendency of the peptide to induce positive curvature in target membranes.",TRUE,count/measurement
R16,Biophysics,R75519,"Direct Visualization of Membrane Leakage Induced by the Antibiotic Peptides: Maculatin, Citropin, and Aurein",S499461,R75521,Lead compound,L361435,Citropin 1.1,"Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes. Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.",TRUE,count/measurement
R16,Biophysics,R74944,Differential Interaction of Antimicrobial Peptides with Lipid Structures Studied by Coarse-Grained Molecular Dynamics Simulations,S499466,R74946,Lead compound,L361440,Maculatin 1.1,"In this work, we investigated the differential interaction of amphiphilic antimicrobial peptides with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid structures by means of extensive molecular dynamics simulations. By using a coarse-grained (CG) model within the MARTINI force field, we simulated the peptide–lipid system from three different initial configurations: (a) peptides in water in the presence of a pre-equilibrated lipid bilayer; (b) peptides inside the hydrophobic core of the membrane; and (c) random configurations that allow self-assembled molecular structures. This last approach allowed us to sample the structural space of the systems and consider cooperative effects. The peptides used in our simulations are aurein 1.2 and maculatin 1.1, two well-known antimicrobial peptides from the Australian tree frogs, and molecules that present different membrane-perturbing behaviors. Our results showed differential behaviors for each type of peptide seen in a different organization that could guide a molecular interpretation of the experimental data. While both peptides are capable of forming membrane aggregates, the aurein 1.2 ones have a pore-like structure and exhibit a higher level of organization than those conformed by maculatin 1.1. Furthermore, maculatin 1.1 has a strong tendency to form clusters and induce curvature at low peptide–lipid ratios. The exploration of the possible lipid–peptide structures, as the one carried out here, could be a good tool for recognizing specific configurations that should be further studied with more sophisticated methodologies.",TRUE,count/measurement
R16,Biophysics,R75519,"Direct Visualization of Membrane Leakage Induced by the Antibiotic Peptides: Maculatin, Citropin, and Aurein",S499462,R75521,Lead compound,L361436,Maculatin 1.1,"Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes. Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.",TRUE,count/measurement
R122,Chemistry,R45098,Flash photolysis observation of the absorption spectra of trapped positive holes and electrons in colloidal titanium dioxide,S139223,R45099,transient absorption:trapped electrons,L85096,650 nm,"Laser flash photolysis at 347 nm of a TiO2 sol containing an adsorbed electron scavenger (Pt or MV2+). The trapped species were studied via their absorption spectra. At λmax = 475 nm, trapped holes h+ were observed. Decay rates of h+ were measured in acidic and alkaline solutions, with h+ in excess. With a TiO2 sol containing a hole scavenger (polyvinyl alcohol or thiocyanate), a spectrum at λmax = 650 nm was observed and attributed to excess trapped electrons close to the surface of the colloidal particles",TRUE,count/measurement
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5687,R5144,method,R5165,the priority program 1103,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,count/measurement
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502208,R110144,Data,R110146,"10,362 valid observations","Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,count/measurement
R277,Computational Engineering,R108235,Suspended accounts in retrospect: an analysis of twitter spam,S492970,R108237,dataset,L357309,1.8 billion tweets,"In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the wide-spread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.",TRUE,count/measurement
R277,Computational Engineering,R108235,Suspended accounts in retrospect: an analysis of twitter spam,S493615,R108237,Accuracy/Results,L357694,77% of spam accounts identified by Twitter are suspended within one day of their first tweet,"In this study, we examine the abuse of online social networks at the hands of spammers through the lens of the tools, techniques, and support infrastructure they rely upon. To perform our analysis, we identify over 1.1 million accounts suspended by Twitter for disruptive activities over the course of seven months. In the process, we collect a dataset of 1.8 billion tweets, 80 million of which belong to spam accounts. We use our dataset to characterize the behavior and lifetime of spam accounts, the campaigns they execute, and the wide-spread abuse of legitimate web services such as URL shorteners and free web hosting. We also identify an emerging marketplace of illegitimate programs operated by spammers that include Twitter account sellers, ad-based URL shorteners, and spam affiliate programs that help enable underground market diversification. Our results show that 77% of spam accounts identified by Twitter are suspended within one day of their first tweet. Because of these pressures, less than 9% of accounts form social relationships with regular Twitter users. Instead, 17% of accounts rely on hijacking trends, while 52% of accounts use unsolicited mentions to reach an audience. In spite of daily account attrition, we show how five spam campaigns controlling 145 thousand accounts combined are able to persist for months at a time, with each campaign enacting a unique spamming strategy. Surprisingly, three of these campaigns send spam directing visitors to reputable store fronts, blurring the line regarding what constitutes spam on social networks.",TRUE,count/measurement
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593934,R148133,number of papers,L412970,240 MEDLINE abstracts,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,count/measurement
R322,Computational Linguistics,R150967,Annotation of Chemical Named Entities,S605297,R150969,description,L418566,"annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases.","We describe the annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases. A corpus of fulltext chemistry papers was annotated, with an inter-annotator agreement F score of 93%. An investigation of named entity recognition using LingPipe suggests that F scores of 63% are possible without customisation, and scores of 74% are possible with the addition of custom tokenisation and the use of dictionaries.",TRUE,count/measurement
R322,Computational Linguistics,R150967,Annotation of Chemical Named Entities,S605168,R150969,inter-coder agreement,L418515,F score of 93%,"We describe the annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases. A corpus of fulltext chemistry papers was annotated, with an inter-annotator agreement F score of 93%. An investigation of named entity recognition using LingPipe suggests that F scores of 63% are possible without customisation, and scores of 74% are possible with the addition of custom tokenisation and the use of dictionaries.",TRUE,count/measurement
R322,Computational Linguistics,R155259,Leveraging Abstract Meaning Representation for Knowledge Base Question Answering,S648274,R155261,On evaluation dataset,R157537,LC-QuAD 1.0,"Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.",TRUE,count/measurement
R322,Computational Linguistics,R148549,Medmentions: a large biomedical corpus annotated with UMLS concepts,S595579,R148551,number of papers,L413897,"over 4,000","This paper presents the formal release of {\em MedMentions}, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.",TRUE,count/measurement
R417,Cultural History,R139800,A systematic review of literature on contested heritage,S558018,R139803,has sources,R139805,102 journal articles,"ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining contested heritage forward.",TRUE,count/measurement
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18841,R12295,Material,R12310,23 data sets,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,count/measurement
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18847,R12295,Data,R12316,4 times worse,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,count/measurement
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18840,R12295,Material,R12309,7 languages,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,count/measurement
R142,Earth Sciences,R140548,"ASTER Data Analyses for Lithological Discrimination of Sittampundi Anorthositic Complex, Southern India",S561774,R140550,reference,R140677,Apollo 14 lunar anorthosites spectra,"ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).",TRUE,count/measurement
R142,Earth Sciences,R143763,Development and utilization of urban spectral library for remote sensing of urban environment,S575750,R143765,Sensors,R143851,ASD FieldSpec 3 Spectroradiometer,Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using ASD FieldSpec 3 Spectroradiometer with wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectral data was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.,TRUE,count/measurement
R24,Ecology and Evolutionary Biology,R54224,Phenotypic plasticity of an invasive acacia versus two native Mediterranean species,S167537,R54225,Specific traits,L102087,20 physiological and morphological traits,"The phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings’ ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.",TRUE,count/measurement
R24,Ecology and Evolutionary Biology,R54130,"Morphological differentiation of introduced pikeperch (Sander lucioperca L., 1758) populations in Tunisian freshwaters",S166427,R54131,Specific traits,L101165,Nine meristic counts and 23 morphological measurements,"Summary In order to evaluate the phenotypic plasticity of introduced pikeperch populations in Tunisia, the intra- and interpopulation differentiation was analysed using a biometric approach. Thus, nine meristic counts and 23 morphological measurements were taken from 574 specimens collected from three dams and a hill lake. The univariate (anova) and multivariate analyses (PCA and DFA) showed a low meristic variability between the pikeperch samples and a segregated pikeperch group from the Sidi Salem dam which displayed a high distance between mouth and pectoral fin and a high antedorsal distance. In addition, the Korba hill lake population seemed to have more important values of total length, eye diameter, maximum body height and a higher distance between mouth and operculum than the other populations. However, the most accentuated segregation was found in the Lebna sample where the individuals were characterized by high snout length, body thickness, pectoral fin length, maximum body height and distance between mouth and operculum. This study shows the existence of morphological differentiations between populations derived from a single gene pool that have been isolated in separated sites for several decades although in relatively similar environments.",TRUE,count/measurement
R267,Energy Systems,R110083,Optimal Sizing and Scheduling of Hybrid Energy Systems: The Cases of Morona Santiago and the Galapagos Islands,S502129,R110088,Levelized cost of energy,L362998,0.36 $/kWh,"Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV–wind–battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV–diesel–battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.",TRUE,count/measurement
R54,Environmental Microbiology and Microbial Ecology,R78283,The Effect of Rhizosphere Soil and Root Tissues Amendment on Microbial Mineralisation of Target 14C–Hydrocarbons in Contaminated Soil,S354071,R78285,Has method,R78286,"In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C–hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre–incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. ","The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C–hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre–incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. 
There were significant increases (P < 0.001) in the extents of 14C–naphthalene and 14C–phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C–hexadecane or 14C–octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil–organic contaminants pre–exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon–degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.",TRUE,count/measurement
R54,Environmental Microbiology and Microbial Ecology,R78291,The Effect of Hydroxycinnamic Acids on the Microbial Mineralisation of Phenanthrene in Soil,S354089,R78293,Has method,R78295,"The rate and extent of 14C–phenanthrenemineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 µg kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). ","The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C–phenanthrenemineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 µg kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C–phenanthrene mineralisation (P 200 µg kg-1. Depending on its concentrationin soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical–microbe–organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH–contaminated soils.",TRUE,count/measurement
R54,Environmental Microbiology and Microbial Ecology,R78283,The Effect of Rhizosphere Soil and Root Tissues Amendment on Microbial Mineralisation of Target 14C–Hydrocarbons in Contaminated Soil,S354072,R78285,Result and Conclusion,R78287,"There were significant increases (P < 0.001) in the extents of 14C–naphthalene and 14C–phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C–hexadecane or 14C–octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil–organic contaminants pre–exposure and subsequent amendment with rhizosphere soil or root tissues.","The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C–hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre–incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. 
There were significant increases (P < 0.001) in the extents of 14C–naphthalene and 14C–phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C–hexadecane or 14C–octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil–organic contaminants pre–exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon–degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.",TRUE,count/measurement
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353928,R78226,Discussion,R78229,"Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. ","This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. 
The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,count/measurement
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353930,R78226,Has result,R78231,The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). ,"This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. 
Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,count/measurement
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353927,R78226,Has method,R78228,"The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. ","This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. 
The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,count/measurement
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713777,R186693,Has result,R186704,DWT > FT > SVD,"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,count/measurement
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713774,R186693,Result,R186704,DWT > FT > SVD,"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,count/measurement
R83,Food Processing,R111050,"Optimisation of raw tooke flour, vital gluten and water absorption in tooke/wheat composite bread, using response surface methodology (Part II)",S505742,R111052,optimal level of non wheat flour,L365106,0.56%,"The objective of this study was to optimise raw tooke flour-(RTF), vital gluten (VG) and water absorption (WA) with respect to bread-making quality and cost effectiveness of RTF/wheat composite flour. The hypothesis generated for this study was that optimal substitution of RTF and VG into wheat has no significant effect on baking quality of the resultant composite flour. A basic white wheat bread recipe was adopted and response surface methodology (RSM) procedures applied. A D-optimal design was employed with the following variables: RTF (x1) 0-33%, WA (x2) -2FWA to +2FWA and VG (x3) 0 - 3%. Seven responses were modelled. Baking worth number, volume yield and cost were simultaneously optimized using desirability function approach. Models developed adequately described the relationships and were confirmed by validation studies. RTF showed the greatest effect on all models, which effect impaired baking performance of composite flour. VG and Farinograph water absorption (FWA) as well as their interaction improved bread quality. Vitality of VG was enhanced by RTF. The optimal formulation for maximum baking quality was 0.56%(x1), 0.33%(x2) and -1.24(x3) while a formulation of 22%(x1), 3%(x2) and +1.13(x3) maximized RTF incorporation in the respective and composite bread quality at lowest cost. Thus, the set hypothesis was not rejected. Key words: Raw tooke flour, composite bread, baking quality, response surface methodology, Farinograph water absorption, vital gluten.",TRUE,count/measurement
R83,Food Processing,R111050,"Optimisation of raw tooke flour, vital gluten and water absorption in tooke/wheat composite bread, using response surface methodology (Part II)",S505736,R111052,Upper range of non wheat flour,L365104,33%,"The objective of this study was to optimise raw tooke flour-(RTF), vital gluten (VG) and water absorption (WA) with respect to bread-making quality and cost effectiveness of RTF/wheat composite flour. The hypothesis generated for this study was that optimal substitution of RTF and VG into wheat has no significant effect on baking quality of the resultant composite flour. A basic white wheat bread recipe was adopted and response surface methodology (RSM) procedures applied. A D-optimal design was employed with the following variables: RTF (x1) 0-33%, WA (x2) -2FWA to +2FWA and VG (x3) 0 - 3%. Seven responses were modelled. Baking worth number, volume yield and cost were simultaneously optimized using desirability function approach. Models developed adequately described the relationships and were confirmed by validation studies. RTF showed the greatest effect on all models, which effect impaired baking performance of composite flour. VG and Farinograph water absorption (FWA) as well as their interaction improved bread quality. Vitality of VG was enhanced by RTF. The optimal formulation for maximum baking quality was 0.56%(x1), 0.33%(x2) and -1.24(x3) while a formulation of 22%(x1), 3%(x2) and +1.13(x3) maximized RTF incorporation in the respective and composite bread quality at lowest cost. Thus, the set hypothesis was not rejected. Key words: Raw tooke flour, composite bread, baking quality, response surface methodology, Farinograph water absorption, vital gluten.",TRUE,count/measurement
R146,Geology,R78214,Textural and Heavy Minerals Characterization of Coastal Sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State – Nigeria,S353908,R78216,Has method,R78218,Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty–two samples were collected from the field and brought to the laboratory for textural and compositional analyses.,"Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty–two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70Ф to 2.83Ф, sorting values ranged from 0.39Ф – 0.60Ф, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. 
The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.",TRUE,count/measurement
R146,Geology,R78214,Textural and Heavy Minerals Characterization of Coastal Sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State – Nigeria,S353911,R78216,Has result,R78221,"The results showed that the value of graphic mean size ranged from 1.70Ф to 2.83Ф, sorting values ranged from 0.39Ф – 0.60Ф, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments.","Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty–two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70Ф to 2.83Ф, sorting values ranged from 0.39Ф – 0.60Ф, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. 
The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.",TRUE,count/measurement
R146,Geology,R78214,Textural and Heavy Minerals Characterization of Coastal Sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State – Nigeria,S353910,R78216,Discussion,R78220,"This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough.","Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). 
A total of twenty–two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70Ф to 2.83Ф, sorting values ranged from 0.39Ф – 0.60Ф, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.",TRUE,count/measurement
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704533,R182136,p-value,L475339,P < 0.001,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). 
Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,count/measurement
R93,Human and Clinical Nutrition,R78237,Comparative Assessment of Iodine Content of Commercial Table Salt Brands Available in Nigerian Market,S353954,R78239,Has result,R78245,"The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. ","Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. 
The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.",TRUE,count/measurement
R93,Human and Clinical Nutrition,R78237,Comparative Assessment of Iodine Content of Commercial Table Salt Brands Available in Nigerian Market,S353951,R78239,Discussion,R78242,"The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. ","Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. 
The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.",TRUE,count/measurement
R93,Human and Clinical Nutrition,R78237,Comparative Assessment of Iodine Content of Commercial Table Salt Brands Available in Nigerian Market,S353952,R78239,Conclusion,R78243,The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.,"Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. 
The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.",TRUE,count/measurement
R42,Immunology of Infectious Disease,R109805,Reduced monocytic human leukocyte antigen-DR expression indicates immunosuppression in critically ill COVID-19 patients,S503976,R109807,median mHLA-DR expression in ICU patients at admission to ICU,L364088,9280 antibodies/cell,"BACKGROUND: The cellular immune system is of pivotal importance with regard to the response to severe infections. Monocytes/macrophages are considered key immune cells in infections and downregulation of the surface expression of monocytic human leukocyte antigen-DR (mHLA-DR) within the major histocompatibility complex class II reflects a state of immunosuppression, also referred to as injury-associated immunosuppression. As the role of immunosuppression in coronavirus disease 2019 (COVID-19) is currently unclear, we seek to explore the level of mHLA-DR expression in COVID-19 patients. METHODS: In a preliminary prospective monocentric observational study, 16 COVID-19–positive patients (75% male, median age: 68 [interquartile range 59–75]) requiring hospitalization were included. The median Acute Physiology and Chronic Health Evaluation-II (APACHE-II) score in 9 intensive care unit (ICU) patients with acute respiratory failure was 30 (interquartile range 25–32). Standardized quantitative assessment of HLA-DR on monocytes (cluster of differentiation 14+ cells) was performed using calibrated flow cytometry at baseline (ICU/hospital admission) and at days 3 and 5 after ICU admission. Baseline data were compared to hospitalized noncritically ill COVID-19 patients. RESULTS: While normal mHLA-DR expression was observed in all hospitalized noncritically ill patients (n = 7), 89% (8 of 9) critically ill patients with COVID-19–induced acute respiratory failure showed signs of downregulation of mHLA-DR at ICU admission. 
mHLA-DR expression at admission was significantly lower in critically ill patients (median, [quartiles]: 9280 antibodies/cell [6114, 16,567]) as compared to the noncritically ill patients (30,900 antibodies/cell [26,777, 52,251]), with a median difference of 21,508 antibodies/cell (95% confidence interval [CI], 14,118–42,971), P = .002. Reduced mHLA-DR expression was observed to persist until day 5 after ICU admission. CONCLUSIONS: When compared to noncritically ill hospitalized COVID-19 patients, ICU patients with severe COVID-19 disease showed reduced mHLA-DR expression on circulating CD14+ monocytes at ICU admission, indicating a dysfunctional immune response. This immunosuppressive (monocytic) phenotype remained unchanged over the ensuing days after ICU admission. Strategies aiming for immunomodulation in this population of critically ill patients should be guided by an immune-monitoring program in an effort to determine who might benefit best from a given immunological intervention.",TRUE,count/measurement
R278,Information Science,R109205,Directing the development of constraint languages by checking constraints on rdf data,S498282,R109207,Number of RDF Datasets,L360649,"15,694 data sets","For research institutes, data libraries, and data archives, validating RDF data according to predefined constraints is a much sought-after feature, particularly as this is taken for granted in the XML world. Based on our work in two international working groups on RDF validation and jointly identified requirements to formulate constraints and validate RDF data, we have published 81 types of constraints that are required by various stakeholders for data applications. In this paper, we evaluate the usability of identified constraint types for assessing RDF data quality by (1) collecting and classifying 115 constraints on vocabularies commonly used in the social, behavioral, and economic sciences, either from the vocabularies themselves or from domain experts, and (2) validating 15,694 data sets (4.26 billion triples) of research data against these constraints. We classify each constraint according to (1) the severity of occurring violations and (2) based on which types of constraint languages are able to express its constraint type. Based on the large-scale evaluation, we formulate several findings to direct the further development of constraint languages.",TRUE,count/measurement
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327341,R68933,Data,R68941,21 billion triples with minimal publishing effort,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,count/measurement
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351846,R76425,Corpus statistics,L250551,400 million words,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,count/measurement
R278,Information Science,R151319,S2ORC: The Semantic Scholar Open Research Corpus,S617844,R151321,number of papers,L425852,81.1M,"We introduce S2ORC, a large corpus of 81.1M English-language academic papers spanning many academic disciplines. The corpus consists of rich metadata, paper abstracts, resolved bibliographic references, as well as structured full text for 8.1M open access papers. Full text is annotated with automatically-detected inline mentions of citations, figures, and tables, each linked to their corresponding paper objects. In S2ORC, we aggregate papers from hundreds of academic publishers and digital archives into a unified source, and create the largest publicly-available collection of machine-readable academic text to date. We hope this resource will facilitate research and development of tools and tasks for text mining over academic text.",TRUE,count/measurement
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351865,R76440,Corpus statistics,L250561,a total of 13 million words,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,count/measurement
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5713,R5171,Data,R5189,a total of 221.7 billion,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,count/measurement
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5712,R5171,Data,R5188,"a total of 668,166","Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,count/measurement
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351847,R76425,Corpus statistics,L250552,"more than 100,000 texts","The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,count/measurement
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5714,R5171,Data,R5190,more than 5 terabytes of information,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,count/measurement
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559105,R140092,has size,R140094,14 projects,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,count/measurement
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559112,R140092,Data,R140101,17 (+5 optional) items,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,count/measurement
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662967,R166456,Number of mentions,L448345,3756 software mentions,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,count/measurement
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559107,R140092,has duration,R140096,4-day,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,count/measurement
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559113,R140092,Data,R140102,60% (n=18),"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,count/measurement
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559053,R140061,has size,R140062,70 papers,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup, but such factors have not yet been fully investigated in the field of open data hackathons. This paper aims to suggest a model that incorporates the factors that affect the decision to establish a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur's decision to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and by understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate whether the contest has contributed to the decision to establish a startup, which of the factors affecting that decision apply to open data developers, and whether the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research into the entrepreneurship field, stating the need for more research on open data in entrepreneurship through hackathons.
",TRUE,count/measurement
R112125,Machine Learning,R144951,Accurate unlexicalized parsing,S580180,R144953,Has result,L405609,86.36% (LP/LR F1),"We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.",TRUE,count/measurement
R126,Materials Chemistry,R41144,Up-scalable and controllable electrolytic production of photo-responsive nanostructured silicon,S130583,R41145,temperature,L79358,900 °C,"The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 °C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between −0.60 and −0.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties.",TRUE,count/measurement
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505341,R110815,Paricle size of nanoparicles,L364891,179±22 nm,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. 
Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,count/measurement
R67,Medicinal Chemistry and Pharmaceutics,R142837,New Method for Delivering a Hydrophobic Drug for Photodynamic Therapy Using Pure Nanocrystal Form of the Drug,S574013,R142839,Uses drug,R142844,2-devinyl-2-(1-hexyloxyethyl)pyropheophorbide (HPPH),"A carrier-free method for delivery of a hydrophobic drug in its pure form, using nanocrystals (nanosized crystals), is proposed. To demonstrate this technique, nanocrystals of a hydrophobic photosensitizing anticancer drug, 2-devinyl-2-(1-hexyloxyethyl)pyropheophorbide (HPPH), have been synthesized using the reprecipitation method. The resulting drug nanocrystals were monodispersed and stable in aqueous dispersion, without the necessity of an additional stabilizer (surfactant). As shown by confocal microscopy, these pure drug nanocrystals were taken up by the cancer cells with high avidity. Though the fluorescence and photodynamic activity of the drug were substantially quenched in the form of nanocrystals in aqueous suspension, both these characteristics were recovered under in vitro and in vivo conditions. This recovery of drug activity and fluorescence is possibly due to the interaction of nanocrystals with serum albumin, resulting in conversion of the drug nanocrystals into the molecular form. This was confirmed by demonstrating similar recovery in presence of fetal bovine serum (FBS) or bovine serum albumin (BSA). Under similar treatment conditions, the HPPH in nanocrystal form or in 1% Tween-80/water formulation showed comparable in vitro and in vivo efficacy.",TRUE,count/measurement
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505350,R110815,Drug loading,L364898,6.2±0.1%,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. 
Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,count/measurement
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505346,R110815,Drug entrapment efficiency,L364895,73±0.9%,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. 
Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,count/measurement
R67,Medicinal Chemistry and Pharmaceutics,R160810,Influence of Process and Formulation Parameters on Dissolution and Stability Characteristics of Kollidon® VA 64 Hot-Melt Extrudates,S641759,R160812,Carrier for hot melt extrusion,R160785,Kollidon® VA 64,"The objective of the present study was to investigate the effects of processing variables and formulation factors on the characteristics of hot-melt extrudates containing a copolymer (Kollidon® VA 64). Nifedipine was used as a model drug in all of the extrudates. Differential scanning calorimetry (DSC) was utilized on the physical mixtures and melts of varying drug–polymer concentrations to study their miscibility. The drug–polymer binary mixtures were studied for powder flow, drug release, and physical and chemical stabilities. The effects of moisture absorption on the content uniformity of the extrudates were also studied. Processing the materials at lower barrel temperatures (115–135°C) and higher screw speeds (50–100 rpm) exhibited higher post-processing drug content (~99–100%). DSC and X-ray diffraction studies confirmed that melt extrusion of drug–polymer mixtures led to the formation of solid dispersions. Interestingly, the extrusion process also enhanced the powder flow characteristics, which occurred irrespective of the drug load (up to 40% w/w). Moreover, the content uniformity of the extrudates, unlike the physical mixtures, was not sensitive to the amount of moisture absorbed. The extrusion conditions did not influence drug release from the extrudates; however, release was greatly affected by the drug loading. Additionally, the drug release from the physical mixture of nifedipine–Kollidon® VA 64 was significantly different when compared to the corresponding extrudates (f2 = 36.70). The extrudates exhibited both physical and chemical stabilities throughout the period of study. 
Overall, hot-melt extrusion technology in combination with Kollidon® VA 64 produced extrudates capable of higher drug loading, with enhanced flow characteristics, and excellent stability.",TRUE,count/measurement
R55,Microbial Physiology,R49446,The Impact of Pyroglutamate: Sulfolobus acidocaldarius Has a Growth Advantage over Saccharolobus solfataricus in Glutamate-Containing Media,S147545,R49467,Protein Target,L90714,5-oxoprolinase,"Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.",TRUE,count/measurement
R145261,Natural Language Processing,R163011,Results of the WNUT16 Named Entity Recognition Shared Task,S650040,R163013,Total teams,L443023,10 teams,"This paper presents the results of the Twitter Named Entity Recognition shared task associated with W-NUT 2016: a named entity tagging task with 10 teams participating. We outline the shared task, annotation process and dataset statistics, and provide a high-level overview of the participating systems for each shared task.",TRUE,count/measurement
R145261,Natural Language Processing,R147106,"SQuAD: 100,000+ Questions for Machine Comprehension of Text",S589218,R147108,Amount of Questions,L410092,"100,000+","We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL",TRUE,count/measurement
R145261,Natural Language Processing,R162349,BioCreAtIvE Task 1A: gene mention finding evaluation,S647535,R162350,Total teams,L441768,15 teams,"Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. ""Finding mentions"" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.",TRUE,count/measurement
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S648215,R162476,number of papers,L442219,1500 PubMed articles,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems based on machine learning for their submissions. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,count/measurement
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S648636,R162563,Total teams,L442511,17 teams,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,count/measurement
R145261,Natural Language Processing,R162568,Overview of the BioCreative VII LitCovid Track: multi-label topic classification for COVID-19 literature annotation,S648661,R162570,Total teams,L442527,19 teams,"The BioCreative LitCovid track calls for a community effort to tackle automated topic annotation for COVID-19 literature. The number of COVID-19-related articles in the literature is growing by about 10,000 articles per month, significantly challenging curation efforts and downstream interpretation. LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 180,000 articles with millions of accesses each month by users worldwide. The rapid literature growth significantly increases the burden of LitCovid curation, especially for topic annotations. Topic annotation in LitCovid assigns one or more (up to eight) labels to articles. The annotated topics have been widely used both directly in LitCovid (e.g., accounting for ~20% of total uses) and downstream studies such as knowledge network generation and citation analysis. It is, therefore, important to develop innovative text mining methods to tackle the challenge. We organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. This article summarizes the BioCreative LitCovid track in terms of data collection and team participation. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/. It consists of over 30K PubMed articles, one of the largest multi-label classification datasets on biomedical literature. There were 80 submissions in total from 19 teams worldwide. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro F1-score, micro F1-score, and instance-based F1-score, respectively. We look forward to further participation in developing biomedical text mining methods in response to the rapid growth of the COVID-19 literature. 
Keywords—biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; multi-label classification; COVID-19; LitCovid;",TRUE,count/measurement
R145261,Natural Language Processing,R162342,Overview of BioCreAtIvE: critical assessment of information extraction for biology,S647522,R162344,Total teams,L441759,27 groups from 10 countries,"Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28–31, 2004. The articles collected in this BMC Bioinformatics supplement entitled ""A critical assessment of text mining methods in molecular biology"" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. 
There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.",TRUE,count/measurement
R145261,Natural Language Processing,R162391,Overview of the chemical compound and drug name recognition (CHEMDNER) task,S647856,R162393,Total teams,L441997,27 teams,"There is an increasing need to facilitate automated access to information relevant for chemical compounds and drugs described in text, including scientific articles, patents or health agency reports. A number of recent efforts have implemented natural language processing (NLP) and text mining technologies for the chemical domain (ChemNLP or chemical text mining). Due to the lack of manually labeled Gold Standard datasets together with comprehensive annotation guidelines, both the implementation as well as the comparative assessment of ChemNLP technologies are opaque. Two key components for most chemical text mining technologies are the indexing of documents with chemicals (chemical document indexing, CDI) and finding the mentions of chemicals in text (chemical entity mention recognition, CEM). These two tasks formed part of the chemical compound and drug named entity recognition (CHEMDNER) task introduced at the fourth BioCreative challenge, a community effort to evaluate biomedical text mining applications. For this task, the CHEMDNER text corpus was constructed, consisting of 10,000 abstracts containing a total of 84,355 mentions of chemical compounds and drugs that have been manually labeled by domain experts following specific annotation guidelines. This corpus covers representative abstracts from major chemistry-related sub-disciplines such as medicinal chemistry, biochemistry, organic chemistry and toxicology. A total of 27 teams – 23 academic and 4 commercial groups, comprised of 87 researchers – submitted results for this task. Of these teams, 26 provided submissions for the CEM subtask and 23 for the CDI subtask. 
Teams were provided with the manual annotations of 7,000 abstracts to implement and train their systems and then had to return predictions for the 3,000 test set abstracts during a short period of time. When comparing exact matches of the automated results against the manually labeled Gold Standard annotations, the best teams reached an F-score of 87.39% in the CEM task and of 88.20% in the CDI task. This can be regarded as a very competitive result when compared to the expected upper boundary, the agreement between two human annotators, at 91%. In general, the technologies used to detect chemicals and drugs by the teams included machine learning methods (particularly CRFs using a considerable range of different features), interaction of chemistry-related lexical resources and manual rules (e.g., to cover abbreviations, chemical formula or chemical identifiers). By promoting the availability of the software of the participating systems as well as through the release of the CHEMDNER corpus to enable implementation of new tools, this work fosters the development of text mining applications like the automatic extraction of biochemical reactions, toxicological properties of compounds, or the detection of associations between genes or mutations and drugs in the context of pharmacogenomics.",TRUE,count/measurement
R145261,Natural Language Processing,R163190,Cross-lingual Name Tagging and Linking for 282 Languages,S650709,R163192,Supported natural languages,L443324,282 languages,"The ambitious goal of this work is to develop a cross-lingual name tagging and linking framework for 282 languages that exist in Wikipedia. Given a document in any of these languages, our framework is able to identify name mentions, assign a coarse-grained or fine-grained type to each mention, and link it to an English Knowledge Base (KB) if it is linkable. We achieve this goal by performing a series of new KB mining methods: generating “silver-standard” annotations by transferring annotations from English to other languages through cross-lingual links and KB properties, refining annotations through self-training and topic selection, deriving language-specific morphology features from anchor links, and mining word translation pairs from cross-lingual links. Both name tagging and linking results for 282 languages are promising on Wikipedia data and non-Wikipedia data.",TRUE,count/measurement
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S648605,R162559,Total teams,L442487,30 teams submitted results for the DrugProt main track,"Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. 
INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT web interface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein"" (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical – biology information. 
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”,...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”) and others, partially overlapping between them (e.g. “Binder” and “Ligand”), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from the point of view of biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test set. We also included a background and large scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su",TRUE,count/measurement
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S593526,R69292,number of papers,L412749,"300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006","This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,count/measurement
R145261,Natural Language Processing,R162526,Overview of the BioCreative VI text-mining services for Kinome Curation Track,S648502,R162528,Number of concepts,L442411,300 human protein kinases,"Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants’ systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.",TRUE,count/measurement
R145261,Natural Language Processing,R162540,The extraction of complex relationships and their conversion to biological expression language (BEL) overview of the BioCreative VI (2017) BEL track,S648543,R162542,Best score,L442442,32% F-score for the extraction of complete BEL statements,"Abstract Knowledge of the molecular interactions of biological and chemical entities and their involvement in biological processes or clinical phenotypes is important for data interpretation. Unfortunately, this knowledge is mostly embedded in the literature in such a way that it is unavailable for automated data analysis procedures. Biological expression language (BEL) is a syntax representation allowing for the structured representation of a broad range of biological relationships. It is used in various situations to extract such knowledge and transform it into BEL networks. To support the tedious and time-intensive extraction work of curators with automated methods, we developed the BEL track within the framework of BioCreative Challenges. Within the BEL track, we provide training data and an evaluation environment to encourage the text mining community to tackle the automatic extraction of complex BEL relationships. In 2017 BioCreative VI, the 2015 BEL track was repeated with new test data. Although only minor improvements in text snippet retrieval for given statements were achieved during this second BEL task iteration, a significant increase of BEL statement extraction performance from provided sentences could be seen. The best performing system reached a 32% F-score for the extraction of complete BEL statements and with the given named entities this increased to 49%. This time, besides rule-based systems, new methods involving hierarchical sequence labeling and neural networks were applied for BEL statement extraction.",TRUE,count/measurement
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S648206,R162476,Total teams,L442214,34 teams,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,count/measurement
R145261,Natural Language Processing,R162489,Overview of the interactive task in BioCreative V,S648352,R162491,Total teams,L442330,43 biocurators,"Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org",TRUE,count/measurement
R145261,Natural Language Processing,R164240,Towards Exhaustive Protein Modification Event Extraction,S655717,R164242,Number of mentions,L445309,4500 mentions of proteins,"Protein modifications, in particular post-translational modifications, have a central role in bringing about the full repertoire of protein functions, and the identification of specific protein modifications is important for understanding biological systems. This task presents a number of opportunities for the automatic support of manual curation efforts. However, the sheer number of different types of protein modifications is a daunting challenge for automatic extraction that has so far not been met in full, with most studies focusing on single modifications or a few prominent ones. In this work, we aim to meet this challenge: we analyse protein modification types through ontologies, databases, and literature and introduce a corpus of 360 abstracts manually annotated in the BioNLP Shared Task event representation for over 4500 mentions of proteins and 1000 statements of modification events of nearly 40 different types. We argue that together with existing resources, this corpus provides sufficient coverage of modification types to make effectively exhaustive extraction of protein modifications from text feasible.",TRUE,count/measurement
R145261,Natural Language Processing,R171842,BC4GO: a full-text corpus for the BioCreative IV GO task,S686210,R171844,inter-annotator agreement,L462439,47% (strict) and 62.9% (hierarchical),"Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain ∼10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. 
Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.",TRUE,count/measurement
R145261,Natural Language Processing,R162973,Shared Tasks of the 2015 Workshop on Noisy User-generated Text: Twitter Lexical Normalization and Named Entity Recognition,S649949,R162975,Total teams,L442983,8 participants,"This paper presents the results of the two shared tasks associated with W-NUT 2015: (1) a text normalization task with 10 participants; and (2) a named entity tagging task with 8 participants. We outline the task, annotation process and dataset statistics, and provide a high-level overview of the participating systems for each shared task.",TRUE,count/measurement
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S648606,R162559,Total teams,L442488,9 teams submitted results for the large-scale text mining subtrack,"Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. 
INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and to make them more complete. The manual annotation task basically consisted of manually marking the interactions through a customized BRAT web interface, given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein” (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical–biology information. 
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”,...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”) and others, partially overlapping between them (e.g. “Binder” and “Ligand”), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were a strong source of inspiration. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF and PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su",TRUE,count/measurement
R145261,Natural Language Processing,R171842,BC4GO: a full-text corpus for the BioCreative IV GO task,S686209,R171845,inter-annotator agreement,L462438,9.3% (strict) and 42.7% (relaxed) in F1-measures,"Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with that of humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain ∼10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need for using full-text articles for text mining GO annotations. 
Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.",TRUE,count/measurement
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655531,R164172,Results,L445236,a high precision of 85.2%,"Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is still preliminary, focusing only on specific types of anaphors such as pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large-scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,count/measurement
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655532,R164172,Results,L445237,a reasonable recall of 65.3%,"Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is still preliminary, focusing only on specific types of anaphors such as pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large-scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,count/measurement
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S585925,R69292,Dataset name,R146353,ACL RD-TEC 2.0,"This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,count/measurement
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655533,R164172,Results,L445238,an F-measure of 73.9%,"Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is still preliminary, focusing only on specific types of anaphors such as pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large-scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,count/measurement
R145261,Natural Language Processing,R172664,End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF,S689026,R172666,Material,R172669,CoNLL 2003 corpus for named entity recognition (NER),"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both datasets --- 97.55\% accuracy for POS tagging and 91.21\% F1 for NER.",TRUE,count/measurement
R145261,Natural Language Processing,R172672,Named Entity Recognition with Bidirectional LSTM-CNNs,S689041,R172674,Material,R172676,OntoNotes 5.0 dataset,"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",TRUE,count/measurement
R145261,Natural Language Processing,R145798,End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures,S629273,R145800,Has experimental datasets,R116687,SemEval-2010 Task 8,"We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.",TRUE,count/measurement
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S648644,R162563,description,L442516,"the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document.","The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. 
For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,count/measurement
R112130,Networking and Internet Architecture,R186234,PredictRoute: A Network Path Prediction Toolkit,S711965,R186236,Result,R186237,"PredictRoute's AS-path predictions differ from the measured path by at most 1 hop, 75% of the time. ","Accurate prediction of network paths between arbitrary hosts on the Internet is of vital importance for network operators, cloud providers, and academic researchers. We present PredictRoute, a system that predicts network paths between hosts on the Internet using historical knowledge of the data and control plane. In addition to feeding on freely available traceroutes and BGP routing tables, PredictRoute optimally explores network paths towards chosen BGP prefixes. PredictRoute's strategy for exploring network paths discovers 4X more autonomous system (AS) hops than other well-known strategies used in practice today. Using a corpus of traceroutes, PredictRoute trains probabilistic models of routing towards prefixes on the Internet to predict network paths and their likelihood. PredictRoute's AS-path predictions differ from the measured path by at most 1 hop, 75% of the time. We expose PredictRoute's path prediction capability via a REST API to facilitate its inclusion in other applications and studies. We additionally demonstrate the utility of PredictRoute in improving real-world applications for circumventing Internet censorship and preserving anonymity online.",TRUE,count/measurement
R172,Oceanography,R109392,First direct measurements of N2 fixation during a Trichodesmium bloom in the eastern Arabian Sea,S499207,R109393,Method (primary production),L361262,13C tracer,"We report the first direct estimates of N2 fixation rates measured during the spring of 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from ∼0.1 to 34 mmol N m−2d−1 after correcting for the isotopic under‐equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low δ15N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget.",TRUE,count/measurement
R172,Oceanography,R138486,First direct measurements of N2 fixation during a Trichodesmium bloom in the eastern Arabian Sea: N2 FIXATION IN THE ARABIAN SEA,S549767,R138488,Method (primary production),L386818,13C tracer,"We report the first direct estimates of N2 fixation rates measured during the spring of 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from ∼0.1 to 34 mmol N m−2d−1 after correcting for the isotopic under‐equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low δ15N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget.",TRUE,count/measurement
R172,Oceanography,R141328,High new production in the Bay of Bengal: Possible causes and implications,S565206,R141329,Method,R141330,15N tracer,"We report the first measurements of new production (15N tracer technique), the component of primary production that is sustained by extraneous nutrient inputs to the euphotic zone, in the Bay of Bengal. Experiments done in two different seasons consistently show high new production (averaging around 4 mmol N m−2 d−1 during post monsoon and 5.4 mmol N m−2 d−1 during pre monsoon), validating the earlier conjecture of high new production, based on pCO2 measurements, in the Bay. Averaged over annual time scales, higher new production could cause a higher rate of removal of organic carbon. This could also be one of the reasons for comparable organic carbon fluxes observed in the sediment traps of the Bay of Bengal and the eastern Arabian Sea. Thus, oceanic regions like the Bay of Bengal may play a more significant role in removing the excess CO2 from the atmosphere than hitherto believed.",TRUE,count/measurement
R129,Organic Chemistry,R137073,"Selective, Nickel-Catalyzed Hydrogenolysis of Aryl Ethers",S541569,R137075,Additive,L381382,1 bar of hydrogen,"A catalyst that cleaves aryl-oxygen bonds but not carbon-carbon bonds may help improve lignin processing. Selective hydrogenolysis of the aromatic carbon-oxygen (C-O) bonds in aryl ethers is an unsolved synthetic problem important for the generation of fuels and chemical feedstocks from biomass and for the liquefaction of coal. Currently, the hydrogenolysis of aromatic C-O bonds requires heterogeneous catalysts that operate at high temperature and pressure and lead to a mixture of products from competing hydrogenolysis of aliphatic C-O bonds and hydrogenation of the arene. Here, we report hydrogenolyses of aromatic C-O bonds in alkyl aryl and diaryl ethers that form exclusively arenes and alcohols. This process is catalyzed by a soluble nickel carbene complex under just 1 bar of hydrogen at temperatures of 80 to 120°C; the relative reactivity of ether substrates scale as Ar-OAr>>Ar-OMe>ArCH2-OMe (Ar, Aryl; Me, Methyl). Hydrogenolysis of lignin model compounds highlights the potential of this approach for the conversion of refractory aryl ether biopolymers to hydrocarbons.",TRUE,count/measurement
R129,Organic Chemistry,R138577,Solvent-Free Chelation-Assisted Catalytic C-C Bond Cleavage of Unstrained Ketone by Rhodium(I) Complexes under Microwave Irradiation,S550510,R138579,Additive,L387411,2-amino-3-picoline,A highly efficient C-C bond cleavage of unstrained aliphatic ketones bearing β-hydrogens with olefins was achieved using a chelation-assisted catalytic system consisting of (Ph 3 P) 3 RhCl and 2-amino-3-picoline by microwave irradiation under solvent-free conditions. The addition of cyclohexylamine catalyst accelerated the reaction rate dramatically under microwave irradiation compared with the classical heating method.,TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499904,R109550,Data,R109563,/L at 4th week,"ABSTRACT OBJECTIVE: To assess and compare the anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats. METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as a single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week. RESULTS: There was a significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively. Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in the pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to the control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while the difference between control and gemfibrozil was not statistically significant. CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level, while gemfibrozil is not effective. KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499901,R109550,Data,R109560,10mg/kg body weight,"ABSTRACT OBJECTIVE: To assess and compare the anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats. METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as a single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week. RESULTS: There was a significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively. Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in the pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to the control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while the difference between control and gemfibrozil was not statistically significant. CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level, while gemfibrozil is not effective. KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499893,R109550,Material,R109552,"27, adult healthy male Sprague Dawley rats","ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499907,R109550,Data,R109566,4.28±0.39mg/L.,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499906,R109550,Data,R109565,4.42±0.30mg/L,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499895,R109550,Material,R109554,coconut oil 8.0% and sodium cholate,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499903,R109550,Data,R109562,"mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg","ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499905,R109550,Data,R109564,mean±SD of 2.93±0.33mg/L,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499900,R109550,Data,R109559,pioglitazone 10mg/kg body weight,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499902,R109550,Data,R109561,"zero, 4th and 8th week","ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,count/measurement
R138056,Planetary Sciences,R138505,"Far infrared and Raman spectroscopic investigations of lunar materials from Apollo 11, 12, 14, and 15",S549951,R138506,Samples source,L386972,Apollo 11,"We have studied the elastic and inelastic light scattering of twelve lunar surface rocks and eleven lunar soil samples from Apollo 11, 12, 14, and 15, over the range 20-2000 cm-1. The phonons occurring in this frequency region have been associated with the different chemical constituents and are used to determine the mineralogical abundances by comparison with the spectra of a wide variety of terrestrial minerals and rocks. Kramers-Kronig analyses of the infrared reflectance spectra provided the dielectric dispersion (ε′ and ε″) and the optical constants (n and k). The dielectric constants at ≈10^11 Hz have been obtained for each sample and are compared with the values reported in the 10^10 Hz range. The emissivity peak at the Christianson frequencies for all the lunar samples lie within the range 1195-1250 cm-1; such values are characteristic of terrestrial basalts. The Raman light scattering spectra provided investigation of small individual grains or inclusions and gave unambiguous interpretation of some of the characteristic mineralogical components.",TRUE,count/measurement
R138056,Planetary Sciences,R138520,Silica polymorphs in lunar granite: Implications for granite petrogenesis on the Moon,S550171,R138522,Samples source,L387176,Apollo 12,"Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.",TRUE,count/measurement
R245,Power and Energy,R137145,Application of Interval Type-2 Fuzzy Logic System in Short Term Load Forecasting on Special Days,S542008,R137146,MAPE,L381691,1.03%,"This paper presents the application of Interval Type-2 fuzzy logic systems (Interval Type-2 FLS) in short term load forecasting (STLF) on special days, study case in Bali Indonesia. Type-2 FLS is characterized by a concept called footprint of uncertainty (FOU) that provides the extra mathematical dimension that equips Type-2 FLS with the potential to outperform their Type-1 counterparts. While a Type-2 FLS has the capability to model more complex relationships, the output of a Type-2 fuzzy inference engine needs to be type-reduced. Type reduction is used by applying the Karnik-Mendel (KM) iterative algorithm. This type reduction maps the output of Type-2 FSs into Type-1 FSs then the defuzzification with centroid method converts that Type-1 reduced FSs into a number. The proposed method was tested with the actual load data of special days using 4 days peak load before special days and at the time of special day for the year 2002-2006. There are 20 items of special days in Bali that are used to be forecasted in the year 2005 and 2006 respectively. The test results showed an accurate forecasting with the mean average percentage error of 1.0335% and 1.5683% in the year 2005 and 2006 respectively.",TRUE,count/measurement
R11,Science,R34124,Can CSF predict the course of optic neuritis?,S118446,R34125,Follow-up time/ total observation time after MON n M/F,L71496,1 year,"To discuss the implications of CSF abnormalities for the course of acute monosymptomatic optic neuritis (AMON), various CSF markers were analysed in patients being randomly selected from a population-based cohort. Paired serum and CSF were obtained within a few weeks from onset of AMON. CSF-restricted oligoclonal IgG bands, free kappa and free lambda chain bands were observed in 17, 15, and nine of 27 examined patients, respectively. Sixteen patients showed a polyspecific intrathecal synthesis of oligoclonal IgG antibodies against one or more viruses. At 1 year follow-up five patients had developed clinically definite multiple sclerosis (CDMS); all had CSF oligoclonal IgG bands and virus-specific oligoclonal IgG antibodies at onset. Due to the relative small number studied at the short-term follow-up, no firm conclusion of the prognostic value of these analyses could be reached. CSF Myelin Basic Protein-like material was increased in only two of 29 patients with AMON, but may have potential value in reflecting disease activity, as the highest values were obtained among patients with CSF sampled soon after the worst visual acuity was reached, and among patients with severe visual impairment. In most previous studies of patients with AMON qualitative and quantitative analyses of CSF IgG had a predictive value for development of CDMS, but the results are conflicting.",TRUE,count/measurement
R11,Science,R26095,Field study on occupant comfort and the office thermal environment in rooms with displacement ventilation,S81220,R26096,Place of experiment,L51478,10 office buildings with displacement ventilation,"UNLABELLED A field survey of occupants' response to the indoor environment in 10 office buildings with displacement ventilation was performed. The response of 227 occupants was analyzed. About 24% of the occupants in the survey complained that they were daily bothered by draught, mainly at the lower leg. Vertical air temperature difference measured between head and feet levels was less than 3 degrees C at all workplaces visited. Combined local discomfort because of draught and vertical temperature difference does not seem to be a serious problem in rooms with displacement ventilation. Almost one half (49%) of the occupants reported that they were daily bothered by an uncomfortable room temperature. Forty-eight per cent of the occupants were not satisfied with the air quality. PRACTICAL IMPLICATIONS The PMV and the Draught Rating indices as well as the specifications for local discomfort because of the separate impact of draught and vertical temperature difference, as defined in the present standards, are relevant for the design of a thermal environment in rooms with displacement ventilation and for its assessment in practice. Increasing the supply air temperature in order to counteract draught discomfort is a measure that should be considered carefully; even if the desired stratification of pollution in the occupied zone is preserved, an increase of the inhaled air temperature may have a negative effect on perceived air quality.",TRUE,count/measurement
R11,Science,R25868,Selective Hydrogenation of Polyunsaturated Fatty Acids Using Alkanethiol Self-Assembled Monolayer-Coated Pd/Al2O3 Catalysts,S79319,R25869,substrate,L49907,18-carbon polyunsaturated fatty acids,"Pd/Al2O3 catalysts coated with various thiolate self-assembled monolayers (SAMs) were used to direct the partial hydrogenation of 18-carbon polyunsaturated fatty acids, yielding a product stream enriched in monounsaturated fatty acids (with low saturated fatty acid content), a favorable result for increasing the oxidative stability of biodiesel. The uncoated Pd/Al2O3 catalyst quickly saturated all fatty acid reactants under hydrogenation conditions, but the addition of alkanethiol SAMs markedly increased the reaction selectivity to the monounsaturated product oleic acid to a level of 80–90%, even at conversions >70%. This effect, which is attributed to steric effects between the SAMs and reactants, was consistent with the relative consumption rates of linoleic and oleic acid using alkanethiol-coated and uncoated Pd/Al2O3 catalysts. With an uncoated Pd/Al2O3 catalyst, each fatty acid, regardless of its degree of saturation had a reaction rate of ∼0.2 mol reactant consumed per mole of surface palladium per ...",TRUE,count/measurement
R11,Science,R25892,Palladium–gold single atom alloy catalysts for liquid phase selective hydrogenation of 1-hexyne,S79525,R25893,substrate,L50077,1-hexyne,"Silica supported and unsupported PdAu single atom alloys (SAAs) were investigated for the selective hydrogenation of 1-hexyne to hexenes under mild conditions.",TRUE,count/measurement
R11,Science,R25898,Merging Single-Atom-Dispersed Silver and Carbon Nitride to a Joint Electronic System via Copolymerization with Silver Tricyanomethanide,S79584,R25899,substrate,L50127,1-hexyne,"Herein, we present an approach to create a hybrid between single-atom-dispersed silver and a carbon nitride polymer. Silver tricyanomethanide (AgTCM) is used as a reactive comonomer during templated carbon nitride synthesis to introduce both negative charges and silver atoms/ions to the system. The successful introduction of the extra electron density under the formation of a delocalized joint electronic system is proven by photoluminescence measurements, X-ray photoelectron spectroscopy investigations, and measurements of surface ζ-potential. At the same time, the principal structure of the carbon nitride network is not disturbed, as shown by solid-state nuclear magnetic resonance spectroscopy and electrochemical impedance spectroscopy analysis. The synthesis also results in an improvement of the visible light absorption and the development of higher surface area in the final products. The atom-dispersed AgTCM-doped carbon nitride shows an enhanced performance in the selective hydrogenation of alkynes in comparison with the performance of other conventional Ag-based materials prepared by spray deposition and impregnation-reduction methods, here exemplified with 1-hexyne.",TRUE,count/measurement
R11,Science,R32990,Chromosomal abnormalities in Philadelphia chromosome negative metaphases appearing during imatinib mesylate therapy in patients with newly diagnosed chronic myeloid leukemia in chronic phase,S114252,R32991,N (%) with abnormal karyotype,L69002,21 (9%),"The development of chromosomal abnormalities (CAs) in the Philadelphia chromosome (Ph)-negative metaphases during imatinib (IM) therapy in patients with newly diagnosed chronic myeloid leukemia (CML) has been reported only anecdotally. We assessed the frequency and significance of this phenomenon among 258 patients with newly diagnosed CML in chronic phase receiving IM. After a median follow-up of 37 months, 21 (9%) patients developed 23 CAs in Ph-negative cells; excluding -Y, this incidence was 5%. Sixteen (70%) of all CAs were observed in 2 or more metaphases. The median time from start of IM to the appearance of CAs was 18 months. The most common CAs were -Y and + 8 in 9 and 3 patients, respectively. CAs were less frequent in young patients (P = .02) and those treated with high-dose IM (P = .03). In all but 3 patients, CAs were transient and disappeared after a median of 5 months. One patient developed acute myeloid leukemia (associated with - 7). At last follow-up, 3 patients died from transplantation-related complications, myocardial infarction, and progressive disease and 2 lost cytogenetic response. CAs occur in Ph-negative cells in a small percentage of patients with newly diagnosed CML treated with IM. In rare instances, these could reflect the emergence of a new malignant clone.",TRUE,count/measurement
R11,Science,R30606,2D cascaded AdaBoost for eye localization,S102118,R30627,Methods,L61307,2D Cascaded AdaBoost,"In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term ""2D"", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on huge-scale training set; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. The proposed structure is applied to eye localization and evaluated on four public face databases, extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method",TRUE,count/measurement
R11,Science,R25537,Building a Game Engine: A Tale of Modern Model-Driven Engineering,S76928,R25538,Game Genres,L48093,2D Physics-based Games,"Game engines enable developers to reuse assets from previously developed games, thus easing the software-engineering challenges around the video-game development experience and making the implementation of games less expensive, less technologically brittle, and more efficient. However, the construction of game engines is challenging in itself, it involves the specification of well defined architectures and typical game play behaviors, flexible enough to enable game designers to implement their vision, while, at the same time, simplifying the implementation through asset and code reuse. In this paper we present a set of lessons learned through the design and construction PhyDSL-2, a game engine for 2D physics-based games. Our experience involves the active use of modern model-driven engineering technologies, to overcome the complexity of the engine design and to systematize its maintenance and evolution.",TRUE,count/measurement
R11,Science,R33447,Linking Success Factors to Financial Performance,S115635,R33448,Critical success factors,R33443,3PL experience,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,count/measurement
R11,Science,R33001,Prognos- tic and biologic significance of chromosomal imbalances assessed by comparative genomic hybridization in multiple myeloma,S114301,R33002,N (%) with abnormal karyotype,L69035,51 (69%),"Cytogenetic abnormalities, evaluated either by karyotype or by fluorescence in situ hybridization (FISH), are considered the most important prognostic factor in multiple myeloma (MM). However, there is no information about the prognostic impact of genomic changes detected by comparative genomic hybridization (CGH). We have analyzed the frequency and prognostic impact of genetic changes as detected by CGH and evaluated the relationship between these chromosomal imbalances and IGH translocation, analyzed by FISH, in 74 patients with newly diagnosed MM. Genomic changes were identified in 51 (69%) of the 74 MM patients. The most recurrent abnormalities among the cases with genomic changes were gains on chromosome regions 1q (45%), 5q (24%), 9q (24%), 11q (22%), 15q (22%), 3q (16%), and 7q (14%), while losses mainly involved chromosomes 13 (39%), 16q (18%), 6q (10%), and 8p (10%). Remarkably, the 6 patients with gains on 11q had IGH translocations. Multivariate analysis selected chromosomal losses, 11q gains, age, and type of treatment (conventional chemotherapy vs autologous transplantation) as independent parameters for predicting survival. Genomic losses retained the prognostic value irrespective of treatment approach. According to these results, losses of chromosomal material evaluated by CGH represent a powerful prognostic factor in MM patients.",TRUE,count/measurement
R11,Science,R34113,Clinically isolated syndromes: a new oligoclonal band test accurately predicts conversion to MS,S118353,R34114,Follow-up time/ total observation time after MON n M/F,L71423,6 years,"Background: Patients with a clinically isolated demyelinating syndrome (CIS) are at risk of developing a second attack, thus converting into clinically definite multiple sclerosis (CDMS). Therefore, an accurate prognostic marker for that conversion might allow early treatment. Brain MRI and oligoclonal IgG band (OCGB) detection are the most frequent paraclinical tests used in MS diagnosis. A new OCGB test has shown high sensitivity and specificity in differential diagnosis of MS. Objective: To evaluate the accuracy of the new OCGB method and of current MRI criteria (MRI-C) to predict conversion of CIS to CDMS. Methods: Fifty-two patients with CIS were studied with OCGB detection and brain MRI, and followed up for 6 years. The sensitivity and specificity of both methods to predict conversion to CDMS were analyzed. Results: OCGB detection showed a sensitivity of 91.4% and specificity of 94.1%. MRI-C had a sensitivity of 74.23% and specificity of 88.2%. The presence of either OCGB or MRI-C studied simultaneously showed a sensitivity of 97.1% and specificity of 88.2%. Conclusions: The presence of oligoclonal IgG bands is highly specific and sensitive for early prediction of conversion to multiple sclerosis. MRI criteria have a high specificity but less sensitivity. The simultaneous use of both tests shows high sensitivity and specificity in predicting clinically isolated demyelinating syndrome conversion to clinically definite multiple sclerosis.",TRUE,count/measurement
R11,Science,R27331,Effect of shot peening on residual stress and fatigue life of a spring steel,S88173,R27332,Steel Grade,L54581,60SC7 spring steel,"This study describes shot peening effects such as shot hardness, shot size and shot projection pressure, on the residual stress distribution and fatigue life in reversed torsion of a 60SC7 spring steel. There appears to be a correlation between the fatigue strength and the area under the residual stress distribution curve. The biggest shot shows the best fatigue life improvement. However, for a shorter time of shot peening, small hard shot showed the best performance. Moreover, the superficial residual stresses and the amount of work hardening (characterised by the width of the X-ray diffraction line) do not remain stable during fatigue cycling. Indeed they decrease and their reduction rate is a function of the cyclic stress level and an inverse function of the depth of the plastically deformed surface layer.",TRUE,count/measurement
R11,Science,R25081,Private traits and attributes are predictable from digital records of human behavior,S74277,R25082,Data,L46149,"Dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests.","We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.",TRUE,count/measurement
R11,Science,R33447,Linking Success Factors to Financial Performance,S115634,R33448,Critical success factors,R33442,relationship with 3PL,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,count/measurement
R11,Science,R33482,Key success factors and their performance implications in the Indian third-party logistics (3PL) industry,S115717,R33483,Critical success factors,R33479,relationship with 3PLs,"This paper uses the extant literature to identify the key success factors that are associated with performance in the Indian third-party logistics service providers (3PL) sector. We contribute to the sparse literature that has examined the relationship between key success factors and performance in the Indian 3PL context. This study offers new insights and isolates key success factors that vary in their impact on operations and financial performance measures. Specifically, we found that the key success factor of relationship with customers significantly influenced the operations measures of on-time delivery performance and customer satisfaction and the financial measure of profit growth. Similarly, the key success factor of skilled logistics professionals improved the operational measure of customer satisfaction and the financial measure of profit growth. The key success factor of breadth of service significantly affected the financial measure of revenue growth, but did not affect any operational measure. To further unravel the patterns of these results, a contingency analysis of these relationships according to firm size was also conducted. Relationship with 3PLs was significant irrespective of firm size. Our findings contribute to academic theory and managerial practice by offering context-specific suggestions on the usefulness of specific key success factors based on their potential influence on operational and financial performance in the Indian 3PL industry.",TRUE,count/measurement
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5768,R5230,Data,R5234,1.6 million,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,count/measurement
R141823,Semantic Web,R142799,Towards the Reuse of Standardized Thesauri Into Ontologies,S573833,R142801,data source,R142803,iso 25964,"One of the main holdbacks towards a wide use of ontologies is the high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to build ontologies from scratch. In the literature tools to support such reuse and conversion of thesauri as well as re-engineering patterns already exist. However, few of these tools rely on a sort of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, patterns proposed in the literature are not updated considering the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed to convert thesauri into OWL ontologies, differing from the existing approaches for taking into consideration ISO 25964 compliant thesauri and for applying completely automatic conversion rules.",TRUE,count/measurement
R141823,Semantic Web,R142799,Towards the Reuse of Standardized Thesauri Into Ontologies,S573857,R142801,dataset,R142803,iso 25964,"One of the main holdbacks towards a wide use of ontologies is the high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to build ontologies from scratch. In the literature tools to support such reuse and conversion of thesauri as well as re-engineering patterns already exist. However, few of these tools rely on a sort of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, patterns proposed in the literature are not updated considering the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed to convert thesauri into OWL ontologies, differing from the existing approaches for taking into consideration ISO 25964 compliant thesauri and for applying completely automatic conversion rules.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338128,R71590,Data,R71616,3D orthorhombic structure,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338121,R71590,Material,R71609,"780 nm, FAPbI3 NCs","Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338067,R71567,Data,R71571,applied voltages as low as 2 V,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338064,R71567,Data,R71568,current density of 232 mA·cm(-2),"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338133,R71590,Data,R71621,high external quantum efficiency of 2.3%,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338129,R71590,Data,R71617,high quantum yield (QY > 70%),"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338132,R71590,Data,R71620,low thresholds of 28 and 7.5 μJ cm–2,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338065,R71567,Data,R71569,maximum external quantum efficiency (EQE) of 0.48%,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338113,R71590,Material,R71601,"nonluminescent, wide-band-gap 1D polymorph","Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,count/measurement
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338068,R71567,Data,R71572,turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,count/measurement
R281,Social and Behavioral Sciences,R70733,Everything in moderation: ICT and reading performance of Dutch 15-year-olds,S337441,R70736,has data,R70764,PISA 2015,"Abstract Previous research on the relationship between students’ home and school Information and Communication Technology (ICT) resources and academic performance has shown ambiguous results. The availability of ICT resources at school has been found to be unrelated or negatively related to academic performance, whereas the availability of ICT resources at home has been found to be both positively and negatively related to academic performance. In addition, the frequency of use of ICT is related to students’ academic achievement. This relationship has been found to be negative for ICT use at school, however, for ICT use at home the literature on the relationship with academic performance is again ambiguous. In addition to ICT availability and ICT use, students’ attitudes towards ICT have also been found to play a role in student performance. In the present study, we examine how availability of ICT resources, students’ use of those resources (at school, outside school for schoolwork, outside school for leisure), and students’ attitudes toward ICT (interest in ICT, perceived ICT competence, perceived ICT autonomy) relate to individual differences in performance on a digital assessment of reading in one comprehensive model using the Dutch PISA 2015 sample of 5183 15-year-olds (49.2% male). Student gender and students’ economic, social, and cultural status accounted for a substantial part of the variation in digitally assessed reading performance. Controlling for these relationships, results indicated that students with moderate access to ICT resources, moderate use of ICT at school or outside school for schoolwork, and moderate interest in ICT had the highest digitally assessed reading performance. In contrast, students who reported moderate competence in ICT had the lowest digitally assessed reading performance. In addition, frequent use of ICT outside school for leisure was negatively related to digitally assessed reading performance, whereas perceived autonomy was positively related. Taken together, the findings suggest that excessive access to ICT resources, excessive use of ICT, and excessive interest in ICT is associated with lower digitally assessed reading performance.",TRUE,count/measurement
R281,Social and Behavioral Sciences,R70740,ICT Engagement: a new construct and its assessment in PISA 2015,S337442,R70741,has data,R70764,PISA 2015,"Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale “ICT as a topic in social interaction” and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.",TRUE,count/measurement
R281,Social and Behavioral Sciences,R70749,The Relation Between ICT and Science in PISA 2015 for Bulgarian and Finnish Students,S336645,R70751,has data,R70764,PISA 2015,"The relationship between Information and Communication Technology (ICT) and science performance has been the focus of much recent research, especially due to the prevalence of ICT in our digital society. However, the exploration of this relationship has yielded mixed results. Thus, the current study aims to uncover the learning processes that are linked to students’ science performance by investigating the effect of ICT variables on science for 15-year-old students in two countries with contrasting levels of technology implementation (Bulgaria n = 5,928 and Finland n = 5,882). The study analyzed PISA 2015 data using structural equation modeling to assess the impact of ICT use, availability, and comfort on students’ science scores, controlling for students’ socio-economic status. In both countries, results revealed that (1) ICT use and availability were associated with lower science scores and (2) students who were more comfortable with ICT performed better in science. This study can inform practical implementations of ICT in classrooms that consider the differential effect of ICT and it can advance theoretical knowledge around technology, learning, and cultural context.",TRUE,count/measurement
R354,Sociology,R44685,Treatment of dysthymia and minor depression in primary care: a randomized trial in patients aged 18 to 59 years,S136559,R44686,Most distal followup,L83460,11 weeks,"OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option.",TRUE,count/measurement
R354,Sociology,R44719,Treatment of dysthymia and minor depression in primary care: A randomized controlled trial in older adults,S136817,R44720,Most distal followup,L83630,11 weeks,"CONTEXT Insufficient evidence exists for recommendation of specific effective treatments for older primary care patients with minor depression or dysthymia. OBJECTIVE To compare the effectiveness of pharmacotherapy and psychotherapy in primary care settings among older persons with minor depression or dysthymia. DESIGN Randomized, placebo-controlled trial (November 1995-August 1998). SETTING Four geographically and clinically diverse primary care practices. PARTICIPANTS A total of 415 primary care patients (mean age, 71 years) with minor depression (n = 204) or dysthymia (n = 211) and a Hamilton Depression Rating Scale (HDRS) score of at least 10 were randomized; 311 (74.9%) completed all study visits. INTERVENTIONS Patients were randomly assigned to receive paroxetine (n = 137) or placebo (n = 140), starting at 10 mg/d and titrated to a maximum of 40 mg/d, or problem-solving treatment-primary care (PST-PC; n = 138). For the paroxetine and placebo groups, the 6 visits over 11 weeks included general support and symptom and adverse effects monitoring; for the PST-PC group, visits were for psychotherapy. MAIN OUTCOME MEASURES Depressive symptoms, by the 20-item Hopkins Symptom Checklist Depression Scale (HSCL-D-20) and the HDRS; and functional status, by the Medical Outcomes Study Short-Form 36 (SF-36) physical and mental components. RESULTS Paroxetine patients showed greater (difference in mean [SE] 11-week change in HSCL-D-20 scores, 0.21 [0.07]; P =.004) symptom resolution than placebo patients. Patients treated with PST-PC did not show more improvement than placebo (difference in mean [SE] change in HSCL-D-20 scores, 0.11 [0.13]; P =.13), but their symptoms improved more rapidly than those of placebo patients during the latter treatment weeks (P =.01). For dysthymia, paroxetine improved mental health functioning vs placebo among patients whose baseline functioning was high (difference in mean [SE] change in SF-36 mental component scores, 5.8 [2.02]; P =.01) or intermediate (difference in mean [SE] change in SF-36 mental component scores, 4.4 [1.74]; P =.03). Mental health functioning in dysthymia patients was not significantly improved by PST-PC compared with placebo (P≥.12 for low-, intermediate-, and high-functioning groups). For minor depression, both paroxetine and PST-PC improved mental health functioning in patients in the lowest tertile of baseline functioning (difference vs placebo in mean [SE] change in SF-36 mental component scores, 4.7 [2.03] for those taking paroxetine; 4.7 [1.96] for the PST-PC treatment; P =.02 vs placebo). CONCLUSIONS Paroxetine showed moderate benefit for depressive symptoms and mental health function in elderly patients with dysthymia and more severely impaired elderly patients with minor depression. The benefits of PST-PC were smaller, had slower onset, and were more subject to site differences than those of paroxetine.",TRUE,count/measurement
R354,Sociology,R44702,Randomised controlled trial comparing problem solving treatment with amitriptyline and placebo for major depression in primary care,S136680,R44703,Most distal followup,L83541,12 weeks,"Abstract Objective: To determine whether, in the treatment of major depression in primary care, a brief psychological treatment (problem solving) was (a) as effective as antidepressant drugs and more effective than placebo; (b) feasible in practice; and (c) acceptable to patients. Design: Randomised controlled trial of problem solving treatment, amitriptyline plus standard clinical management, and drug placebo plus standard clinical management. Each treatment was delivered in six sessions over 12 weeks. Setting: Primary care in Oxfordshire. Subjects: 91 patients in primary care who had major depression. Main outcome measures: Observer and self reported measures of severity of depression, self reported measure of social outcome, and observer measure of psychological symptoms at six and 12 weeks; self reported measure of patient satisfaction at 12 weeks. Numbers of patients recovered at six and 12 weeks. Results: At six and 12 weeks the difference in score on the Hamilton rating scale for depression between problem solving and placebo treatments was significant (5.3 (95% confidence interval 1.6 to 9.0) and 4.7 (0.4 to 9.0) respectively), but the difference between problem solving and amitriptyline was not significant (1.8 (−1.8 to 5.5) and 0.9 (−3.3 to 5.2) respectively). At 12 weeks 60% (18/30) of patients given problem solving treatment had recovered on the Hamilton scale compared with 52% (16/31) given amitriptyline and 27% (8/30) given placebo. Patients were satisfied with problem solving treatment; all patients who completed treatment (28/30) rated the treatment as helpful or very helpful. The six sessions of problem solving treatment totalled a mean therapy time of 3 1/2 hours. Conclusions: As a treatment for major depression in primary care, problem solving treatment is effective, feasible, and acceptable to patients. Key messages Patient compliance with antidepressant treatment is often poor, so there is a need for a psychological treatment This study found that problem solving is an effective psychological treatment for major depression in primary care—as effective as amitriptyline and more effective than placebo Problem solving is a feasible treatment in primary care, being effective when given over six sessions by a general practitioner Problem solving treatment is acceptable to patients",TRUE,count/measurement
R354,Sociology,R44704,"Randomised controlled trial of problem solving treatment, antidepressant medication, and combined treatment for major depression in primary care",S136702,R44705,Most distal followup,L83555,52 weeks,"Abstract Objectives: To determine whether problem solving treatment combined with antidepressant medication is more effective than either treatment alone in the management of major depression in primary care. To assess the effectiveness of problem solving treatment when given by practice nurses compared with general practitioners when both have been trained in the technique. Design: Randomised controlled trial with four treatment groups. Setting: Primary care in Oxfordshire. Participants: Patients aged 18-65 years with major depression on the research diagnostic criteria—a score of 13 or more on the 17 item Hamilton rating scale for depression and a minimum duration of illness of four weeks. Interventions: Problem solving treatment by research general practitioner or research practice nurse or antidepressant medication or a combination of problem solving treatment and antidepressant medication. Main outcome measures: Hamilton rating scale for depression, Beck depression inventory, clinical interview schedule (revised), and the modified social adjustment schedule assessed at 6, 12, and 52 weeks. Results: Patients in all groups showed a clear improvement over 12 weeks. The combination of problem solving treatment and antidepressant medication was no more effective than either treatment alone. There was no difference in outcome irrespective of who delivered the problem solving treatment. Conclusions: Problem solving treatment is an effective treatment for depressive disorders in primary care. The treatment can be delivered by suitably trained practice nurses or general practitioners. The combination of this treatment with antidepressant medication is no more effective than either treatment alone. Key messages Problem solving treatment is an effective treatment for depressive disorders in primary care Problem solving treatment can be delivered by suitably trained practice nurses as effectively as by general practitioners The combination of problem solving treatment and antidepressant medication is no more effective than either treatment alone Problem solving treatment is most likely to benefit patients who have a depressive disorder of moderate severity and who wish to participate in an active psychological treatment",TRUE,count/measurement
R30,Terrestrial and Aquatic Ecology,R171893,Alien plants can be associated with a decrease in local and regional native richness even when at low abundance,S686367,R171897,Has sample size,L462491,47 focal alien species,"The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native α‐richness (plot‐scale species numbers), and then assessed how this local decline was translated into declines in native species γ‐richness (landscape‐scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native α‐richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native γ‐richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native α‐richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.",TRUE,count/measurement
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134229,R44090,Proportion of population (Deaths),L82093,0.07%,"A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,count/measurement
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694456,R175294,Has mortality rate,L467000,0.20%,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,count/measurement
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134222,R44090,Proportion of population (Severe cases likely to require hospitalisation-1.0% of symptomatic cases),L82086,0.20%,"A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,count/measurement
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134337,R44098,Proportion of population (Deaths),L82165,0.28%,"A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,count/measurement
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134330,R44098,Proportion of population (Severe cases likely to require hospitalisation-1.0% of symptomatic cases),L82158,0.60%,"A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,count/measurement
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694459,R175294,Has mortality rate,L467003,4.10%,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,count/measurement
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134318,R44098,Population,R44084,10 million,"A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,count/measurement
R57,Virology,R36138,Estimating the generation interval for COVID-19 based on symptom onset data,S123853,R36142,incubation SD,L74583,2.8 days,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,count/measurement
R57,Virology,R36132,Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study,S123800,R36137,standard deviation of serial interval,L74543,3 days,"We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.",TRUE,count/measurement
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123621,R36110,mean serial interval,L74415,"4.56 (2.69, 6.42) days","Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,count/measurement
R57,Virology,R36132,Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study,S123799,R36137,mean of serial interval,L74542,5 days,"We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.",TRUE,count/measurement
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123623,R36110,mean incubation period,L74417,"7.1 (6.13, 8.25) days","Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,count/measurement
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123632,R36112,mean incubation period,L74423,"9 (7.92, 10.2) days","Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,count/measurement
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694474,R175294,has date ,L467018,Between December 2018,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,count/measurement
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694476,R175294,has date ,L467020,between December 2018,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,count/measurement
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S136245,R44139,patient characteristics,R44611,case 1,"Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. 
The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,count/measurement
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S136246,R44139,patient characteristics,R44612,case 2,"Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. 
The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,count/measurement
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S136247,R44139,patient characteristics,R44613,case 3,"Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. 
The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,count/measurement
R370,"Work, Economy and Organizations",R4542,Mining for Computing Jobs,S4878,R4549,result,L3312,20 clusters of similar skill sets that map to specific job definitions,"A Web content mining approach identified 20 job categories and the associated skills needs prevalent in the computing professions. Using a Web content data mining application, we extracted almost a quarter million unique IT job descriptions from various job search engines and distilled each to its required skill sets. We statistically examined these, revealing 20 clusters of similar skill sets that map to specific job definitions. The results allow software engineering professionals to tune their skills portfolio to match those in demand from real computing jobs across the US to attain more lucrative salaries and more mobility in a chaotic environment.",TRUE,count/measurement
R370,"Work, Economy and Organizations",R4337,You will be…: a study of job advertisements to determine employers' requirements for LIS professionals in the UK in 2007,S4497,R4341,Data,R4343,sample of 180 advertisements,"Purpose – The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach – A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006‐2007.Findings – The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...",TRUE,count/measurement
R370,"Work, Economy and Organizations",R4208,You will be…: a study of job advertisements to determine employers' requirements for LIS professionals in the UK in 2007,S4320,R4212,Material,R4220,sample of 180 advertisements,"Purpose – The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach – A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006‐2007.Findings – The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...",TRUE,count/measurement
R282,Agricultural and Resource Economics,R109335,Socio-economic Factors Affecting Adoption of Modern Information and Communication Technology by Farmers in India: Analysis Using Multivariate Probit Model,S499043,R109337,Country of study,L361131,India,"Abstract Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmer use, with the empirical model to analyze the factors that affect the farmer's adoption of different agriculture-related information sources. The analysis is done using a multivariate probit model and primary survey data of 1,200 farmer households of five Indo-Gangetic states of India, covering 120 villages. Findings: The results of the study highlight that farmer's age, education level and farm size influence farmer's behaviour in selecting different sources of information. The results show that farmers use multiple information sources, that may be complementary or substitutes to each other and this also implies that any single source does not satisfy all information needs of the farmer. Practical implication: If we understand the likelihood of farmer's choice of source of information then direction can be provided and policies can be developed to provide information through those sources in targeted regions with the most effective impact. Originality/Value: Information plays a key role in a farmer's life by enhancing their knowledge and strengthening their decision-making ability. Farmers use multiple sources of information as no one source is sufficient in itself.",TRUE,location
R282,Agricultural and Resource Economics,R109321,Farm Households' Simultaneous Use of Sources to Access Information on Cotton Crop Production,S499044,R109323,Country of study,L361132,Pakistan,"ABSTRACT This study has investigated farm households' simultaneous use of social networks, field extension, traditional media, and modern information and communication technologies (ICTs) to access information on cotton crop production. The study was based on a field survey, conducted in Punjab, Pakistan. Data were collected from 399 cotton farm households using the multistage sampling technique. Important combinations of information sources were found in terms of their simultaneous use to access information. The study also examined the factors influencing the use of various available information sources. A multivariate probit model was used considering the correlation among the use of social networks, field extension, traditional media, and modern ICTs. The findings indicated the importance of different socioeconomic and institutional factors affecting farm households' use of available information sources on cotton production. Important policy conclusions are drawn based on findings.",TRUE,location
R20,Anatomy,R110605,Epigenetic Hallmarks of Fetal Early Atherosclerotic Lesions in Humans,S503956,R110607,Study location,R110023,Italy,"Importance Although increasingly strong evidence suggests a role of maternal total cholesterol and low-density lipoprotein cholesterol (LDLC) levels during pregnancy as a risk factor for atherosclerotic disease in the offspring, the underlying mechanisms need to be clarified for future clinical applications. Objective To test whether epigenetic signatures characterize early fetal atherogenesis associated with maternal hypercholesterolemia and to provide a quantitative estimate of the contribution of maternal cholesterol level to fetal lesion size. Design, Setting, and Participants This autopsy study analyzed 78 human fetal aorta autopsy samples from the Division of Human Pathology, Department of Advanced Biomedical Sciences, Federico II University of Naples, Naples, Italy. Maternal levels of total cholesterol, LDLC, high-density lipoprotein cholesterol (HDLC), triglycerides, and glucose and body mass index (BMI) were determined during hospitalization owing to spontaneous fetal death. Data were collected and immediately processed and analyzed to prevent degradation from January 1, 2011, through November 30, 2016. Main Outcomes and Measurements Results of DNA methylation and messenger RNA levels of the following genes involved in cholesterol metabolism were assessed: superoxide dismutase 2 (SOD2), low-density lipoprotein receptor (LDLR), sterol regulatory element binding protein 2 (SREBP2), liver X receptor α (LXRα), and adenosine triphosphate–binding cassette transporter 1 (ABCA1). Results Among the 78 fetal samples included in the analysis (59% male; mean [SD] fetal age, 25 [3] weeks), maternal cholesterol level explained a significant proportion of the fetal aortic lesion variance in multivariate analysis (61%; P = .001) independently by the effect of levels of HDLC, triglycerides, and glucose and BMI. 
Moreover, maternal total cholesterol and LDLC levels were positively associated with methylation of SREBP2 in fetal aortas (Pearson correlation, 0.488 and 0.503, respectively), whereas in univariate analysis, they were inversely correlated with SREBP2 messenger RNA levels in fetal aortas (Pearson correlation, −0.534 and −0.671, respectively). Epivariations of genes controlling cholesterol metabolism in cholesterol-treated human aortic endothelial cells were also observed. Conclusions and Relevance The present study provides a stringent quantitative estimate of the magnitude of the association of maternal cholesterol levels during pregnancy with fetal aortic lesions and reveals the epigenetic response of fetal aortic SREBP2 to maternal cholesterol level. The role of maternal cholesterol level during pregnancy and epigenetic signature in offspring in cardiovascular primary prevention warrants further long-term causal relationship studies.",TRUE,location
R133,Artificial Intelligence,R139300,Personalized recommendations in e-participation: offline experiments for the 'Decide Madrid' platform,S556763,R139302,has been evaluated in the City,R139555,Madrid,"In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filter and rank those that are more relevant. Focusing on a particular case, the `Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity.",TRUE,location
R133,Artificial Intelligence,R139297,What's going on in my city?: recommender systems and electronic participatory budgeting,S556760,R139299,has been evaluated in the City,R139554,Miami,"In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities (Cambridge, Miami and New York City), we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, calls for further research in the area.",TRUE,location
R133,Artificial Intelligence,R182238,"Food Recognition: A New Dataset, Experiments, and Results",S704939,R182240,type,R182244,Multi,"We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.",TRUE,location
R133,Artificial Intelligence,R182352,Real-Time Mobile Food Recognition System,S705390,R182354,type,R182309,Multi,"We propose a mobile food recognition system, the purposes of which are estimating the calories and nutrition of foods and recording a user's eating habits. Since all the image recognition processes are performed on a smartphone, the system does not need to send images to a server and runs on an ordinary smartphone in real time. To recognize food items, a user first draws bounding boxes by touching the screen, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrabCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with a linear SVM and a fast χ2 kernel. In addition, the system estimates the direction of food regions where a higher SVM output score is expected to be obtained, and shows it as an arrow on the screen in order to ask the user to move the smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we achieved an 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained a positive evaluation in a user study compared to a food recording system without object recognition.",TRUE,location
R133,Artificial Intelligence,R172768,The design and implementation of the redland RDF application framework,S689229,R172769,Engine,R172708,Redland,"Resource Description Framework (RDF) is a general description technology that can be applied to many application domains. Redland is a flexible and efficient implementation of RDF that complements this power and provides high-level interfaces allowing instances of the model to be stored, queried and manipulated in C, Perl, Python, Tcl and other languages. Redland is implemented using an object-based API, providing several of the implementation classes as modules which can be added, removed or replaced to allow different functionality or application-specific optimisations. The framework provides the core technology for developing new RDF applications, experimenting with implementation techniques, APIs and representation issues.",TRUE,location
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345262,R75376,Country of study,R29715,Brazil,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,location
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345259,R75376,Country of study,R29994,Italy,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,location
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345261,R75376,Country of study,R29998,Sweden,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,location
R104,Bioinformatics,R138934,An Affect Prediction Approach Through Depression Severity Parameter Incorporation in Neural Networks,S552084,R138936,Outcome assessment,R138938,Valence,"Humans use emotional expressions to communicate their internal affective states. These behavioral expressions are often multi-modal (e.g. facial expression, voice and gestures) and researchers have proposed several schemes to predict the latent affective states based on these expressions. The relationship between the latent affective states and their expression is hypothesized to be affected by several factors; depression disorder being one of them. Despite a wide interest in affect prediction, and several studies linking the effect of depression on affective expressions, only a limited number of affect prediction models account for the depression severity. In this work, we present a novel scheme that incorporates depression severity as a parameter in Deep Neural Networks (DNNs). In order to predict affective dimensions for an individual at hand, our scheme alters the DNN activation function based on the subject’s depression severity. We perform experiments on affect prediction in two different sessions of the Audio-Visual Depressive language Corpus, which involves patients with varying degree of depression. Our results show improvements in arousal and valence prediction on both the sessions using the proposed DNN modeling. We also present analysis of the impact of such an alteration in DNNs during training and testing.",TRUE,location
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690017,R172944,Country of study,R172955,Germany,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,location
R334,Comparative Politics,R111145,The Social Structure of the Croatian Parliament in Five Mandates,S506120,R111147,Location ,R75535,Croatia,"In this paper we analyzed the social attributes and political experience of the members of the Croatian Parliament in five assemblies. We established that the multiparty parliament, during the 18 years of its existence, was dominated by men, averagely between 47 and 49 years of age, Croats, Catholics, highly educated, predominantly in the social sciences and humanities, and politicians with significant managerial and political experience acquired primarily during their work in political parties. Moreover, we found a relatively large fluctuation of parliamentarians, resulting in a lower level of parliamentary experience and a relatively short parliamentary career. Based on these indicators, it can be stated that in Croatia a socially homogenous parliamentary elite was formed, one with a potentially lower level of political competence, and that patterns of political recruitment, coherent in tendency with those in the developed democratic countries, were established.",TRUE,location
R334,Comparative Politics,R111148,"Central European Parliaments over Two Decades – Diminishing Stability? Parliaments in Czech Republic, Hungary, Poland, and Slovenia",S506134,R111150,Location ,R75536,Czech Republic,"This paper compares the development in four Central European parliaments (Czech Republic, Hungary, Poland, and Slovenia) in the second decade after the fall of communism. At the end of the first decade, the four parliaments could be considered stabilised, functional, independent and internally organised institutions. Attention is paid particularly to the changing institutional context and pressure of ‘Europeanisation’, the changing party strengths, and the functional and political consequences of these changes. Parliaments have been transformed from primary legislative to mediating and supervisory bodies. Though Central European parliaments have become stable in their structure and formal rules as well as in their professionalisation, at the end of the second decade their stability was threatened.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558951,R139995,has smart city instance,R139999,Amsterdam,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558952,R139995,has smart city instance,R140000,Barcelona,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558955,R139995,has smart city instance,R140003,Chicago,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139784,The Management Of Heritage In Contested Cross-Border Contexts: Emerging Research On The Island Of Ireland,S557983,R139785,Country of study,R29993,Ireland,"This paper introduces the recently begun REINVENT research project focused on the management of heritage in the cross-border cultural landscape of Derry/Londonderry. The importance of facilitating dialogue over cultural heritage to the maintenance of ‘thin’ borders in contested cross-border contexts is underlined in the paper, as is the relatively favourable strategic policy context for progressing ‘heritage diplomacy’ on the island of Ireland. However, it is argued that more inclusive and participatory approaches to the management of heritage are required to assist in the mediation of contestation, particularly accommodating a greater diversity of ‘non-expert’ opinion, in addition to helping identify value conflicts and dissonance. The application of digital technologies in the form of Public Participation Geographic Information Systems (PPGIS) is proposed, and this is briefly discussed in relation to some of the expected benefits and methodological challenges that must be addressed in the REINVENT project. The paper concludes by emphasising the importance of dialogue and knowledge exchange between academia and heritage policymakers/practitioners.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558964,R139995,has smart city instance,R140012,Konza,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558953,R139995,has smart city instance,R140001,London,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558956,R139995,has smart city instance,R140004,New York,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558963,R139995,has smart city instance,R140011,Rio de Janeiro,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558959,R139995,has smart city instance,R140007,Singapore,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558957,R139995,has smart city instance,R140005,Stockholm,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,location
R417,Cultural History,R139736,Public History and Contested Heritage: Archival Memories of the Bombing of Italy,S557904,R139743,Country of study,R139747,United Kingdom,"This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.",TRUE,location
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558928,R139978,has smart city instance,R139981,Budapest (Hungary),"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,location
R142,Earth Sciences,R9094,Development and evaluation of an Earth-System model – HadGEM2,S14293,R9095,Earth System Model,R9104,Ocean,"Abstract. We describe here the development and evaluation of an Earth system model suitable for centennial-scale climate prediction. The principal new components added to the physical climate model are the terrestrial and ocean ecosystems and gas-phase tropospheric chemistry, along with their coupled interactions. The individual Earth system components are described briefly and the relevant interactions between the components are explained. Because the multiple interactions could lead to unstable feedbacks, we go through a careful process of model spin up to ensure that all components are stable and the interactions balanced. This spun-up configuration is evaluated against observed data for the Earth system components and is generally found to perform very satisfactorily. The reason for the evaluation phase is that the model is to be used for the core climate simulations carried out by the Met Office Hadley Centre for the Coupled Model Intercomparison Project (CMIP5), so it is essential that addition of the extra complexity does not detract substantially from its climate performance. Localised changes in some specific meteorological variables can be identified, but the impacts on the overall simulation of present day climate are slight. This model is proving valuable both for climate predictions, and for investigating the strengths of biogeochemical feedbacks.",TRUE,location
R142,Earth Sciences,R144217,Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping,S577399,R144219,Study Area,L404185,Africa,"The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145296,Molecular identification of mosquitoes (Diptera: Culicidae) in southeastern Australia,S581573,R145298,Study Location ,R144230, Australia,"Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species ─ Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes ─ were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p‐distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. The DNA barcodes produced by this study have been uploaded to the ‘Mosquitoes of Australia–Victoria’ project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146646,Comprehensive evaluation of DNA barcoding for the molecular species identification of forensically important Australian Sarcophagidae (Diptera),S587102,R146648,Study Location ,R144230, Australia,"Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81–11.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145437,DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae),S582204,R145438,Study Location ,R144205, Nigeria,"The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species and barcodes of these species not always formed single clusters in the NJ / ML analyses which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02), and among the different genera suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R108983,"Barcoding the butterflies of southern South America: Species delimitation efficacy, cryptic diversity and geographic patterns of divergence",S629473,R157025,Study Location ,R155637,Argentina,"Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%–9% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145296,Molecular identification of mosquitoes (Diptera: Culicidae) in southeastern Australia,S624666,R155768,Study Location ,R155772,Australia,"Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species ─ Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes ─ were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p‐distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. 
The DNA barcodes produced by this study have been uploaded to the ‘Mosquitoes of Australia–Victoria’ project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146646,Comprehensive evaluation of DNA barcoding for the molecular species identification of forensically important Australian Sarcophagidae (Diptera),S623953,R155647,Study Location ,R155659,Australia,"Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation were 1.12% and 2.81–11.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145437,DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae),S624563,R155749,Study Location ,R155755,Benin,"The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02) and among the different genera, suggesting that optimal thresholds are better defined at the genus level. 
In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138562,Fast Census of Moth Diversity in the Neotropics: A Comparison of Field-Assigned Morphospecies and DNA Barcoding in Tiger Moths,S629213,R156968,Study Location ,R155639,Brazil,"The morphological species delimitations (i.e. morphospecies) have long been the best way to avoid the taxonomic impediment and compare insect taxa biodiversity in highly diverse tropical and subtropical regions. The development of DNA barcoding, however, has shown great potential to replace (or at least complement) the morphospecies approach, with the advantage of relying on automated methods implemented in computer programs or even online rather than in often subjective morphological features. We sampled moths extensively for two years using light traps in a patch of the highly endangered Atlantic Forest of Brazil to produce a nearly complete census of arctiines (Noctuoidea: Erebidae), whose species richness was compared using different morphological and molecular approaches (DNA barcoding). A total of 1,075 barcode sequences of 286 morphospecies were analyzed. Based on the clustering method Barcode Index Number (BIN) we found a taxonomic bias of approximately 30% in our initial morphological assessment. However, a morphological reassessment revealed that the correspondence between morphospecies and molecular operational taxonomic units (MOTUs) can be up to 94% if differences in genitalia morphology are evaluated in individuals of different MOTUs originated from the same morphospecies (putative cases of cryptic species), and by recording if individuals of different genders in different morphospecies merge together in the same MOTU (putative cases of sexual dimorphism). The results of two other clustering methods (i.e. Automatic Barcode Gap Discovery and 2% threshold) were very similar to those of the BIN approach. 
Using empirical data we have shown that DNA barcoding performed substantially better than the morphospecies approach, based on superficial morphology, to delimit species of a highly diverse moth taxon, and thus should be used in species inventories.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145434,"DNA Barcoding of Neotropical Sand Flies (Diptera, Psychodidae, Phlebotominae): Species Identification and Discovery within Brazil",S624600,R155758,Study Location ,R155762,Brazil,"DNA barcoding has been an effective tool for species identification in several animal groups. Here, we used DNA barcoding to discriminate between 47 morphologically distinct species of Brazilian sand flies. DNA barcodes correctly identified approximately 90% of the sampled taxa (42 morphologically distinct species) using clustering based on neighbor-joining distance, of which four species showed comparatively higher maximum values of divergence (range 4.23–19.04%), indicating cryptic diversity. The DNA barcodes also corroborated the resurrection of two species within the shannoni complex and provided an efficient tool to differentiate between morphologically indistinguishable females of closely related species. Taken together, our results validate the effectiveness of DNA barcoding for species identification and the discovery of cryptic diversity in sand flies from Brazil.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146639,DNA barcodes for species delimitation in Chironomidae (Diptera): a case study on the genus Labrundinia,S624167,R155674,Study Location ,R155679,Brazil,"Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from São Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers. Résumé Our study evaluates the utility of DNA barcodes for delimiting 79 specimens of 13 species of nonbiting midges of the subfamily Tanypodinae (Diptera: Chironomidae) from São Paulo State, Brazil. Our study confirms the use of DNA barcodes as an excellent tool for species identification and for resolving taxonomic problems in the genus Labrundinia. 
A molecular analysis of COI gene sequences yields taxon identification trees delimiting 13 cohesive species groups, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stages. In addition, another group previously described from morphological characters was linked to molecular markers. There is a clear barcode gap and, in some species, substantial pairwise interspecific divergences (up to 19.3%), which permitted the identification of all the species examined. Our results also show that barcodes can be used to associate the different life stages of chironomids, since the COI gene is easily amplified and sequenced from the different stages with the universal barcode primers.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629322,R156994,Study Location ,R149575,Canada,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157051,"A Transcontinental Challenge — A Test of DNA Barcode Performance for 1,541 Species of Canadian Noctuoidea (Lepidoptera)",S629612,R157052,Study Location ,R149575,Canada,"This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or “owlet” moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. 
Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157062,"A Comprehensive DNA Barcode Library for the Looper Moths (Lepidoptera: Geometridae) of British Columbia, Canada",S629694,R157063,Study Location ,R149575,Canada,"Background The construction of comprehensive reference libraries is essential to foster the development of DNA barcoding as a tool for monitoring biodiversity and detecting invasive species. The looper moths of British Columbia (BC), Canada present a challenging case for species discrimination via DNA barcoding due to their considerable diversity and limited taxonomic maturity. Methodology/Principal Findings By analyzing specimens held in national and regional natural history collections, we assemble barcode records from representatives of 400 species from BC and surrounding provinces, territories and states. Sequence variation in the barcode region unambiguously discriminates over 93% of these 400 geometrid species. However, a final estimate of resolution success awaits detailed taxonomic analysis of 48 species where patterns of barcode variation suggest cases of cryptic species, unrecognized synonymy as well as young species. Conclusions/Significance A catalog of these taxa meriting further taxonomic investigation is presented as well as the supplemental information needed to facilitate these investigations.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145554,Identifying the Main Mosquito Species in China Based on DNA Barcoding,S624203,R155680,Study Location ,R155687,China,"Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145495,"DNA Barcoding for the Identification of Sand Fly Species (Diptera, Psychodidae, Phlebotominae) in Colombia",S624419,R155723,Study Location ,R155728,Colombia,"Sand flies include a group of insects that are of medical importance and that vary in geographic distribution, ecology, and pathogen transmission. Approximately 163 species of sand flies have been reported in Colombia. Surveillance of the presence of sand fly species and the updating of species distribution records are important for predicting the risks of, and monitoring the expansion of, the diseases which sand flies can transmit. Currently, the identification of phlebotomine sand flies is based on morphological characters. However, morphological identification requires considerable skill and taxonomic expertise. In addition, significant morphological similarity between some species, especially among females, may cause difficulties during the identification process. DNA-based approaches have become increasingly useful and promising tools for estimating sand fly diversity and for ensuring the rapid and accurate identification of species. A partial sequence of the mitochondrial cytochrome oxidase gene subunit I (COI) is currently being used to differentiate species in different animal taxa, including insects, and it is referred to as a barcoding sequence. The present study explored the utility of the DNA barcode approach for the identification of phlebotomine sand flies in Colombia. We sequenced 700 bp of the COI gene from 36 species collected from different geographic localities. The COI barcode sequence divergence within a single species was <2% in most cases, whereas this divergence ranged from 9% to 26.6% among different species. These results indicated that the barcoding gene correctly discriminated among the previously morphologically identified species with an efficacy of nearly 100%. 
Analyses of the generated sequences indicated that the observed species groupings were consistent with the morphological identifications. In conclusion, the barcoding gene was useful for species discrimination in sand flies from Colombia.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140197,DNA barcodes distinguish species of tropical Lepidoptera,S628651,R156766,Study Location ,R155640,Costa Rica,"Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservación Guanacaste in northwestern Costa Rica. We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R135750,Characterization and comparison of poorly known moth communities through DNA barcoding in two Afrotropical environments in Gabon,S537022,R135752,Study Location ,R135755,Gabon,"Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142517,"A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding‐based biomonitoring",S624752,R155788,Study Location ,R155792,Germany,"This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species‐level assignment, so-called "dark taxa." Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. 
The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the “taxonomic impediment”; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species‐rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145437,DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae),S624565,R155749,Study Location ,R155756,Ghana,"The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02) and among the different genera, suggesting that optimal thresholds are better defined at the genus level. 
In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145437,DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae),S624561,R155749,Study Location ,R155754,Nigeria,"The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publicly available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02) and among the different genera, suggesting that optimal thresholds are better defined at the genus level. 
In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R136201,DNA barcode analysis of butterfly species from Pakistan points towards regional endemism,S629356,R156999,Study Location ,R155767,Pakistan,"DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north‐central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7–14.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour‐joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145304,Analyzing Mosquito (Diptera: Culicidae) Diversity in Pakistan by DNA Barcoding,S624633,R155763,Study Location ,R155767,Pakistan,"Background Although they are important disease vectors, mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010–2013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0–2.4%, while congeneric species showed from 2.3–17.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the latter species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R136193,Complete DNA barcode reference library for a country's butterfly fauna reveals high performance for temperate Europe,S629406,R157016,Study Location ,R157020,Romania,"DNA barcoding aims to accelerate species identification and discovery, but performance tests have shown marked differences in identification success. As a consequence, there remains a great need for comprehensive studies which objectively test the method in groups with a solid taxonomic framework. This study focuses on the 180 species of butterflies in Romania, accounting for about one third of the European butterfly fauna. This country includes five eco-regions, the highest of any in the European Union, and is a good representative for temperate areas. Morphology and DNA barcodes of more than 1300 specimens were carefully studied and compared. Our results indicate that 90 per cent of the species form barcode clusters allowing their reliable identification. The remaining cases involve nine closely related species pairs, some whose taxonomic status is controversial or that hybridize regularly. Interestingly, DNA barcoding was found to be the most effective identification tool, outperforming external morphology, and being slightly better than male genitalia. Romania is now the first country to have a comprehensive DNA barcode reference database for butterflies. Similar barcoding efforts based on comprehensive sampling of specific geographical regions can act as functional modules that will foster the early application of DNA barcoding while a global system is under development.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556419,R139510,Study Location ,R139511,Spain,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) was assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139546,"A DNA barcode reference library for Swiss butterflies and forester moths as a tool for species identification, systematics and conservation",S628886,R156861,Study Location ,R44048,Switzerland,"Butterfly monitoring and Red List programs in Switzerland rely on a combination of observations and collection records to document changes in species distributions through time. While most butterflies can be identified using morphology, some taxa remain challenging, making it difficult to accurately map their distributions and develop appropriate conservation measures. In this paper, we explore the use of the DNA barcode (a fragment of the mitochondrial gene COI) as a tool for the identification of Swiss butterflies and forester moths (Rhopalocera and Zygaenidae). We present a national DNA barcode reference library including 868 sequences representing 217 out of 224 resident species, or 96.9% of Swiss fauna. DNA barcodes were diagnostic for nearly 90% of Swiss species. The remaining 10% represent cases of para- and polyphyly likely involving introgression or incomplete lineage sorting among closely related taxa. We demonstrate that integrative taxonomic methods incorporating a combination of morphological and genetic techniques result in a rate of species identification of over 96% in females and over 98% in males, higher than either morphology or DNA barcodes alone. We explore the use of the DNA barcode for exploring boundaries among taxa, understanding the geographical distribution of cryptic diversity and evaluating the status of purportedly endemic taxa. Finally, we discuss how DNA barcodes may be used to improve field practices and ultimately enhance conservation strategies.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145491,DNA barcoding of tropical black flies (Diptera: Simuliidae) of Thailand,S624452,R155729,Study Location ,R155733,Thailand,"The ecological and medical importance of black flies drives the need for rapid and reliable identification of these minute, structurally uniform insects. We assessed the efficiency of DNA barcoding for species identification of tropical black flies. A total of 351 cytochrome c oxidase subunit 1 sequences were obtained from 41 species in six subgenera of the genus Simulium in Thailand. Despite high intraspecific genetic divergence (mean = 2.00%, maximum = 9.27%), DNA barcodes provided 96% correct identification. Barcodes also differentiated cytoforms of selected species complexes, albeit with varying levels of success. Perfect differentiation was achieved for two cytoforms of Simulium feuerborni, and 91% correct identification was obtained for the Simulium angulistylum complex. Low success (33%), however, was obtained for the Simulium siamense complex. The differential efficiency of DNA barcodes to discriminate cytoforms was attributed to different levels of genetic structure and demographic histories of the taxa. DNA barcode trees were largely congruent with phylogenies based on previous molecular, chromosomal and morphological analyses, but revealed inconsistencies that will require further evaluation.",TRUE,location
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145437,DNA Barcoding to Improve the Taxonomy of the Afrotropical Hoverflies (Insecta: Diptera: Syrphidae),S624567,R155749,Study Location ,R155757,Togo,"The identification of Afrotropical hoverflies is very difficult because of limited recent taxonomic revisions and the lack of comprehensive identification keys. In order to assist in their identification, and to improve the taxonomy of this group, we constructed a reference dataset of 513 COI barcodes of 90 of the more common nominal species from Ghana, Togo, Benin and Nigeria (W Africa) and added ten publically available COI barcodes from nine nominal Afrotropical species to this (total: 523 COI barcodes; 98 nominal species; 26 genera). The identification accuracy of this dataset was evaluated with three methods (K2P distance-based, Neighbor-Joining (NJ) / Maximum Likelihood (ML) analysis, and using SpeciesIdentifier). Results of the three methods were highly congruent and showed a high identification success. Nine species pairs showed a low (< 0.03) mean interspecific K2P distance that resulted in several incorrect identifications. A high (> 0.03) maximum intraspecific K2P distance was observed in eight species, and barcodes of these species did not always form single clusters in the NJ / ML analyses, which may indicate the occurrence of cryptic species. Optimal K2P thresholds to differentiate intra- from interspecific K2P divergence were highly different among the three subfamilies (Eristalinae: 0.037, Syrphinae: 0.06, Microdontinae: 0.007–0.02), and among the different genera, suggesting that optimal thresholds are better defined at the genus level. In addition to providing an alternative identification tool, our study indicates that DNA barcoding improves the taxonomy of Afrotropical hoverflies by selecting (groups of) taxa that deserve further taxonomic study, and by attributing the unknown sex to species for which only one of the sexes is known.",TRUE,location
R24,Ecology and Evolutionary Biology,R54030,"Seedling traits, plasticity and local differentiation as strategies of invasive species of Impatiens in central Europe",S165252,R54031,Continent,L100190,Europe,"Background and Aims Invasiveness of some alien plants is associated with their traits, plastic responses to environmental conditions and interpopulation differentiation. To obtain insights into the role of these processes in contributing to variation in performance, we compared congeneric species of Impatiens (Balsaminaceae) with different origin and invasion status that occur in central Europe. Methods Native I. noli-tangere and three alien species (highly invasive I. glandulifera, less invasive I. parviflora and potentially invasive I. capensis) were studied and their responses to simulated canopy shading and different nutrient and moisture levels were determined in terms of survival and seedling traits. Key Results and Conclusions Impatiens glandulifera produced high biomass in all the treatments and the control, exhibiting the ‘Jack-and-master’ strategy that makes it a strong competitor from germination onwards. The results suggest that plasticity and differentiation occurred in all the species tested and that along the continuum from plasticity to differentiation, the species at the plasticity end is the better invader. The most invasive species I. glandulifera appears to be highly plastic, whereas the other two less invasive species, I. parviflora and I. capensis, exhibited lower plasticity but rather strong population differentiation. The invasive Impatiens species were taller and exhibited higher plasticity and differentiation than native I. noli-tangere. This suggests that even within one genus, the relative importance of the phenomena contributing to invasiveness appears to be species-specific.",TRUE,location
R24,Ecology and Evolutionary Biology,R54090,Multiple common garden experiments suggest lack of local adaptation in an invasive ornamental plant,S165958,R54091,Continent,L100776,Europe,"Aims Adaptive evolution along geographic gradients of climatic conditions is suggested to facilitate the spread of invasive plant species, leading to clinal variation among populations in the introduced range. We investigated whether adaptation to climate is also involved in the invasive spread of an ornamental shrub, Buddleja davidii, across western and central Europe. Methods We combined a common garden experiment, replicated in three climatically different central European regions, with reciprocal transplantation to quantify genetic differentiation in growth and reproductive traits of 20 invasive B. davidii populations. Additionally, we compared compensatory regrowth among populations after clipping of stems to simulate mechanical damage.",TRUE,location
R24,Ecology and Evolutionary Biology,R54210,Contrasting plant physiological adaptation to climate in the native and introduced range of Hypericum perforatum,S167367,R54211,Continent,L101945,Europe,"Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf δ13C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf δ13C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf δ13C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.",TRUE,location
R24,Ecology and Evolutionary Biology,R54707,Do biodiversity and human impact influence the introduction or establishment of alien mammals?,S197515,R57247,Continent,R49274,Europe,"What determines the number of alien species in a given region? ‘Native biodiversity’ and ‘human impact’ are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between the number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by the influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between the number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in the number of introduced mammals or in establishment success. Our results suggest that the correlation between the numbers of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in the number of alien mammals cannot be explained by differences in establishment success. Our findings highlight the importance of human activities and question, at least for mammals in Europe, the importance of biotic acceptance and resistance.",TRUE,location
R24,Ecology and Evolutionary Biology,R54958,"Quarantine arthropod invasions in Europe: the role of climate, hosts and propagule pressure",S175720,R54959,Continent,L108540,Europe,"To quantify the relative importance of propagule pressure, climate‐matching and host availability for the invasion of agricultural pest arthropods in Europe and to forecast newly emerging pest species and European areas with the highest risk of arthropod invasion under current climate and a future climate scenario (A1F1).",TRUE,location
R24,Ecology and Evolutionary Biology,R54977,Introduction history and species characteristics partly explain naturalization success of North American woody species in Europe,S175927,R54978,Continent,L108709,Europe,"1 The search for general characteristics of invasive species has not been very successful yet. A reason for this could be that current invasion patterns are mainly reflecting the introduction history (i.e. time since introduction and propagule pressure) of the species. Accurate data on the introduction history are, however, rare, particularly for introduced alien species that have not established. As a consequence, few studies that tested for the effects of species characteristics on invasiveness corrected for introduction history. 2 We tested whether the naturalization success of 582 North American woody species in Europe, measured as the proportion of European geographic regions in which each species is established, can be explained by their introduction history. For 278 of these species we had data on characteristics related to growth form, life cycle, growth, fecundity and environmental tolerance. We tested whether naturalization success can be further explained by these characteristics. In addition, we tested whether the effects of species characteristics differ between growth forms. 3 Both planting frequency in European gardens and time since introduction significantly increased naturalization success, but the effect of the latter was relatively weak. After correction for introduction history and taxonomy, six of the 26 species characteristics had significant effects on naturalization success. Leaf retention and precipitation tolerance increased naturalization success. Tree species were only 56% as likely to naturalize as non‐tree species (vines, shrubs and subshrubs), and the effect of planting frequency on naturalization success was much stronger for non‐trees than for trees. On the other hand, the naturalization success of trees, but not of non‐trees, increased with native range size, maximum plant height and seed spread rate. 4 Synthesis. Our results suggest that introduction history, particularly planting frequency, is an important determinant of current naturalization success of North American woody species (particularly of non‐trees) in Europe. Therefore, studies comparing naturalization success among species should correct for introduction history. Species characteristics are also significant determinants of naturalization success, but their effects may differ between growth forms.",TRUE,location
R24,Ecology and Evolutionary Biology,R54994,Propagule pressure and the invasion risks of non-native freshwater fishes: a case study in England,S176122,R54995,Continent,L108870,Europe,"European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0·05) with increasing import intensity (log10(x + 1) of the numbers of fish imported for the years 2000–2004) and with increasing consignment diversity (log10(x + 1) of the numbers of consignment types imported for the years 2000–2004). The implications for policy and management are discussed.",TRUE,location
R24,Ecology and Evolutionary Biology,R54996,"The demography of introduction pathways, propagule pressure and occurrences of non-native freshwater fish in England",S176144,R54997,Continent,L108888,Europe,"1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10×10 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. © Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.",TRUE,location
R24,Ecology and Evolutionary Biology,R55061,Determinants of vertebrate invasion success in Europe and North America,S197519,R57248,Continent,R49274,Europe,"Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.",TRUE,location
R24,Ecology and Evolutionary Biology,R55074,Predicting invasions by woody species in a temperate zone: a test of three risk assessment schemes in the Czech Republic (Central Europe),S193638,R57006,Continent,R49274,Europe,"To assess the validity of previously developed risk assessment schemes in the conditions of Central Europe, we tested (1) Australian weed risk assessment scheme (WRA; Pheloung et al. 1999); (2) WRA with additional analysis by Daehler et al. (2004); and (3) decision tree scheme of Reichard and Hamilton (1997) developed in North America, on a data set of 180 alien woody species commonly planted in the Czech Republic. This list included 17 invasive species, 9 naturalized but non‐invasive, 31 casual aliens, and 123 species not reported to escape from cultivation. The WRA model with additional analysis provided best results, rejecting 100% of invasive species, accepting 83.8% of non‐invasive, and recommending further 13.0% for additional analysis. Overall accuracy of the WRA model with additional analysis was 85.5%, higher than that of the basic WRA scheme (67.9%) and the Reichard–Hamilton model (61.6%). Only the Reichard–Hamilton scheme accepted some invaders. The probability that an accepted species will become an invader was zero for both WRA models and 3.2% for the Reichard–Hamilton model. The probability that a rejected species would have been an invader was 77.3% for both WRA models and 24.0% for the Reichard–Hamilton model. It is concluded that the WRA model, especially with additional analysis, appears to be a promising template for building a widely applicable system for screening out invasive plant introductions.",TRUE,location
R24,Ecology and Evolutionary Biology,R56545,Ecological traits of the amphipod invader Dikerogammarus villosus on a mesohabitat scale,S187085,R56546,Continent,L116089,Europe,"Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus' geographic extension and quickly increasing population density have enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.",TRUE,location
R24,Ecology and Evolutionary Biology,R56666,Preferences of the Ponto-Caspian amphipod Dikerogammarus haemobaphes for living zebra mussels,S188478,R56667,Continent,L117240,Europe,"A Ponto-Caspian amphipod Dikerogammarus haemobaphes has recently invaded European waters. In the recipient area, it encountered Dreissena polymorpha, a habitat-forming bivalve, co-occurring with the gammarids in their native range. We assumed that interspecific interactions between these two species, which could develop during their long-term co-evolution, may affect the gammarid behaviour in novel areas. We examined the gammarid ability to select a habitat containing living mussels and searched for cues used in that selection. We hypothesized that they may respond to such traits of a living mussel as byssal threads, activity (e.g. valve movements, filtration) and/or shell surface properties. We conducted the pairwise habitat-choice experiments in which we offered various objects to single gammarids in the following combinations: (1) living mussels versus empty shells (the general effect of living Dreissena); (2) living mussels versus shells with added byssal threads and shells with byssus versus shells without it (the effect of byssus); (3) living mussels versus shells, both coated with nail varnish to neutralize the shell surface (the effect of mussel activity); (4) varnished versus clean living mussels (the effect of shell surface); (5) varnished versus clean stones (the effect of varnish). We checked the gammarid positions in the experimental tanks after 24 h. The gammarids preferred clean living mussels over clean shells, regardless of the presence of byssal threads under the latter. They responded to the shell surface, exhibiting preferences for clean mussels over varnished individuals. They were neither affected by the presence of byssus nor by mussel activity. The ability to detect and actively select zebra mussel habitats may be beneficial for D. haemobaphes and help it establish stable populations in newly invaded areas.",TRUE,location
R24,Ecology and Evolutionary Biology,R56670,"Do rabbits eat voles? Apparent competition, habitat heterogeneity and large-scale coexistence under mink predation",S188524,R56671,Continent,L117278,Europe,"Habitat heterogeneity is predicted to profoundly influence the dynamics of indirect interspecific interactions; however, despite potentially significant consequences for multi-species persistence, this remains almost completely unexplored in large-scale natural landscapes. Moreover, how spatial habitat heterogeneity affects the persistence of interacting invasive and native species is also poorly understood. Here we show how the persistence of a native prey (water vole, Arvicola terrestris) is determined by the spatial distribution of an invasive prey (European rabbit, Oryctolagus cuniculus) and directly infer how this is defined by the mobility of a shared invasive predator (American mink, Neovison vison). This study uniquely demonstrates that variation in habitat connectivity in large-scale natural landscapes creates spatial asynchrony, enabling coexistence between apparent competitive native and invasive species. These findings highlight that unexpected interactions may be involved in species declines, and also that in such cases habitat heterogeneity should be considered in wildlife management decisions.",TRUE,location
R24,Ecology and Evolutionary Biology,R56847,Cascading ecological effects caused by the establishment of the emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) in European Russia,S190493,R56848,Continent,L118893,Europe,"Emerald ash borer, Agrilus planipennis, is a destructive invasive forest pest in North America and European Russia. This pest species is rapidly spreading in European Russia and is likely to arrive in other countries soon. The aim is to analyze the ecological consequences of the establishment of this pest in European Russia and investigate (1) what other xylophagous beetles develop on trees affected by A. planipennis, (2) how common is the parasitoid of the emerald ash borer Spathius polonicus (Hymenoptera: Braconidae: Doryctinae) and what is the level of parasitism by this species, and (3) how susceptible is the native European ash species Fraxinus excelsior to A. planipennis. A survey of approximately 1000 Fraxinus pennsylvanica trees damaged by A. planipennis in 13 localities has shown that Hylesinus varius (Coleoptera: Curculionidae: Scolytinae), Tetrops starkii (Coleoptera: Cerambycidae) and Agrilus convexicollis (Coleoptera: Buprestidae) were common on these trees. Spathius polonicus is frequently recorded. About 50 percent of late instar larvae of A. planipennis sampled were parasitized by S. polonicus. Maps of the distributions of T. starkii, A. convexicollis and S. polonicus before and after the establishment of A. planipennis in European Russia were compiled. It is hypothesized that these species, which are native to the West Palaearctic, spread into central European Russia after A. planipennis became established there. Current observations confirm those of previous authors that native European ash Fraxinus excelsior is susceptible to A. planipennis, increasing the threat posed by this pest. The establishment of A. planipennis has resulted in a cascade of ecological effects, such as outbreaks of other xylophagous beetles in A. planipennis-infested trees. It is likely that the propagation of S. polonicus will reduce the incidence of outbreaks of A. planipennis.",TRUE,location
R24,Ecology and Evolutionary Biology,R56909,Over-invasion in a freshwater ecosystem: newly introduced virile crayfish (Orconectes virilis) outcompete established invasive signal crayfish (Pacifastacus leniusculus),S191182,R56910,Continent,L119458,Europe,"Abstract Biological invasions are a key threat to freshwater biodiversity, and identifying determinants of invasion success is a global conservation priority. The establishment of introduced species is predicted to be hindered by pre-existing, functionally similar invasive species. Over a five-year period we, however, find that in the River Lee (UK), recently introduced non-native virile crayfish (Orconectes virilis) increased in range and abundance, despite the presence of established alien signal crayfish (Pacifastacus leniusculus). In regions of sympatry, virile crayfish had a detrimental effect on signal crayfish abundance but not vice versa. Competition experiments revealed that virile crayfish were more aggressive than signal crayfish and outcompeted them for shelter. Together, these results provide early evidence for the potential over-invasion of signal crayfish by competitively dominant virile crayfish. Based on our results and the limited distribution of virile crayfish in Europe, we recommend that efforts to contain them within the Lee catchment be implemented immediately.",TRUE,location
R24,Ecology and Evolutionary Biology,R56986,"Alien mammals in Europe: updated numbers and trends, and assessment of the effects on biodiversity",S193649,R56988,Continent,R49274,Europe,"This study provides an updated picture of mammal invasions in Europe, based on detailed analysis of information on introductions occurring from the Neolithic to recent times. The assessment considered all information on species introductions, known extinctions and successful eradication campaigns, to reconstruct a trend of alien mammals' establishment in the region. Through a comparative analysis of the data on introduction, with the information on the impact of alien mammals on native and threatened species of Europe, the present study also provides an objective assessment of the overall impact of mammal introductions on European biodiversity, including information on impact mechanisms. The results of this assessment confirm the constant increase of mammal invasions in Europe, with no indication of a reduction of the rate of introduction. The study also confirms the severe impact of alien mammals, which directly threaten a significant number of native species, including many highly threatened species. The results could help to prioritize species for response, as required by international conventions and obligations.",TRUE,location
R24,Ecology and Evolutionary Biology,R56990,Alien aquatic plant species in European countries,S193648,R56991,Continent,R49274,Europe,"Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297–306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.",TRUE,location
R24,Ecology and Evolutionary Biology,R56996,Invasion success of vertebrates in Europe and North America,S197520,R57249,Continent,R49274,Europe,"Species become invasive if they (i) are introduced to a new range, (ii) establish themselves, and (iii) spread. To address the global problems caused by invasive species, several studies investigated steps ii and iii of this invasion process. However, only one previous study looked at step i and examined the proportion of species that have been introduced beyond their native range. We extend this research by investigating all three steps for all freshwater fish, mammals, and birds native to Europe or North America. A higher proportion of European species entered North America than vice versa. However, the introduction rate from Europe to North America peaked in the late 19th century, whereas it is still rising in the other direction. There is no clear difference in invasion success between the two directions, so neither the imperialism dogma (that Eurasian species are exceptionally successful invaders) is supported, nor is the contradictory hypothesis that North America offers more biotic resistance to invaders than Europe because of its less disturbed and richer biota. Our results do not support the tens rule either: that approximately 10% of all introduced species establish themselves and that approximately 10% of established species spread. We find a success of approximately 50% at each step. In comparison, only approximately 5% of native vertebrates were introduced in either direction. These figures show that, once a vertebrate is introduced, it has a high potential to become invasive. Thus, it is crucial to minimize the number of species introductions to effectively control invasive vertebrates.",TRUE,location
R24,Ecology and Evolutionary Biology,R57010,"Alien flora of Europe: species diversity, temporal trends, geographical patterns and research needs",S193640,R57015,Continent,R49274,Europe,"The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004–2008), funded by the 6th Framework Programme of the European Union and aimed at “creating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments”. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964–1980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. 
The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). 
Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprizing the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. 
This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.",TRUE,location
R24,Ecology and Evolutionary Biology,R57030,A generic impact-scoring system applied to alien mammals in Europe,S193618,R57031,Continent,R49274,Europe,"Abstract: We present a generic scoring system that compares the impact of alien species among members of large taxonomic groups. This scoring can be used to identify the most harmful alien species so that conservation measures to ameliorate their negative effects can be prioritized. For all alien mammals in Europe, we assessed impact reports as completely as possible. Impact was classified as either environmental or economic. We subdivided each of these categories into five subcategories (environmental: impact through competition, predation, hybridization, transmission of disease, and herbivory; economic: impact on agriculture, livestock, forestry, human health, and infrastructure). We assigned all impact reports to one of these 10 categories. All categories had impact scores that ranged from zero (minimal) to five (maximal possible impact at a location). We summed all impact scores for a species to calculate ""potential impact"" scores. We obtained ""actual impact"" scores by multiplying potential impact scores by the percentage of area occupied by the respective species in Europe. Finally, we correlated species’ ecological traits with the derived impact scores. Alien mammals from the orders Rodentia, Artiodactyla, and Carnivora caused the highest impact. In particular, the brown rat (Rattus norvegicus), muskrat (Ondathra zibethicus), and sika deer (Cervus nippon) had the highest overall scores. Species with a high potential environmental impact also had a strong potential economic impact. Potential impact also correlated with the distribution of a species in Europe. Ecological flexibility (measured as number of different habitats a species occupies) was strongly related to impact. 
The scoring system was robust to uncertainty in knowledge of impact and could be adjusted with weight scores to account for specific value systems of particular stakeholder groups (e.g., agronomists or environmentalists). Finally, the scoring system is easily applicable and adaptable to other taxonomic groups.",TRUE,location
R24,Ecology and Evolutionary Biology,R57065,Globalisation in marine ecosystems: the story of non-indigenous marine species across European seas,S193599,R57066,Continent,R49274,Europe,"The introduction of non-indigenous species (NIS) across the major European seas is a dynamic non-stop process. Up to September 2004, 851 NIS (the majority being zoobenthic organisms) have been reported in European marine and brackish waters, the majority during the 1960s and 1970s. The Mediterranean is by far the major recipient of exotic species with an average of one introduction every 4 wk over the past 5 yr. Of the 25 species recorded in 2004, 23 were reported in the Mediterranean and only two in the Baltic. The most updated patterns and trends in the rate, mode of introduction and establishment success of introductions were examined, revealing a process similar to introductions in other parts of the world, but with the uniqueness of migrants through the Suez Canal into the Mediterranean (Lessepsian or Erythrean migration). Shipping appears to be the major vector of introduction (excluding the Lessepsian migration). Aquaculture is also an important vector with target species outnumbered by those introduced unintentionally. More than half of immigrants have been established in at least one regional sea. However, for a significant part of the introductions both the establishment success and mode of introduction remain unknown. Finally, comparing trends across taxa and seas is not as accurate as could have been wished because there are differences in the spatial and taxonomic effort in the study of NIS. These differences lead to the conclusion that the number of NIS remains an underestimate, calling for continuous updating and systematic research.",TRUE,location
R24,Ecology and Evolutionary Biology,R57072,Environmental and economic impact assessment of alien and invasive fish species in Europe using the generic impact scoring system,S193586,R57073,Continent,R49274,Europe,"Invasions by alien species are one of the major threats to the native environment. There are multifold attempts to counter alien species, but limited resources for mitigation or eradication programmes makes prioritisation indispensable. We used the generic impact scoring system to assess the impact of alien fish species in Europe. It prioritises species, but also offers the possibility to compare the impact of alien invasive species between different taxonomic groups. For alien fish in Europe, we compiled a list of 40 established species. By literature research, we assessed the environmental impact (through herbivory, predation, competition, disease transmission, hybridisation and ecosystem alteration) and economic impact (on agriculture, animal production, forestry, human infrastructure, human health and human social life) of each species. The goldfish/gibel complex Carassius auratus/C. gibelio scored the highest impact points, followed by the grass carp Ctenopharyngodon idella and the topmouth gudgeon Pseudorasbora parva. According to our analyses, alien fish species have the strongest impact on the environment through predation, followed by competition with native species. Besides negatively affecting animal production (mainly in aquaculture), alien fish have no pronounced economic impact. At the species level, C. auratus/C. gibelio show similar impact scores to the worst alien mammals in Europe. This study indicates that the generic impact scoring system is useful to investigate the impact of alien fish, also allowing cross-taxa comparisons. Our results are therefore of major relevance for stakeholders and decision-makers involved in management and eradication of alien fish species.",TRUE,location
R24,Ecology and Evolutionary Biology,R57075,"How well do we understand the impacts of alien species on ecosystem services? A pan-European, cross-taxa assessment",S193591,R57080,Continent,R49274,Europe,"Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates – in terrestrial, freshwater, and marine environments – on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect “supporting”, “provisioning”, “regulating”, and “cultural” services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.",TRUE,location
R24,Ecology and Evolutionary Biology,R57157,Human-related processes drive the richness of exotic birds in Europe,S197572,R57158,Continent,R49274,Europe,"Both human-related and natural factors can affect the establishment and distribution of exotic species. Understanding the relative role of the different factors has important scientific and applied implications. Here, we examined the relative effect of human-related and natural factors in determining the richness of exotic bird species established across Europe. Using hierarchical partitioning, which controls for covariation among factors, we show that the most important factor is the human-related community-level propagule pressure (the number of exotic species introduced), which is often not included in invasion studies due to the lack of information for this early stage in the invasion process. Another, though less important, factor was the human footprint (an index that includes human population size, land use and infrastructure). Biotic and abiotic factors of the environment were of minor importance in shaping the number of established birds when tested at a European extent using 50×50 km2 grid squares. We provide, to our knowledge, the first map of the distribution of exotic bird richness in Europe. The richest hotspot of established exotic birds is located in southeastern England, followed by areas in Belgium and The Netherlands. Community-level propagule pressure remains the major factor shaping the distribution of exotic birds also when tested for the UK separately. Thus, studies examining the patterns of establishment should aim at collecting the crucial and hard-to-find information on community-level propagule pressure or develop reliable surrogates for estimating this factor. Allowing future introductions of exotic birds into Europe should be reconsidered carefully, as the number of introduced species is basically the main factor that determines the number established.",TRUE,location
R24,Ecology and Evolutionary Biology,R57321,Biotic acceptance in introduced amphibians and reptiles in Europe and North America,S197453,R57322,Continent,R49274,Europe,"Aim The biotic resistance hypothesis argues that complex plant and animal communities are more resistant to invasion than simpler communities. Conversely, the biotic acceptance hypothesis states that non-native and native species richness are positively related. Most tests of these hypotheses at continental scales, typically conducted on plants, have found support for biotic acceptance. We tested these hypotheses on both amphibians and reptiles across Europe and North America. Location Continental countries in Europe and states/provinces in North America. Methods We used multiple linear regression models to determine which factors predicted successful establishment of amphibians and reptiles in Europe and North America, and additional models to determine which factors predicted native species richness. Results Successful establishment of amphibians and reptiles in Europe and reptiles in North America was positively related to native species richness. We found higher numbers of successful amphibian species in Europe than in North America. Potential evapotranspiration (PET) was positively related to non-native species richness for amphibians and reptiles in Europe and reptiles in North America. PET was also the primary factor determining native species richness for both amphibians and reptiles in Europe and North America. Main conclusions We found support for the biotic acceptance hypothesis for amphibians and reptiles in Europe and reptiles in North America, suggesting that the presence of native amphibian and reptile species generally indicates good habitat for non-native species. Our data suggest that the greater number of established amphibians per native amphibians in Europe than in North America might be explained by more introductions in Europe or climate-matching of the invaders. 
Areas with high native species richness should be the focus of control and management efforts, especially considering that non-native species located in areas with a high number of natives can have a large impact on biological diversity.",TRUE,location
R24,Ecology and Evolutionary Biology,R57341,Ecological resistance to Acer negundo invasion in a European riparian forest: relative importance of environmental and biotic drivers,S197441,R57342,Continent,R49274,Europe,"Question The relative importance of environmental vs. biotic resistance of recipient ecological communities remains poorly understood in invasion ecology. Acer negundo, a North American tree, has widely invaded riparian forests throughout Europe at the ecotone between early- (Salix spp. and Populus spp.) and late-successional (Fraxinus spp.) species. However, it is not present in the upper part of the Rhone River, where native Alnus incana occurs at an intermediate position along the successional riparian gradient. Is this absence of the invasive tree due to environmental or biotic resistance of the recipient communities, and in particular due to the presence of Alnus? Location Upper Rhone River, France. Methods We undertook a transplant experiment in an Alnus-dominated community along the Upper Rhone River, where we compared Acer negundo survival and growth, with and without biotic interactions (tree and herb layer effects), to those of four native tree species from differing successional positions in the Upper Rhone communities (P. alba, S. alba, F. excelsior and Alnus incana). Results Without biotic interactions Acer negundo performed similarly to native species, suggesting that the Upper Rhone floodplain is not protected from Acer invasion by a simple abiotic barrier. In contrast, this species performed less well than F. excelsior and Alnus incana in environments with intact tree and/or herb layers. Alnus showed the best growth rate in these conditions, indicating biotic resistance of the native plant community. Conclusions We did not find evidence for an abiotic barrier to Acer negundo invasion of the Upper Rhone River floodplain communities, but our results suggest a biotic resistance. 
In particular, we demonstrated that (i) additive competitive effects of the tree and herb layer led to Acer negundo suppression and (ii) Alnus incana grew more rapidly than Acer negundo in this intermediate successional niche.",TRUE,location
R24,Ecology and Evolutionary Biology,R57609,Invasiveness of Ammophila arenaria: Release from soil-borne pathogens?,S203755,R57611,Continent,R49274,Europe,"The Natural Enemies Hypothesis (i.e., introduced species experience release from their natural enemies) is a common explanation for why invasive species are so successful. We tested this hypothesis for Ammophila arenaria (Poaceae: European beachgrass), an aggressive plant invading the coastal dunes of California, USA, by comparing the demographic effects of belowground pathogens on A. arenaria in its introduced range to those reported in its native range. European research on A. arenaria in its native range has established that soil-borne pathogens, primarily nematodes and fungi, reduce A. arenaria's growth. In a greenhouse experiment designed to parallel European studies, seeds and 2-wk-old seedlings were planted in sterilized and nonsterilized soil collected from the A. arenaria root zone in its introduced range of California. We assessed the effects of pathogens via soil sterilization on three early performance traits: seed germination, seedling survival, and plant growth. We found that seed germinatio...",TRUE,location
R24,Ecology and Evolutionary Biology,R57612,Diversity and abundance patterns of phytophagous insect communities on alien and native host plants in the Brassicaceae,S203753,R57613,Continent,R49274,Europe,"The herbivore load (abundance and species richness of herbivores) on alien plants is supposed to be one of the keys to understand the invasiveness of species. We investigate the phytophagous insect communities on cabbage plants (Brassicaceae) in Europe. We compare the communities of endophagous and ectophagous insects as well as of Coleoptera and Lepidoptera on native and alien cabbage plant species. Contrary to many other reports, we found no differences in the herbivore load between native and alien hosts. The majority of insect species attacked alien as well as native hosts. Across insect species, there was no difference in the patterns of host range on native and on alien hosts. Likewise the similarity of insect communities across pairs of host species was not different between natives and aliens. We conclude that the general similarity in the community patterns between native and alien cabbage plant species are due to the chemical characteristics of this plant family. All cabbage plants share glucosinolates. This may facilitate host switches from natives to aliens. Hence the presence of native congeners may influence invasiveness of alien plants.",TRUE,location
R24,Ecology and Evolutionary Biology,R57618,Plant-soil biota interactions and spatial distribution of black cherry in its native and invasive ranges,S203751,R57619,Continent,R49274,Europe,"One explanation for the higher abundance of invasive species in their non-native than native ranges is the escape from natural enemies. But there are few experimental studies comparing the parallel impact of enemies (or competitors and mutualists) on a plant species in its native and invaded ranges, and release from soil pathogens has been rarely investigated. Here we present evidence showing that the invasion of black cherry (Prunus serotina) into north-western Europe is facilitated by the soil community. In the native range in the USA, the soil community that develops near black cherry inhibits the establishment of neighbouring conspecifics and reduces seedling performance in the greenhouse. In contrast, in the non-native range, black cherry readily establishes in close proximity to conspecifics, and the soil community enhances the growth of its seedlings. Understanding the effects of soil organisms on plant abundance will improve our ability to predict and counteract plant invasions.",TRUE,location
R24,Ecology and Evolutionary Biology,R57654,Phytophagous insects of giant hogweed Heracleum mantegazzianum (Apiaceae) in invaded areas of Europe and in its native area of the Caucasus,S203673,R57655,Continent,R49274,Europe,"Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, literature records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host-specific herbivores on H. mantegazzianum. When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the ""enemy release hypothesis"" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous generalists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed.",TRUE,location
R24,Ecology and Evolutionary Biology,R57700,The invasive shrub Buddleja davidii performs better in its introduced range,S203622,R57701,Continent,R49274,Europe,"It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe.",TRUE,location
R24,Ecology and Evolutionary Biology,R57755,Release from foliar and floral fungal pathogen species does not explain the geographic spread of naturalized North American plants in Europe,S203590,R57756,Continent,R49274,Europe,"1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy‐release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus–plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy‐release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non‐invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.",TRUE,location
R24,Ecology and Evolutionary Biology,R57785,Caterpillar assemblages on introduced blue spruce. differences from native Norway spruce ,S203583,R57786,Continent,R49274,Europe,"Blue spruce (Picea pungens Engelm.) is native to the central and southern Rocky Mountains of the USA (DAUBENMIRE, 1972), from where it has been introduced to other parts of North America, Europe, etc. In Central Europe, blue spruce was mostly planted in ornamental settings in urban areas, Christmas tree plantations and forests too. In the Slovak Republic, blue spruce has patchy distribution. Its scattered stands cover the area of 2,618 ha and 0.14% of the forest area (data from the National Forest Centre, Zvolen). Compared to the Slovak Republic, the area afforested with blue spruce in the Czech Republic is much larger –8,741 ha and 0.4% of the forest area (KRIVANEK et al., 2006, UHUL, 2006). Plantations of blue spruce in the Czech Republic were largely established in the western and north-western parts of the country (BERAN and SINDELAR, 1996; BALCAR et al., 2008b).",TRUE,location
R24,Ecology and Evolutionary Biology,R57791,Virulence of soil-borne pathogens and invasion by Prunus serotina,S203582,R57793,Continent,R49274,Europe,"*Globally, exotic invaders threaten biodiversity and ecosystem function. Studies often report that invading plants are less affected by enemies in their invaded vs home ranges, but few studies have investigated the underlying mechanisms. *Here, we investigated the variation in prevalence, species composition and virulence of soil-borne Pythium pathogens associated with the tree Prunus serotina in its native US and non-native European ranges by culturing, DNA sequencing and controlled pathogenicity trials. *Two controlled pathogenicity experiments showed that Pythium pathogens from the native range caused 38-462% more root rot and 80-583% more seedling mortality, and 19-45% less biomass production than Pythium from the non-native range. DNA sequencing indicated that the most virulent Pythium taxa were sampled only from the native range. The greater virulence of Pythium sampled from the native range therefore corresponded to shifts in species composition across ranges rather than variation within a common Pythium species. *Prunus serotina still encounters Pythium in its non-native range but encounters less virulent taxa. Elucidating patterns of enemy virulence in native and nonnative ranges adds to our understanding of how invasive plants escape disease. Moreover, this strategy may identify resident enemies in the non-native range that could be used to manage invasive plants.",TRUE,location
R24,Ecology and Evolutionary Biology,R57924,"Macroparasite Fauna of Alien Grey Squirrels (Sciurus carolinensis): Composition, Variability and Implications for Native Species",S203454,R57925,Continent,R49274,Europe,"Introduced hosts populations may benefit of an ""enemy release"" through impoverishment of parasite communities made of both few imported species and few acquired local ones. Moreover, closely related competing native hosts can be affected by acquiring introduced taxa (spillover) and by increased transmission risk of native parasites (spillback). We determined the macroparasite fauna of invasive grey squirrels (Sciurus carolinensis) in Italy to detect any diversity loss, introduction of novel parasites or acquisition of local ones, and analysed variation in parasite burdens to identify factors that may increase transmission risk for native red squirrels (S. vulgaris). Based on 277 grey squirrels sampled from 7 populations characterised by different time scales in introduction events, we identified 7 gastro-intestinal helminths and 4 parasite arthropods. Parasite richness is lower than in grey squirrel's native range and independent from introduction time lags. The most common parasites are Nearctic nematodes Strongyloides robustus (prevalence: 56.6%) and Trichostrongylus calcaratus (6.5%), red squirrel flea Ceratophyllus sciurorum (26.0%) and Holarctic sucking louse Neohaematopinus sciuri (17.7%). All other parasites are European or cosmopolitan species with prevalence below 5%. S. robustus abundance is positively affected by host density and body mass, C. sciurorum abundance increases with host density and varies with seasons. Overall, we show that grey squirrels in Italy may benefit of an enemy release, and both spillback and spillover processes towards native red squirrels may occur.",TRUE,location
R24,Ecology and Evolutionary Biology,R53261,A phylogenetic approach towards understanding the drivers of plant invasiveness on Robben Island- South Africa,S163812,R53264,Continent,R49273,Africa,"Invasive plant species are a considerable threat to ecosystems globally and on islands in particular where species diversity can be relatively low. In this study, we examined the phylogenetic basis of invasion success on Robben Island in South Africa. The flora of the island was sampled extensively and the phylogeny of the local community was reconstructed using the two core DNA barcode regions, rbcLa and matK. By analysing the phylogenetic patterns of native and invasive floras at two different scales, we found that invasive alien species are more distantly related to native species, a confirmation of Darwin's naturalization hypothesis. However, this pattern also holds even for randomly generated communities, therefore discounting the explanatory power of Darwin's naturalization hypothesis as the unique driver of invasion success on the island. These findings suggest that the drivers of invasion success on the island may be linked to species traits rather than their evolutionary history alone, or to the combination thereof. This result also has implications for the invasion management programmes currently being implemented to rehabilitate the native diversity on Robben Island. © 2013 The Linnean Society of London, Botanical Journal of the Linnean Society, 2013, 172, 142–152.",TRUE,location
R24,Ecology and Evolutionary Biology,R54642,Re-colonisation rate differs between co-existing indigenous and invasive intertidal mussels following major disturbance,S172500,R54643,Continent,L106100,Africa,"The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. We tested the hypotheses that in the mid-zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the percent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M. galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid-zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. perna in more sheltered areas (especially in bays) that are periodically disturbed by storms.",TRUE,location
R24,Ecology and Evolutionary Biology,R54715,"Human activity facilitates altitudinal expansion of exotic plants along a road in montane grassland, South Africa",S173372,R54716,Continent,L106826,Africa,"ABSTRACT Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m × 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. 
Nomenclature: Germishuizen & Meyer (2003).",TRUE,location
R24,Ecology and Evolutionary Biology,R55048,Modeling Invasive Plant Spread: The Role of Plant-Environment Interactions and Model Structure,S176721,R55049,Continent,L109361,Africa,"Alien plants invade many ecosystems worldwide and often have substantial negative effects on ecosystem structure and functioning. Our ability to quantitatively predict these impacts is, in part, limited by the absence of suitable plant-spread models and by inadequate parameter estimates for such models. This paper explores the effects of model, plant, and environmental attributes on predicted rates and patterns of spread of alien pine trees (Pinus spp.) in South African fynbos (a mediterranean-type shrubland). A factorial experimental design was used to: (1) compare the predictions of a simple reaction-diffusion model and a spatially explicit, individual-based simulation model; (2) investigate the sensitivity of predicted rates and patterns of spread to parameter values; and (3) quantify the effects of the simulation model's spatial grain on its predictions. The results show that the spatial simulation model places greater emphasis on interactions among ecological processes than does the reaction-diffusion model. This ensures that the predictions of the two models differ substantially for some factor combinations. The most important factor in the model is dispersal ability. Fire frequency, fecundity, and age of reproductive maturity are less important, while adult mortality has little effect on the model's predictions. The simulation model's predictions are sensitive to the model's spatial grain. This suggests that simulation models that use matrices as a spatial framework should ensure that the spatial grain of the model is compatible with the spatial processes being modeled. We conclude that parameter estimation and model development must be integrated procedures. This will ensure that the model's structure is compatible with the biological processes being modeled. Failure to do so may result in spurious predictions.",TRUE,location
R24,Ecology and Evolutionary Biology,R55059,Reproductive potential and seedling establishment of the invasive alien tree Schinus molle (Anacardiaceae) in South Africa,S176845,R55060,Continent,L109463,Africa,"Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864–1 233 690 seeds per tree per year; 3877–9477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortillis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure to drive the rate of recruitment: densities of seedlings and sapling densities were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants.",TRUE,location
R24,Ecology and Evolutionary Biology,R55092,Inferring Process from Pattern in Plant Invasions: A Semimechanistic Model Incorporating Propagule Pressure and Environmental Factors,S177223,R55093,Continent,L109775,Africa,"Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa’s Agulhas Plain (2,160 km2). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species.",TRUE,location
R24,Ecology and Evolutionary Biology,R55099,Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna,S187769,R56605,Continent,L116655,Africa,"1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird‐dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy‐fruited species than indigenous species, we mapped the distribution of alien fleshy‐fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy‐fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy‐fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy‐fruited plants were found both beneath host trees and in the open, alien fleshy‐fruited plants were found only beneath trees. 4 Abundance of fleshy‐fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy‐fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy‐fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy‐fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi‐arid African savanna by alien fleshy‐fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem‐level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy‐fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.",TRUE,location
R24,Ecology and Evolutionary Biology,R55146,Propagule pressure drives establishment of introduced freshwater fish: quantitative evidence from an irrigation network,S177831,R55147,Continent,L110268,Africa,"Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.",TRUE,location
R24,Ecology and Evolutionary Biology,R56867,Comparisons of isotopic niche widths of some invasive and indigenous fauna in a South African river,S190715,R56868,Continent,L119075,Africa,"Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (˜0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12–0.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24–0.60) and Azolla filiculoides (0.09–0.33) as well as detrital Barringtonia racemosa leaves (0.00–0.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.",TRUE,location
R24,Ecology and Evolutionary Biology,R57231,Predicting the landscape-scale distribution of alien plants and their threat to plant diversity,S197531,R57232,Continent,R49273,Africa,"Abstract: Invasive alien organisms pose a major threat to global biodiversity. The Cape Peninsula, South Africa, provides a case study of the threat of alien plants to native plant diversity. We sought to identify where alien plants would invade the landscape and what their threat to plant diversity could be. This information is needed to develop a strategy for managing these invasions at the landscape scale. We used logistic regression models to predict the potential distribution of six important invasive alien plants in relation to several environmental variables. The logistic regression models showed that alien plants could cover over 89% of the Cape Peninsula. Acacia cyclops and Pinus pinaster were predicted to cover the greatest area. These predictions were overlaid on the current distribution of native plant diversity for the Cape Peninsula in order to quantify the threat of alien plants to native plant diversity. We defined the threat to native plant diversity as the number of native plant species (divided into all species, rare and threatened species, and endemic species) whose entire range is covered by the predicted distribution of alien plant species. We used a null model, which assumed a random distribution of invaded sites, to assess whether area invaded is confounded with threat to native plant diversity. The null model showed that most alien species threaten more plant species than might be suggested by the area they are predicted to invade. For instance, the logistic regression model predicted that P. pinaster threatens 350 more native species, 29 more rare and threatened species, and 21 more endemic species than the null model would predict. Comparisons between the null and logistic regression models suggest that species richness and invasibility are positively correlated and that species richness is a poor indicator of invasive resistance in the study site. Our results emphasize the importance of adopting a spatially explicit approach to quantifying threats to biodiversity, and they provide the information needed to prioritize threats from alien species and the sites that need urgent management intervention.",TRUE,location
R24,Ecology and Evolutionary Biology,R57720,"Herbivores, but not other insects, are scarce on alien plants",S203628,R57721,Continent,R49273,Africa,"Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants – in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects.",TRUE,location
R24,Ecology and Evolutionary Biology,R53405,Plant invasions in Taiwan: Insights from the flora of casual and naturalized alien species,S163778,R53406,Continent,R49275,Asia,"Data on floristic status, biological attributes, chronology and distribution of naturalized species have been shown to be a very powerful tool for discerning the patterns of plant invasions and species invasiveness. We analysed the newly compiled list of casual and naturalized plant species in Taiwan (probably the only complete data set of this kind in East Asia) and found that Taiwan is relatively lightly invaded with only 8% of the flora being casual or naturalized. Moreover, the index of casual and naturalized species per log area is also moderate, in striking contrast with many other island floras where contributions of naturalized species are much higher. Casual and naturalized species have accumulated steadily and almost linearly over the past decades. Fabaceae, Asteraceae, and Poaceae are the families with the most species. However, Amaranthaceae, Convolvulaceae, and Onagraceae have the largest ratios of casual and naturalized species to their global numbers. Ipomoea, Solanum and Crotalaria have the highest numbers of casual and naturalized species. About 60% of all genera with exotic species are new to Taiwan. Perennial herbs represent one third of the casual and naturalized flora, followed by annual herbs. About 60% of exotic species were probably introduced unintentionally onto the island; many species imported intentionally have ornamental, medicinal, or forage values. The field status of 50% of these species is unknown, but ornamentals represent noticeable proportions of naturalized species, while forage species represent a relatively larger proportion of casual species. Species introduced for medicinal purposes seem to be less invasive. Most of the casual and naturalized species of Taiwan originated from the Tropical Americas, followed by Asia and Europe.",TRUE,location
R24,Ecology and Evolutionary Biology,R57223,Distributions of exotic plants in eastern Asia and North America,S197543,R57224,Continent,R49275,Asia,"Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed.",TRUE,location
R24,Ecology and Evolutionary Biology,R57710,"Metazoan parasites of introduced round and tubenose gobies in the Great Lakes: Support for the ""Enemy Release Hypothesis""",S203633,R57711,Continent,R49275,Asia,"ABSTRACT Recent invasion theory has hypothesized that newly established exotic species may initially be free of their native parasites, augmenting their population success. Others have hypothesized that invaders may introduce exotic parasites to native species and/or may become hosts to native parasites in their new habitats. Our study analyzed the parasites of two exotic Eurasian gobies that were detected in the Great Lakes in 1990: the round goby Apollonia melanostoma and the tubenose goby Proterorhinus semilunaris. We compared our results from the central region of their introduced ranges in Lakes Huron, St. Clair, and Erie with other studies in the Great Lakes over the past decade, as well as Eurasian native and nonindigenous habitats. Results showed that goby-specific metazoan parasites were absent in the Great Lakes, and all but one species were represented only as larvae, suggesting that adult parasites presently are poorly-adapted to the new gobies as hosts. Seven parasitic species are known to infest the tubenose goby in the Great Lakes, including our new finding of the acanthocephalan Southwellina hispida, and all are rare. We provide the first findings of four parasite species in the round goby and clarified two others, totaling 22 in the Great Lakes—with most being rare. In contrast, 72 round goby parasites occur in the Black Sea region. Trematodes are the most common parasitic group of the round goby in the Great Lakes, as in their native Black Sea range and Baltic Sea introduction. Holarctic trematode Diplostomum spathaceum larvae, which are one of two widely distributed species shared with Eurasia, were found in round goby eyes from all Great Lakes localities except Lake Huron proper. 
Our study and others reveal no overall increases in parasitism of the invasive gobies over the past decade after their establishment in the Great Lakes. In conclusion, the parasite “load” on the invasive gobies appears relatively low in comparison with their native habitats, lending support to the “enemy release hypothesis.”",TRUE,location
R302,Economics,R78423,Regional Competitiveness as an Aspect Promoting Sustainability of Latvia,S354559,R78425,State,L251809,Latvia,"Providing sustainability is one of the main priorities in normative documents in various countries. Factors affecting regional competitiveness are seen as close to those determining sustainability in many studies. The aim of this research was to identify and evaluate main factors of competitiveness for statistical regions of Latvia to promote sustainable development of the country, applying the complex regional competitiveness assessment system developed by the author. The analysis of the Regional Competitiveness Index (RCI) and its sub-indexes showed that each statistical region has both factors promoting and factors hindering competitiveness. Overall, the most competitive is Riga statistical region, while Latgale statistical region takes last place. It is possible to promote equal regional development and sustainability of Latvia by implementing a well-developed regional development strategy and National Action Plan. To develop such strategies, it is necessary to understand the concept of sustainable competitiveness. To evaluate the sustainable competitiveness of Latvia and its regions, it is necessary to further develop the methodology of regional competitiveness evaluation.",TRUE,location
R267,Energy Systems,R110083,Optimal Sizing and Scheduling of Hybrid Energy Systems: The Cases of Morona Santiago and the Galapagos Islands,S502169,R110088,Country of study,L363028,Ecuador,"Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV–wind–battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV–diesel–battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.",TRUE,location
R66,Environmental Health,R109317,Groundwater Arsenic Contamination in the Ganga-Padma-Meghna-Brahmaputra Plain of India and Bangladesh,S498838,R109319,Study location,R109320,India,"magnitude of arsenic groundwater contamination, and its related health effects, in the Ganga-Meghna-Brahmaputra (GMB) plain—an area of 569,749 km2, with a population of over 500 million, which largely comprises the flood plains of 3 major river systems that flow through India and Bangladesh. Design: On the basis of our 17-yr–long study thus far, we report herein the magnitude of groundwater arsenic contamination, its health effects, results of our analyses of biological and food samples, and our investigation into sources of arsenic in the GMB plain. Setting: The GMB plain includes the following states in India: Uttar Pradesh in the upper and middle Ganga plain, Bihar and Jharkhand in the middle Ganga plain, West Bengal in the lower Ganga plain, and Assam in the upper Brahmaputra plain. The country of Bangladesh is located in the Padma-Meghna-Brahmaputra plain. In a preliminary study,1 we identified arsenic in water samples from hand-operated tubewells in the GMB plain. Levels in excess of 50 ppb (the permissible limit for arsenic in drinking water in India and Bangladesh) were found in samples from 51 villages in 3 arsenic-affected districts of Uttar Pradesh, 202 villages in 6 districts in Bihar, 11 villages in 1 district in Jharkhand, 3,500 villages in 9 (of a total of 18) districts in West Bengal, 2,000 villages in 50 (of a total of 64) districts in Bangladesh, and 17 villages in 2 districts in Assam. Study Populations: Because, over time, new regions of arsenic contamination have been found, affecting additional populations, the characteristics of our study subjects have varied widely. We feel that, even after working for 17 yr in the GMB plain, we have had only a glimpse of the full extent of the problem. 
Protocol: Thus far, on the GMB plain, we have analyzed 145,000 tubewell water samples from India and 52,000 from Bangladesh for arsenic contamination. In India, 3,781 villages had arsenic levels above 50 ppb and 5,380 villages had levels exceeding 10 ppb; in Bangladesh, the numbers were 2,000 and 2,450, respectively. We also analyzed 12,954 urine samples, 13,560 hair samples, 13,758 nail samples, and 1,300 skin scale samples from inhabitants of the arsenic-affected villages.",TRUE,location
R145,Environmental Sciences,R186593,"Extraction of built-up area using multi-sensor data—A case study based on Google earth engine in Zhejiang Province, China",S713463,R186597,Location,R186598,China,"ABSTRACT Accurate and up-to-date built-up area mapping is of great importance to the science community, decision-makers, and society. Therefore, satellite-based, built-up area (BUA) extraction at medium resolution with supervised classification has been widely carried out. However, the spectral confusion between BUA and bare land (BL) is the primary hindering factor for accurate BUA mapping over large regions. Here we propose a new methodology for the efficient BUA extraction using multi-sensor data under Google Earth Engine cloud computing platform. The proposed method mainly employs intra-annual satellite imagery for water and vegetation masks, and a random-forest machine learning classifier combined with auxiliary data to discriminate between BUA and BL. First, a vegetation mask and water mask are generated using NDVI (normalized differenced vegetation index) max in vegetation growth periods and the annual water-occurrence frequency. Second, to accurately extract BUA from unmasked pixels, consisting of BUA and BL, random-forest-based classification is conducted using multi-sensor features, including temperature, night-time light, backscattering, topography, optical spectra, and NDVI time-series metrics. This approach is applied in Zhejiang Province, China, and an overall accuracy of 92.5% is obtained, which is 3.4% higher than classification with spectral data only. For large-scale BUA mapping, it is feasible to enhance the performance of BUA mapping with multi-temporal and multi-sensor data, which takes full advantage of datasets available in Google Earth Engine.",TRUE,location
R145,Environmental Sciences,R9221,"The ACCESS coupled model: description, control climate and evaluation",S14637,R9228,Earth System Model,R9230,Ocean,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,location
R145,Environmental Sciences,R23260,The NCEP Climate Forecast System Reanalysis,S72107,R23261,Earth System Model,R23268,Ocean,"The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere–ocean–land surface–sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25° at the equator, extending to a global 0.5° beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...",TRUE,location
R145,Environmental Sciences,R23273,"The ACCESS coupled model: description, control climate and evaluation",S72176,R23274,Earth System Model,R23283,Ocean,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,location
R145,Environmental Sciences,R23312,GFDL's CM2 Global Coupled Climate Models. Part I: Formulation and Simulation Characteristics,S72364,R23313,Earth System Model,R23321,Ocean,"Abstract The formulation and simulation characteristics of two new global coupled climate models developed at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) are described. The models were designed to simulate atmospheric and oceanic climate and variability from the diurnal time scale through multicentury climate change, given our computational constraints. In particular, an important goal was to use the same model for both experimental seasonal to interannual forecasting and the study of multicentury global climate change, and this goal has been achieved. Two versions of the coupled model are described, called CM2.0 and CM2.1. The versions differ primarily in the dynamical core used in the atmospheric component, along with the cloud tuning and some details of the land and ocean components. For both coupled models, the resolution of the land and atmospheric components is 2° latitude × 2.5° longitude; the atmospheric model has 24 vertical levels. The ocean resolution is 1° in latitude and longitude, wi...",TRUE,location
R145,Environmental Sciences,R23326,GFDL’s ESM2 Global Coupled Climate–Carbon Earth System Models. Part I: Physical Formulation and Baseline Simulation Characteristics,S72424,R23327,Earth System Model,R23333,Ocean,"AbstractThe authors describe carbon system formulation and simulation characteristics of two new global coupled carbon–climate Earth System Models (ESM), ESM2M and ESM2G. These models demonstrate good climate fidelity as described in part I of this study while incorporating explicit and consistent carbon dynamics. The two models differ almost exclusively in the physical ocean component; ESM2M uses the Modular Ocean Model version 4.1 with vertical pressure layers, whereas ESM2G uses generalized ocean layer dynamics with a bulk mixed layer and interior isopycnal layers. On land, both ESMs include a revised land model to simulate competitive vegetation distributions and functioning, including carbon cycling among vegetation, soil, and atmosphere. In the ocean, both models include new biogeochemical algorithms including phytoplankton functional group dynamics with flexible stoichiometry. Preindustrial simulations are spun up to give stable, realistic carbon cycle means and variability. Significant differences...",TRUE,location
R145,Environmental Sciences,R23443,"The Norwegian Earth System Model, NorESM1-M – Part 1: Description and basic evaluation of the physical climate",S73061,R23444,Earth System Model,R23452,Ocean,"Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry–aerosol–cloud–radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2° for the atmosphere and land components and 1° for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.",TRUE,location
R145,Environmental Sciences,R23457,Evaluation of the carbon cycle components in the Norwegian Earth System Model (NorESM),S73124,R23458,Earth System Model,R23466,Ocean,"Abstract. The recently developed Norwegian Earth System Model (NorESM) is employed for simulations contributing to the CMIP5 (Coupled Model Intercomparison Project phase 5) experiments and the fifth assessment report of the Intergovernmental Panel on Climate Change (IPCC-AR5). In this manuscript, we focus on evaluating the ocean and land carbon cycle components of the NorESM, based on the preindustrial control and historical simulations. Many of the observed large scale ocean biogeochemical features are reproduced satisfactorily by the NorESM. When compared to the climatological estimates from the World Ocean Atlas (WOA), the model simulated temperature, salinity, oxygen, and phosphate distributions agree reasonably well in both the surface layer and deep water structure. However, the model simulates a relatively strong overturning circulation strength that leads to noticeable model-data bias, especially within the North Atlantic Deep Water (NADW). This strong overturning circulation slightly distorts the structure of the biogeochemical tracers at depth. Advancements in simulating the oceanic mixed layer depth with respect to the previous generation model particularly improve the surface tracer distribution as well as the upper ocean biogeochemical processes, particularly in the Southern Ocean. Consequently, near-surface ocean processes such as biological production and air–sea gas exchange, are in good agreement with climatological observations. The NorESM adopts the same terrestrial model as the Community Earth System Model (CESM1). It reproduces the general pattern of land-vegetation gross primary productivity (GPP) when compared to the observationally based values derived from the FLUXNET network of eddy covariance towers. 
While the model simulates well the vegetation carbon pool, the soil carbon pool is smaller by a factor of three relative to the observational based estimates. The simulated annual mean terrestrial GPP and total respiration are slightly larger than observed, but the difference between the global GPP and respiration is comparable. Model-data bias in GPP is mainly simulated in the tropics (overestimation) and in high latitudes (underestimation). Within the NorESM framework, both the ocean and terrestrial carbon cycle models simulate a steady increase in carbon uptake from the preindustrial period to the present-day. The land carbon uptake is noticeably smaller than the observations, which is attributed to the strong nitrogen limitation formulated by the land model.",TRUE,location
R145,Environmental Sciences,R23471,INGV-CMCC Carbon (ICC): A Carbon Cycle Earth System Model,S73193,R23472,Earth System Model,R23479,Ocean,"This document describes the CMCC Earth System Model (ESM) for the representation of the carbon cycle in the atmosphere, land, and ocean system. The structure of the report follows the software architecture of the full system. It is intended to give a technical description of the numerical models at the base of the ESM, and how they are coupled with each other.",TRUE,location
R145,Environmental Sciences,R74780,"Activity concentration of natural radionuclides in sediments of Bree, Klein-Brak, Bakens, and uMngeni rivers and their associated radiation hazard indices",S343344,R74785,Study location,L247025,South Africa,"A hyper-pure germanium (HPGe) detector was used to measure the activity concentrations in sediment samples of rivers in South Africa, and the associated radiological hazard indices were evaluated. The results of the study indicated that the mean activity concentrations of 226Ra, 232Th and 40K in the sediment samples from the oil-rich areas are 11.13, 7.57, 22.5 ; 5.51, 4.62, 125.02 and 7.60, 5.32, 24.12 for the Bree, Klein-Brak and Bakens Rivers, respectively. In contrast, the control site (UMngeni River) values were 4.13, 3.28, and 13.04 for 226Ra, 232Th, and 40K. The average excess lifetime cancer risks are 0.394 × , 0.393 × , 0.277 × and 0.163 × for sediment samples at Bree, Klein-Brak, Bakens, and uMngeni rivers. All obtained values indicated a significant difference between the natural radionuclide concentrations in the samples from the rivers in oil-rich areas compared to those of the non-oil-rich area. The values reported for the activity concentrations and radiological hazard indices were below the average world values; hence, the risk of radiation health hazard was negligible in all study areas.",TRUE,location
R33,Epidemiology,R109952,Epidemiology of Shiga toxin-producingEscherichia coliO157 in very young calves in the North Island of New Zealand,S501480,R109954,Publication location,L362588,New Zealand,"Abstract AIMS: To study the occurrence and spatial distribution of Shiga toxin-producing Escherichia coli (STEC) O157 in calves less than 1-week-old (bobby calves) born on dairy farms in the North Island of New Zealand, and to determine the association of concentration of IgG in serum, carcass weight, gender and breed with occurrence of E. coli O157 in these calves. METHODS: In total, 309 recto-anal mucosal swabs and blood samples were collected from bobby calves at two slaughter plants in the North Island of New Zealand. The address of the farm, tag number, carcass weight, gender and breed of the sampled animals were recorded. Swabs were tested for the presence of E. coli O157 using real time PCR (RT-PCR). All the farms were mapped geographically to determine the spatial distribution of farms positive for E. coli O157. K function analysis was used to test for clustering of these farms. Multiplex PCR was used for the detection of Shiga toxin 1 (stx1), Shiga toxin 2 (stx2), E. coli attaching and effacing (eae) and Enterohaemolysin (ehxA) genes in E. coli O157 isolates. Genotypes of isolates from this study (n = 10) along with human (n = 18) and bovine isolates (n = 4) obtained elsewhere were determined using bacteriophage insertion typing for stx encoding. RESULTS: Of the 309 samples, 55 (17.7%) were positive for E. coli O157 by RT-PCR and originated from 47/197 (23.8%) farms. E. coli O157 was isolated from 10 samples of which seven isolates were positive for stx2, eae and ehxA genes and the other three isolates were positive for stx1, stx2, eae and ehxA. Bacteriophage insertion typing for stx encoding revealed that 12/18 (67%) human and 13/14 (93%) bovine isolates belonged to genotypes 1 and 3. K function analysis showed some clustering of farms positive for E. coli O157. 
There was no association between concentration of IgG in serum, carcass weight and gender of the calves, and samples positive for E. coli O157, assessed using linear mixed-effects models. However, Jersey calves were less likely to be positive for E. coli O157 by RT-PCR than Friesian calves (p = 0.055). CONCLUSIONS: Healthy bobby calves are an asymptomatic reservoir of E. coli O157 in New Zealand and may represent an important source of infection for humans. Carriage was not associated with concentration of IgG in serum, carcass weight or gender.",TRUE,location
R33,Epidemiology,R142094,"A National Iranian Cochlear Implant Registry (ICIR): cochlear implanted recipient observational study",S570898,R142096,Location ,L400755,Iran,"BACKGROUND AND OBJECTIVE Patients who receive cochlear implants (CIs) constitute a significant population in Iran. This population needs regular monitoring of long-term outcomes, educational placement and quality of life. Currently, there is no national or regional registry on the long term outcomes of CI users in Iran. The present study aims to introduce the design and implementation of a national patient-outcomes registry on CI recipients for Iran. This Iranian CI registry (ICIR) provides an integrated framework for data collection and sharing, scientific communication and collaboration in CI research. METHODS The national ICIR is a prospective patient-outcomes registry for patients who are implanted in one of the Iranian centers. The registry is based on an integrated database that utilizes a secure web-based platform to collect response data from clinicians and patient's proxy via electronic case report forms (e-CRFs) at predefined intervals. The CI candidates are evaluated with a set of standardized and non-standardized questionnaires prior to initial device activation (as baseline variables) and at three-monthly follow-up intervals up to 24 months and annually thereafter. RESULTS The software application of the ICIR registry is designed in a user-friendly graphical interface with different entry fields. The collected data are categorized into four subsets including personal information, clinical data, surgery data and commission results. The main parameters include audiometric performance of the patient, device use, patient comorbidities, quality of life and health-related utilities, across different types of CI devices from different manufacturers. CONCLUSION The ICIR database could be used by the increasingly growing network of CI centers in Iran. 
Clinicians, academic and industrial researchers as well as healthcare policy makers could use this database to develop more effective CI devices and better management of the recipients as well as to develop national guidelines.",TRUE,location
R356,"Family, Life Course, and Society",R75942,Parental well-being in times of Covid-19 in Germany,S351951,R77082,has location,R29991,Germany,"Abstract We examine the effects of Covid-19 and related restrictions on individuals with dependent children in Germany. We specifically focus on the role of day care center and school closures, which may be regarded as a “disruptive exogenous shock” to family life. We make use of a novel representative survey of parental well-being collected in May and June 2020 in Germany, when schools and day care centers were closed but while other measures had been relaxed and new infections were low. In our descriptive analysis, we compare well-being during this period with a pre-crisis period for different groups. In a difference-in-differences design, we compare the change for individuals with children to the change for individuals without children, accounting for unrelated trends as well as potential survey mode and context effects. We find that the crisis lowered the relative well-being of individuals with children, especially for individuals with young children, for women, and for persons with lower secondary schooling qualifications. Our results suggest that public policy measures taken to contain Covid-19 can have large effects on family well-being, with implications for child development and parental labor market outcomes.",TRUE,location
R356,"Family, Life Course, and Society",R76554,The COVID-19 pandemic and subjective well-being: longitudinal evidence on satisfaction with work and family,S351969,R77087,has location,R68481,Germany,"ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies – remote work, short-time work and closure of schools and childcare – have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected.",TRUE,location
R317,Geographic Information Sciences,R111061,Reversed urbanism: Inferring urban performance through behavioral patterns in temporal telecom data,S505842,R111064,Study Location ,L365153,Andorra,"Abstract A fundamental aspect of well performing cities is successful public spaces. For centuries, understanding these places has been limited to sporadic observations and laborious data collection. This study proposes a novel methodology to analyze citywide, discrete urban spaces using highly accurate anonymized telecom data and machine learning algorithms. Through superposition of human dynamics and urban features, this work aims to expose clear correlations between the design of the city and the behavioral patterns of its users. Geolocated telecom data, obtained for the state of Andorra, were initially analyzed to identify “stay-points”—events in which cellular devices remain within a certain roaming distance for a given length of time. These stay-points were then further analyzed to find clusters of activity characterized in terms of their size, persistence, and diversity. Multivariate linear regression models were used to identify associations between the formation of these clusters and various urban features such as urban morphology or land-use within a 25–50 meters resolution. Some of the urban features that were found to be highly related to the creation of large, diverse and long-lasting clusters were the presence of service and entertainment amenities, natural water features, and the betweenness centrality of the road network; others, such as educational and park amenities were shown to have a negative impact. Ultimately, this study suggests a “reversed urbanism” methodology: an evidence-based approach to urban design, planning, and decision making, in which human behavioral patterns are instilled as a foundational design tool for inferring the success rates of highly performative urban places.",TRUE,location
R317,Geographic Information Sciences,R110803,A new insight into land use classification based on aggregated mobile phone data,S504877,R110805,Study Location ,L364652,Singapore,"Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.",TRUE,location
R146,Geology,R109222,"Targeting key alteration minerals in epithermal deposits in Patagonia, Argentina, using ASTER imagery and principal component analysis",S498349,R109223,Study Area,L360698,Argentina,"Principal component analysis (PCA) is an image processing technique that has been commonly applied to Landsat Thematic Mapper (TM) data to locate hydrothermal alteration zones related to metallic deposits. With the advent of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a 14-band multispectral sensor operating onboard the Earth Observation System (EOS)-Terra satellite, the availability of spectral information in the shortwave infrared (SWIR) portion of the electromagnetic spectrum has been greatly increased. This allows detailed spectral characterization of surface targets, particularly of those belonging to the groups of minerals with diagnostic spectral features in this wavelength range, including phyllosilicates (‘clay’ minerals), sulphates and carbonates, among others. In this study, PCA was applied to ASTER bands covering the SWIR with the objective of mapping the occurrence of mineral endmembers related to an epithermal gold prospect in Patagonia, Argentina. The results illustrate ASTER's ability to provide information on alteration minerals which are valuable for mineral exploration activities and support the role of PCA as a very effective and robust image processing technique for that purpose.",TRUE,location
R146,Geology,R109225,"Mapping mineralogical alteration using principal-component analysis and matched filter processing in the Takab area, north-west Iran, from ASTER data",S498366,R109226,Study Area,L360712,Iran,"The Takab area, located in north‐west Iran, is an important gold mineralized region with a long history of gold mining. The gold is associated with toxic metals/metalloids. In this study, Advanced Space Borne Thermal Emission and Reflection Radiometer data are evaluated for mapping gold and base‐metal mineralization through alteration mapping. Two different methods are used for argillic and silicic alteration mapping: selective principal‐component analysis and matched filter processing (MF). Running a selective principal‐component analysis using the main spectral characteristics of key alteration minerals enhanced the altered areas in PC2. MF using spectral library and laboratory spectra of the study area samples gave similar results. However, MF, using the image reference spectra from principal component (PC) images, produced the best results and indicated the advantage of using image spectra rather than library spectra in spectral mapping techniques. It seems that argillic alteration is more effective than silicic alteration for exploration purposes. It is suggested that alteration mapping can also be used to delineate areas contaminated by potentially toxic metals.",TRUE,location
R93,Human and Clinical Nutrition,R182396,The influence of crop production and socioeconomic factors on seasonal household dietary diversity in Burkina Faso,S705526,R182397,Location,L475870,Burkina Faso,"Households in low-income settings are vulnerable to seasonal changes in dietary diversity because of fluctuations in food availability and access. We assessed seasonal differences in household dietary diversity in Burkina Faso, and determined the extent to which household socioeconomic status and crop production diversity modify changes in dietary diversity across seasons, using data from the nationally representative 2014 Burkina Faso Continuous Multisectoral Survey (EMC). A household dietary diversity score based on nine food groups was created from household food consumption data collected during four rounds of the 2014 EMC. Plot-level crop production data, and data on household assets and education were used to create variables on crop diversity and household socioeconomic status, respectively. Analyses included data for 10,790 households for which food consumption data were available for at least one round. Accounting for repeated measurements and controlling for the complex survey design and confounding covariates using a weighted multi-level model, household dietary diversity was significantly higher during both lean seasons periods, and higher still during the harvest season as compared to the post-harvest season (mean: post-harvest: 4.76 (SE 0.04); beginning of lean: 5.13 (SE 0.05); end of lean: 5.21 (SE 0.05); harvest: 5.72 (SE 0.04)), but was not different between the beginning and the end of lean season. Seasonal differences in household dietary diversity were greater among households with higher food expenditures, greater crop production, and greater monetary value of crops sale (P<0.05). 
Seasonal changes in household dietary diversity in Burkina Faso may reflect nutritional differences among agricultural households, and may be modified both by households’ socioeconomic status and agricultural characteristics.",TRUE,location
R93,Human and Clinical Nutrition,R184000,Agricultural Food Production Diversity and Dietary Diversity among Female Small Holder Farmers in a Region of the Ecuadorian Andes Experiencing Nutrition Transition,S707018,R184004,Location,L478012,Ecuador,"Some rural areas of Ecuador, including the Imbabura Province of the Andes Highlands, are experiencing a double burden of malnutrition where micronutrient deficiencies persist at the same time obesity is increasing as many traditional home-grown foods are being replaced with more commercially prepared convenience foods. Thus, the relationships among agricultural food production diversity (FPD), dietary diversity (DD), and household food insecurity (HFI) of the rural small holder farmers need further study. Therefore, we examined these associations in small holder farmers residing in this Province in the Andes Highlands (elevation > 2500 m). Non-pregnant maternal home managers (n = 558, x age = 44.1, SD = 16.5 y) were interviewed regarding the number of different agricultural food crops cultivated and domestic animals raised in their family farm plots. DD was determined using the Minimum Dietary Diversity for Women Score (MDD-W) based on the number of 10 different food groups consumed, and household food insecurity (HFI) was determined using the 8-item Household Food Insecurity Experience Scale. The women reported consuming an average of 53% of their total food from what they cultivated or raised. Women with higher DD [MMD-W score ≥ 5 food groups (79% of total sample)] were on farms that cultivated a greater variety of crops (x = 8.7 vs. 6.7), raised more animals (x = 17.9 vs. 12.7, p < 0.05), and reported lower HFI and significantly higher intakes of energy, protein, iron, zinc, and vitamin A (all p < 0.05). Multiple regression analyses demonstrated that FPD was only modestly related to DD, which together with years of education, per capita family income, and HFI accounted for 26% of DD variance. 
In rural areas of the Imbabura Province, small holder farmers still rely heavily on consumption of self-cultivated foods, but greater diversity of crops grown in family farm plots is only weakly associated with greater DD and lower HFI among the female caretakers.",TRUE,location
R93,Human and Clinical Nutrition,R182127,"Crop diversity is associated with higher child diet diversity in Ethiopia, particularly among low-income households, but not in Vietnam",S704492,R182129,Location,L475311,Ethiopia,"Abstract Objectives: To examine associations of household crop diversity with school-aged child dietary diversity in Vietnam and Ethiopia and mechanisms underlying these associations. Design: We created a child diet diversity score (DDS) using data on seven food groups consumed in the last 24 h. Generalised estimating equations were used to model associations of household-level crop diversity, measured as a count of crop species richness (CSR) and of plant crop nutritional functional richness (CNFR), with DDS. We examined effect modification by household wealth and subsistence orientation, and mediation by the farm’s market orientation. Setting: Two survey years of longitudinal data from the Young Lives cohort. Participants: Children (aged 5 years in 2006 and 8 years in 2009) from rural farming households in Ethiopia (n 1012) and Vietnam (n 1083). Results: There was a small, positive association between household CNFR and DDS in Ethiopia (CNFR–DDS, β = 0·13; (95 % CI 0·07, 0·19)), but not in Vietnam. Associations of crop diversity and child diet diversity were strongest among poor households in Ethiopia and among subsistence-oriented households in Vietnam. Agricultural earnings positively mediated the crop diversity–diet diversity association in Ethiopia. Discussion: Children from households that are poorer and those that rely more on their own agricultural production for food may benefit most from increased crop diversity.",TRUE,location
R93,Human and Clinical Nutrition,R182376,Relationship between agricultural biodiversity and dietary diversity of children aged 6-36 months in rural areas of Northern Ghana,S705479,R182380,Location,L475843,Ghana,"ABSTRACT In this study, we investigated the relationship between agricultural biodiversity and dietary diversity of children and whether factors such as economic access may affect this relationship. This paper is based on data collected in a baseline cross-sectional survey in November 2013.The study population comprising 1200 mother-child pairs was selected using a two-stage cluster sampling. Dietary diversity was defined as the number of food groups consumed 24 h prior to the assessment. The number of crop and livestock species produced on a farm was used as the measure of production diversity. Hierarchical regression analysis was used to identify predictors and test for interactions. Whereas the average production diversity score was 4.7 ± 1.6, only 42.4% of households consumed at least four food groups out of seven over the preceding 24-h recall period. Agricultural biodiversity (i.e. variety of animals kept and food groups produced) associated positively with dietary diversity of children aged 6–36 months but the relationship was moderated by household socioeconomic status. The interaction term was also statistically significant [β = −0.08 (95% CI: −0.05, −0.01, p = 0.001)]. Spearman correlation (rho) analysis showed that agricultural biodiversity was positively associated with individual dietary diversity of the child more among children of low socioeconomic status in rural households compared to children of high socioeconomic status (r = 0.93, p < 0.001 versus r = 0.08, p = 0.007). Socioeconomic status of the household also partially mediated the link between agricultural biodiversity and dietary diversity of a child’s diet. 
The effect of increased agricultural biodiversity on dietary diversity was significantly higher in households of lower socioeconomic status. Therefore, improvement of agricultural biodiversity could be one of the best approaches for ensuring diverse diets especially for households of lower socioeconomic status in rural areas of Northern Ghana.",TRUE,location
R93,Human and Clinical Nutrition,R182137,Understanding the Linkages between Crop Diversity and Household Dietary Diversity in the Semi-Arid Regions of India,S704521,R182139,Location,L475331,India,"Agriculture is fundamental to achieving nutrition goals; it provides the food, energy, and nutrients essential for human health and well-being. This paper has examined crop diversity and dietary diversity in six villages using the ICRISAT Village Level Studies (VLS) data from the Telangana and Maharashtra states of India. The study has used the data of cultivating households for constructing the crop diversity index while dietary diversity data is from the special purpose nutritional surveys conducted by ICRISAT in the six villages. The study has revealed that the cropping pattern is not uniform across the six study villages with dominance of mono cropping in Telangana villages and of mixed cropping in Maharashtra villages. The analysis has indicated a positive and significant correlation between crop diversity and household dietary diversity at the bivariate level. In multiple linear regression model, controlling for the other covariates, crop diversity has not shown a significant association with household dietary diversity. However, other covariates have shown strong association with dietary diversity. The regression results have revealed that households which cultivated minimum one food crop in a single cropping year have a significant and positive relationship with dietary diversity. From the study it can be inferred that crop diversity alone does not affect the household dietary diversity in the semi-arid tropics. Enhancing the evidence base and future research, especially in the fragile environment of semi-arid tropics, is highly recommended.",TRUE,location
R93,Human and Clinical Nutrition,R182153,"Agricultural Diversity, Dietary Diversity and Nutritional Intake: An Evidence on Inter-linkages from Village Level Studies in Eastern India",S704574,R182155,Location,L475365,India,"The linkage between agriculture and nutrition is complex and often debated in the policy discourse in India. The enigma of fastest growing economy and yet the largest home of under- and mal-nourished population takes away the sheen from the glory of economic achievements of India. In this context, the study has examined the food consumption patterns, assessed the relationship between agricultural production and dietary diversity, and analysed the impact of dietary diversity on nutritional intake. The study is based on a household level panel data from 12 villages of Bihar, Jharkhand and Odisha in eastern India. The study has shown that agricultural production diversity is a major determinant of dietary diversity which in turn has a strong effect on calorie and protein intake. The study has suggested that efforts to promote agricultural diversification will be helpful to enhance food and nutrition security in the country. Agricultural programmes and policies oriented towards reducing under-nutrition should promote diversity in agricultural production rather than emphasizing on increasing production through focusing on selected staple crops as has been observed in several states of India. The huge fertilizer subsidies and government procurement schemes limited to a few crops provide little incentives for farmers to diversity their production portfolio.",TRUE,location
R93,Human and Clinical Nutrition,R184009,"Market Access, Production Diversity, and Diet Diversity: Evidence From India",S707047,R184011,Location,L478033,India,"Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households is based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using a Poisson regression specifications and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. 
Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women’s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households’ market integration (for purchase of non-staples) and increasing women’s awareness about nutrition. These are more impactful than raising production diversity. ",TRUE,location
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704509,R182136,Location,L475323,Malawi,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). 
Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,location
R93,Human and Clinical Nutrition,R182148,"Farm production, market access and dietary diversity in Malawi",S704563,R182150,Location,L475357,Malawi,"Abstract Objective The association between farm production diversity and dietary diversity in rural smallholder households was recently analysed. Most existing studies build on household-level dietary diversity indicators calculated from 7d food consumption recalls. Herein, this association is revisited with individual-level 24 h recall data. The robustness of the results is tested by comparing household- and individual-level estimates. The role of other factors that may influence dietary diversity, such as market access and agricultural technology, is also analysed. Design A survey of smallholder farm households was carried out in Malawi in 2014. Dietary diversity scores are calculated from 24 h recall data. Production diversity scores are calculated from farm production data covering a period of 12 months. Individual- and household-level regression models are developed and estimated. Setting Data were collected in sixteen districts of central and southern Malawi. Subjects Smallholder farm households (n 408), young children (n 519) and mothers (n 408). Results Farm production diversity is positively associated with dietary diversity. However, the estimated effects are small. Access to markets for buying food and selling farm produce and use of chemical fertilizers are shown to be more important for dietary diversity than diverse farm production. Results with household- and individual-level dietary data are very similar. Conclusions Further increasing production diversity may not be the most effective strategy to improve diets in smallholder farm households. Improving access to markets, productivity-enhancing inputs and technologies seems to be more promising.",TRUE,location
R93,Human and Clinical Nutrition,R182161,"Agroecological practices of legume residue management and crop diversification for improved smallholder food security, dietary diversity and sustainable land use in Malawi",S704614,R182165,Location,L475384,Malawi,"ABSTRACT The role of agroecological practices in addressing food security has had limited investigation, particularly in Sub-Saharan Africa. Quasi-experimental methods were used to assess the role of agroecological practices in reducing food insecurity in smallholder households in Malawi. Two key practices – crop diversification and the incorporation of organic matter into soil – were examined. The quasi-experimental study of an agroecological intervention included survey data from 303 households and in-depth interviews with 33 households. The survey sampled 210 intervention households participating in the agroecological intervention, and 93 control households in neighboring villages. Regression analysis of food security indicators found that both agroecological practices significantly predicted higher food security and dietary diversity for smallholder households: the one-third of farming households who incorporated legume residue soon after harvest were almost three times more likely to be food secure than those who had not incorporated crop residue. Qualitative semi-structured interviews with 33 households identified several pathways through which crop diversification and crop residue incorporation contributed to household food security: direct consumption, agricultural income, and changes in underlying production relations. These findings provide evidence of agroecology’s potential to address food insecurity while supporting sustainable food systems.",TRUE,location
R93,Human and Clinical Nutrition,R184012,"If They Grow It, Will They Eat and Grow? Evidence from Zambia on Agricultural Diversity and Child Undernutrition",S707068,R184014,Location,L478048,Zambia,"Abstract In this article we address a gap in our understanding of how household agricultural production diversity affects the diets and nutrition of young children living in rural farming communities in sub-Saharan Africa. The specific objectives of this article are to assess: (1) the association between household agricultural production diversity and child dietary diversity; and (2) the association between household agricultural production diversity and child nutritional status. We use household survey data collected from 3,040 households as part of the Realigning Agriculture for Improved Nutrition (RAIN) intervention in Zambia. The data indicate low agricultural diversity, low dietary diversity and high levels of chronic malnutrition overall in this area. We find a strong positive association between production diversity and dietary diversity among younger children aged 6–23 months, and significant positive associations between production diversity and height for age Z-scores and stunting among older children aged 24–59 months.",TRUE,location
R93,Human and Clinical Nutrition,R182166,"Nutrition education, farm production diversity, and commercialization on household and individual dietary diversity in Zimbabwe",S704634,R182168,Location,L475400,Zimbabwe,"Background Nutrition education is crucial for improved nutrition outcomes. However, there are no studies to the best of our knowledge that have jointly analysed the roles of nutrition education, farm production diversity and commercialization on household, women and child dietary diversity. Objective This article jointly analyses the role of nutrition education, farm production diversity and commercialization on household, women and children dietary diversity in Zimbabwe. In addition, we analyze separately the roles of crop and livestock diversity and individual agricultural practices on dietary diversity. Design Data were collected from 2,815 households randomly selected in eight districts. Negative binomial regression was used for model estimations. Results Nutrition education increased household, women, and child dietary diversity by 3, 9 and 24%, respectively. Farm production diversity had a strong and positive association with household and women dietary diversity. Crop diversification led to a 4 and 5% increase in household and women dietary diversity, respectively. Furthermore, livestock diversification and market participation were positively associated with household, women, and children dietary diversity. The cultivation of pulses and fruits increased household, women, and children dietary diversity. Vegetable production and goat rearing increased household and women dietary diversity. Conclusions Nutrition education and improving access to markets are promising strategies to improve dietary diversity at both household and individual level. 
Results demonstrate the value of promoting nutrition education; farm production diversity; small livestock; pulses, vegetables and fruits; crop-livestock integration; and market access for improved nutrition.",TRUE,location
R351,Industrial and Organizational Psychology,R76567,Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic.,S349970,R76571,has location,R68481,Germany,"The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. 
Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved).",TRUE,location
R358,Inequality and Stratification,R75946,Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland,S351965,R77086,has location,R44048,Switzerland,"ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.",TRUE,location
R278,Information Science,R46588,Malay Named Entity Recognition Based on Rule-Based Approach,S142480,R46589,Language/domain,L87604,Malay,"A Named-Entity Recognition (NER) is part of the process in Text Mining and it is a very useful process for information extraction. This NER tool can be used to assist user in identifying and detecting entities such as person, location or organization. However, different languages may have different morphologies and thus require different NER processes. For instance, an English NER process cannot be applied in processing Malay articles due to the different morphology used in different languages. This paper proposes a Rule-Based Named-Entity Recognition algorithm for Malay articles. The proposed Malay NER is designed based on a Malay part-of-speech (POS) tagging features and contextual features that had been implemented to handle Malay articles. Based on the POS results, proper names will be identified or detected as the possible candidates for annotation. Besides that, there are some symbols and conjunctions that will also be considered in the process of identifying named-entity for Malay articles. Several manually constructed dictionaries will be used to handle three named-entities; Person, Location and Organizations. The experimental results show a reasonable output of 89.47% for the F-Measure value. The proposed Malay NER algorithm can be further improved by having more complete dictionaries and refined rules to be used in order to identify the correct Malay entities system.",TRUE,location
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559076,R140072,has location,R140079,New York,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,location
R12,Life Sciences,R78061,Estimative of real number of infections by COVID-19 in Brazil and possible scenarios,S353645,R78063,has location,L251297,Brazil,"Abstract This paper attempts to provide methods to estimate the real scenario of the novel coronavirus pandemic crisis on Brazil and the states of Sao Paulo, Pernambuco, Espirito Santo, Amazonas and Distrito Federal. By the use of a SEIRD mathematical model with age division, we predict the infection and death curve, stating the peak date for Brazil and these states. We also carry out a prediction for the ICU demand on these states for a visualization of the size of a possible collapse on the local health system. By the end, we establish some future scenarios including the stopping of social isolation and the introduction of vaccines and efficient medicine against the virus.",TRUE,location
R112125,Machine Learning,R178346,Short-Term Traffic Prediction Based on DeepCluster in Large-Scale Road Networks,S699529,R178352,Location,L470827,Beijing,"Short-term traffic prediction (STTP) is one of the most critical capabilities in Intelligent Transportation Systems (ITS), which can be used to support driving decisions, alleviate traffic congestion and improve transportation efficiency. However, STTP of large-scale road networks remains challenging due to the difficulties of effectively modeling the diverse traffic patterns by high-dimensional time series. Therefore, this paper proposes a framework that involves a deep clustering method for STTP in large-scale road networks. The deep clustering method is employed to supervise the representation learning in a visualized way from the large unlabeled dataset. More specifically, to fully exploit the traffic periodicity, the raw series is first divided into a number of sub-series for triplet generation. The convolutional neural networks (CNNs) with triplet loss are utilized to extract the features of shape by transforming the series into visual images. The shape-based representations are then used to cluster road segments into groups. Thereafter, a model sharing strategy is further proposed to build recurrent NNs-based predictions through group-based models (GBMs). GBM is built for a type of traffic patterns, instead of one road segment exclusively or all road segments uniformly. Our framework can not only significantly reduce the number of prediction models, but also improve their generalization by virtue of being trained on more diverse examples. Furthermore, the proposed framework over a selected road network in Beijing is evaluated. Experiment results show that the deep clustering method can effectively cluster the road segments and GBM can achieve comparable prediction accuracy against the IBM with less number of prediction models.",TRUE,location
R58,Neuroscience and Neurobiology,R75661,Prevalence of epilepsy in Croatia: a population-based survey,S346207,R75663,Country of study,R75535,Croatia,Objectives – To investigate the prevalence of active epilepsy in Croatia.,TRUE,location
R58,Neuroscience and Neurobiology,R75482,Prevalence and Incidence of Epilepsy in Italy Based on a Nationwide Database,S346044,R75484,Country of study,R29994,Italy,"Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. i 2014 S. Karger AG, Basel",TRUE,location
R96,Nutritional Epidemiology,R75682,Association between dietary patterns and overweight risk among Malaysian adults: evidence from nationally representative surveys,S346302,R75684,Study location,L248125,Malaysia,"Abstract Objective: To investigate the association between dietary patterns (DP) and overweight risk in the Malaysian Adult Nutrition Surveys (MANS) of 2003 and 2014. Design: DP were derived from the MANS FFQ using principal component analysis. The cross-sectional association of the derived DP with prevalence of overweight was analysed. Setting: Malaysia. Participants: Nationally representative sample of Malaysian adults from MANS (2003, n 6928; 2014, n 3000). Results: Three major DP were identified for both years. These were ‘Traditional’ (fish, eggs, local cakes), ‘Western’ (fast foods, meat, carbonated beverages) and ‘Mixed’ (ready-to-eat cereals, bread, vegetables). A fourth DP was generated in 2003, ‘Flatbread & Beverages’ (flatbread, creamer, malted beverages), and 2014, ‘Noodles & Meat’ (noodles, meat, eggs). These DP accounted for 25·6 and 26·6 % of DP variations in 2003 and 2014, respectively. For both years, Traditional DP was significantly associated with rural households, lower income, men and Malay ethnicity, while Western DP was associated with younger age and higher income. Mixed DP was positively associated with women and higher income. None of the DP showed positive association with overweight risk, except for reduced adjusted odds of overweight with adherence to Traditional DP in 2003. Conclusions: Overweight could not be attributed to adherence to a single dietary pattern among Malaysian adults. This may be due to the constantly morphing dietary landscape in Malaysia, especially in urban areas, given the ease of availability and relative affordability of multi-ethnic and international foods. Timely surveys are recommended to monitor implications of these changes.",TRUE,location
R172,Oceanography,R108803,Dinitrogen fixation rates in the Bay of Bengal during summer monsoon,S495684,R108807,Domain,L358890,Ocean,"Abstract Biological dinitrogen (N2) fixation exerts an important control on oceanic primary production by providing bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N2 fixation is dominant in nutrient poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus, could favour N2 fixation. We commenced the first N2 fixation study in the photic zone of the Bay of Bengal using 15N2 gas tracer incubation experiment during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N2 fixation rates varied from 4 to 75 μmol N m−2 d−1. The contribution of N2 fixation to primary production was negligible (<1%). However, the upper bound of observed N2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean.",TRUE,location
R138056,Planetary Sciences,R138505,"Far infrared and Raman spectroscopic investigations of lunar materials from Apollo 11, 12, 14, and 15",S549962,R138506,Rock type,L386983,Basalt,"We have studied the elastic and inelastic light scattering of twelve lunar surface rocks and eleven lunar soil samples from Apollo 11, 12, 14, and 15, over the range 20-2000 cm−1. The phonons occurring in this frequency region have been associated with the different chemical constituents and are used to determine the mineralogical abundances by comparison with the spectra of a wide variety of terrestrial minerals and rocks. Kramers-Kronig analyses of the infrared reflectance spectra provided the dielectric dispersion (ε′ and ε″) and the optical constants (n and k). The dielectric constants at ≈10^11 Hz have been obtained for each sample and are compared with the values reported in the 10^10 Hz range. The emissivity peak at the Christiansen frequencies for all the lunar samples lie within the range 1195-1250 cm−1; such values are characteristic of terrestrial basalts. The Raman light scattering spectra provided investigation of small individual grains or inclusions and gave unambiguous interpretation of some of the characteristic mineralogical components.",TRUE,location
R102,Plant Pathology,R108704,Anastomosis Groups and Pathogenicity of Rhizoctonia solani and Binucleate Rhizoctonia from Potato in South Africa,S495167,R108706,Study location,L358637,South Africa,"A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.",TRUE,location
R245,Power and Energy,R137145,Application of Interval Type-2 Fuzzy Logic System in Short Term Load Forecasting on Special Days,S542009,R137146,Location ,R27644,Indonesia,"This paper presents the application of Interval Type-2 fuzzy logic systems (Interval Type-2 FLS) in short term load forecasting (STLF) on special days, study case in Bali Indonesia. Type-2 FLS is characterized by a concept called footprint of uncertainty (FOU) that provides the extra mathematical dimension that equips Type-2 FLS with the potential to outperform their Type-1 counterparts. While a Type-2 FLS has the capability to model more complex relationships, the output of a Type-2 fuzzy inference engine needs to be type-reduced. Type reduction is used by applying the Karnik-Mendel (KM) iterative algorithm. This type reduction maps the output of Type-2 FSs into Type-1 FSs then the defuzzification with centroid method converts that Type-1 reduced FSs into a number. The proposed method was tested with the actual load data of special days using 4 days peak load before special days and at the time of special day for the year 2002-2006. There are 20 items of special days in Bali that are used to be forecasted in the year 2005 and 2006 respectively. The test results showed an accurate forecasting with the mean average percentage error of 1.0335% and 1.5683% in the year 2005 and 2006 respectively.",TRUE,location
R339,Public Administration,R110181,Constitution Making and Democratization in Kenya (2000–2005),S502330,R110183,Country,L363123,Kenya,"The article analyses the most intense phase of a process of constitutional review in Kenya that has been ongoing since about 1990: that stage began in 2000 and is, perhaps, not yet completed, there being as yet no new constitution. The article describes the reasons for the review and the process. It offers an account of the role of the media and various sectors of society including women and previously marginalized ethnic groups, in shaping the agenda, the process and the outcome. It argues that although civil society, with much popular support, was prominent in pushing for change, when an official process of review began, the vested interests of government and even of those trusted with the review frustrated a quick outcome, and especially any outcome that meant curtailing the powers of government. Even high levels of popular involvement were unable to guarantee a new constitution against manipulation by government and other vested interests involved in review, including the law and the courts. However, a new constitution may yet emerge, and in any case the process may prove to have made an ineradicable impact on the shape of the nation's politics and the consciousness of the ordinary citizen.",TRUE,location
R31,Public Health,R110453,Social isolation and the speed of covid-19 cases: measures to prevent transmission,S503258,R110460,Study location,L363634,Brazil,"OBJECTIVE To evaluate the social isolation index and the speed of new cases of Covid-19 in Brazil. METHODS Quantitative ecological, documentary, descriptive study using secondary data, comparing the period from March 14 to May 1, 2020, carried out with the 27 Brazilian federative units, characterizing the study population. The data were analyzed through descriptive statistics using the Statistical Package for the Social Sciences-SPSS® software, evaluating the correlation between the social isolation index and the number of new cases of Covid-19, using Pearson's correlation coefficient. RESULTS The increase in Covid-19 cases is exponential. There was a significant, negative correlation regarding the social isolation index and the speed of the number of new cases by Pearson's coefficient, which means that as the first one increases, the second one decreases. CONCLUSION Social isolation measures have significant effects on the rate of coronavirus infection in the population.",TRUE,location
R31,Public Health,R110448,The effect of social distance measures on COVID-19 epidemics in Europe: an interrupted time series analysis,S503236,R110452,Study location,L363623,Europe,"Abstract Following the introduction of unprecedented “stay-at-home” national policies, the COVID-19 pandemic recently started declining in Europe. Our research aims were to characterize the changepoint in the flow of the COVID-19 epidemic in each European country and to evaluate the association of the level of social distancing with the observed decline in the national epidemics. Interrupted time series analyses were conducted in 28 European countries. Social distance index was calculated based on Google Community Mobility Reports. Changepoints were estimated by threshold regression, national findings were analyzed by Poisson regression, and the effect of social distancing in mixed effects Poisson regression model. Our findings identified the most probable changepoints in 28 European countries. Before changepoint, incidence of new COVID-19 cases grew by 24% per day on average. From the changepoint, this growth rate was reduced to 0.9%, 0.3% increase, and to 0.7% and 1.7% decrease by increasing social distancing quartiles. The beneficial effect of higher social distance quartiles (i.e., turning the increase into decline) was statistically significant for the fourth quartile. Notably, many countries in lower quartiles also achieved a flat epidemic curve. In these countries, other plausible COVID-19 containment measures could contribute to controlling the first wave of the disease. The association of social distance quartiles with viral spread could also be hindered by local bottlenecks in infection control. Our results allow for moderate optimism related to the gradual lifting of social distance measures in the general population, and call for specific attention to the protection of focal micro-societies enriching high-risk elderly subjects, including nursing homes and chronic care facilities.",TRUE,location
R11,Science,R34138,The impact of the introduction of transgenic crops in Argentinean agriculture,S118778,R34176,[7]Region,L71750,Argentina,"Since the early 1990s, Argentinean grain production underwent a dramatic increase in grains production (from 26 million tons in 1988/89 to over 75 million tons in 2002/2003). Several factors contributed to this ""revolution,"" but probably one of the most important was the introduction of new genetic modification (GM) technologies, specifically herbicide-tolerant soybeans. This article analyses this process, reporting on the economic benefits accruing to producers and other participating actors as well as some of the environmental and social impacts that could be associated with the introduction of the new technologies. In doing so, it lends attention to the synergies between GM soybeans and reduced-tillage technologies and also explores some of the institutional factors that shed light on the success of this case, including aspects such as the early availability of a reliable biosafety mechanism and a special intellectual property rights (IPR) situation. In its concluding comments, this article also posts a number of questions about the replicability of the experience and some pending policy issues regarding the future exploitation of GM technologies in Argentina.",TRUE,location
R11,Science,R34163,"Genetically modified crops, corporate pricing strategies, and farmers' adoption: the case of Bt cotton in Argentina",S118699,R34165,[7]Region,L71690,Argentina,"This article analyzes adoption and impacts of Bt cotton in Argentina against the background of monopoly pricing. Based on survey data, it is shown that the technology significantly reduces insecticide applications and increases yields; however, these advantages are curbed by the high price charged for genetically modified seeds. Using the contingent valuation method, it is shown that farmers' average willingness to pay is less than half the actual technology price. A lower price would not only increase benefits for growers, but could also multiply company profits, thus, resulting in a Pareto improvement. Implications of the sub-optimal pricing strategy are discussed.",TRUE,location
R11,Science,R34288,Macroeconomic Convergence in Southern Africa,S119268,R34289,Countries,R34283,Botswana,"In this paper we aim to answer the following two questions: 1) has the Common Monetary Area in Southern Africa (henceforth CMA) ever been an optimal currency area (OCA)? 2) What are the costs and benefits of the CMA for its participating countries? In order to answer these questions, we carry out a two-step econometric exercise based on the theory of generalised purchasing power parity (G-PPP). The econometric evidence shows that the CMA (but also Botswana as a de facto member) form an OCA given the existence of common long-run trends in their bilateral real exchange rates. Second, we also test that in the case of the CMA and Botswana the smoothness of the operation of the common currency area — measured through the degree of relative price correlation — depends on a variety of factors. These factors signal both the advantages and disadvantages of joining a monetary union. On the one hand, the more open and more similarly diversified the economies are, the higher the benefits they ... Ce Document de travail s'efforce de repondre a deux questions : 1) la zone monetaire commune de l'Afrique australe (Common Monetary Area - CMA) a-t-elle vraiment reussi a devenir une zone monetaire optimale ? 2) quels sont les couts et les avantages de la CMA pour les pays participants ? Nous avons effectue un exercice econometrique en deux etapes base sur la theorie des parites de pouvoir d'achat generalisees. D'apres les resultats econometriques, la CMA (avec le Botswana comme membre de facto) est effectivement une zone monetaire optimale etant donne les evolutions communes sur le long terme de leurs taux de change bilateraux. Nous avons egalement mis en evidence que le bon fonctionnement de l'union monetaire — mesure par le degre de correlation des prix relatifs — depend de plusieurs facteurs. Ces derniers revelent a la fois les couts et les avantages de l'appartenance a une union monetaire. D'un cote, plus les economies sont ouvertes et diversifiees de facon comparable, plus ...",TRUE,location
R11,Science,R29751,An Empirical Study on the Environmental Kuznets Curve for China’s Carbon Emissions: Based on Provincial Panel Data,S98713,R29752,EKC Turnaround point(s),R29745,Central,"Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990–2007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China’s carbon emissions. The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.",TRUE,location
R11,Science,R151258,"Role of Social Media in Social Change: An Analysis of Collective Sense Making During the 2011 Egypt Revolution",S626546,R156078,Country,L431243,egypt,"This study explores the role of social media in social change by analyzing Twitter data collected during the 2011 Egypt Revolution. Particular attention is paid to the notion of collective sense making, which is considered a critical aspect for the emergence of collective action for social change. We suggest that collective sense making through social media can be conceptualized as human-machine collaborative information processing that involves an interplay of signs, Twitter grammar, humans, and social technologies. We focus on the occurrences of hashtags among a high volume of tweets to study the collective sense-making phenomena of milling and keynoting. A quantitative Markov switching analysis is performed to understand how the hashtag frequencies vary over time, suggesting structural changes that depict the two phenomena. We further explore different hashtags through a qualitative content analysis and find that, although many hashtags were used as symbolic anchors to funnel online users' attention to the Egypt Revolution, other hashtags were used as part of tweet sentences to share changing situational information. We suggest that hashtags functioned as a means to collect information and maintain situational awareness during the unstable political situation of the Egypt Revolution.",TRUE,location
R11,Science,R28946,Identifying Software Project Risks: An International Delphi Study”,S95575,R28950,Country,R28949,Finland,"Advocates of software risk management claim that by identifying and analyzing threats to success (i.e., risks) action can be taken to reduce the chance of failure of a project. The first step in the risk management process is to identify the risk itself, so that appropriate countermeasures can be taken. One problem in this task, however, is that no validated lists are available to help the project manager understand the nature and types of risks typically faced in a software project. This paper represents a first step toward alleviating this problem by developing an authoritative list of common risk factors. We deploy a rigorous data collection method called a ""ranking-type"" Delphi survey to produce a rank-order list of risk factors. This data collection method is designed to elicit and organize opinions of a panel of experts through iterative, controlled feedback. Three simultaneous surveys were conducted in three different settings: Hong Kong, Finland, and the United States. This was done to broaden our view of the types of risks, rather than relying on the view of a single culture-an aspect that has been ignored in past risk management research. In forming the three panels, we recruited experienced project managers in each country. The paper presents the obtained risk factor list, compares it with other published risk factor lists for completeness and variation, and analyzes common features and differences in risk factor rankings in the three countries. We conclude by discussing implications of our findings for both research and improving risk management practice.",TRUE,location
R11,Science,R32446,"«Overeducation, Undereducation, and the Theory of Career Mobility",S110193,R32447,Country of study,R29991,Germany,"The theory of career mobility (Sicherman and Galor, Journal of Political Economy, 98(1), 169–92, 1990) claims that wage penalties for overeducated workers are compensated by better promotion prospects. Sicherman (Journal of Labour Economics, 9(2), 101–22, 1991) was able to confirm this theory in an empirical study using panel data. However, the only retest using panel data so far (Robst, Eastern Economic Journal, 21, 539–50, 1995) produced rather ambiguous results. In the present paper, random effects models to analyse relative wage growth are estimated using data from the German Socio-Economic Panel. It is found that overeducated workers in Germany have markedly lower relative wage growth rates than adequately educated workers. The results cast serious doubt on whether the career mobility model is able to explain overeducation in Germany. The plausibility of the results is supported by the finding that overeducated workers have less access to formal and informal on-the-job training, which is usually found to be positively correlated with wage growth even when controlling for selectivity effects (Pischke, Journal of Population Economics, 14, 523–48, 2001).",TRUE,location
R11,Science,R28946,Identifying Software Project Risks: An International Delphi Study”,S95568,R28948,Country,R27562,Hong Kong,"Advocates of software risk management claim that by identifying and analyzing threats to success (i.e., risks) action can be taken to reduce the chance of failure of a project. The first step in the risk management process is to identify the risk itself, so that appropriate countermeasures can be taken. One problem in this task, however, is that no validated lists are available to help the project manager understand the nature and types of risks typically faced in a software project. This paper represents a first step toward alleviating this problem by developing an authoritative list of common risk factors. We deploy a rigorous data collection method called a ""ranking-type"" Delphi survey to produce a rank-order list of risk factors. This data collection method is designed to elicit and organize opinions of a panel of experts through iterative, controlled feedback. Three simultaneous surveys were conducted in three different settings: Hong Kong, Finland, and the United States. This was done to broaden our view of the types of risks, rather than relying on the view of a single culture-an aspect that has been ignored in past risk management research. In forming the three panels, we recruited experienced project managers in each country. The paper presents the obtained risk factor list, compares it with other published risk factor lists for completeness and variation, and analyzes common features and differences in risk factor rankings in the three countries. We conclude by discussing implications of our findings for both research and improving risk management practice.",TRUE,location
R11,Science,R27514,"Energy consumption, employment and causality in Japan: a multivariate approach",S89196,R27515,Countries,R27513,Japan,"Using Hsiao's version of Granger causality and cointegration, this study finds that employment (EP), energy consumption (EC), Real GNP (RGNP) and capital are not cointegrated. EC is found to negatively cause EP whereas EP and RGNP are found to directly cause EC. It is also found that capital negatively Granger-causes EP while RGNP and EP are found to strongly influence EC. The findings of this study seem to suggest that a policy of energy conservation may not be detrimental to a country such as Japan. In addition, the finding that energy and capital are substitutes implies that energy conservation will promote capital formation, given output constant.",TRUE,location
R11,Science,R151242,Design of a Resilient Information System for Disaster Response,S626487,R156070,Country,L431192,Japan,"The devastating 2011 Great East Japan Earthquake made people aware of the importance of Information and Communication Technology (ICT) for sustaining life during and soon after a disaster. The difficulty in recovering information systems, because of the failure of ICT, hindered all recovery processes. The paper explores ways to make information systems resilient in disaster situations. Resilience is defined as quickly regaining essential capabilities to perform critical post disaster missions and to smoothly return to fully stable operations thereafter. From case studies and the literature, we propose that a frugal IS design that allows creative responses will make information systems resilient in disaster situations. A three-stage model based on a chronological sequence was employed in structuring the proposed design principles.",TRUE,location
R11,Science,R27682,"Electricity consumption, income, foreign direct investment, and population in Malaysia: new evidence from multivariate framework analysis",S90023,R27683,Countries,R27570,Malaysia,"Purpose - This study attempts to re-investigate the electricity consumption function for Malaysia through the cointegration and causality analyses over the period 1970 to 2005. Design/methodology/approach - The study employed the bounds-testing procedure for cointegration to examine the potential long-run relationship, while an autoregressive distributed lag model is used to derive the short- and long-run coefficients. The Granger causality test is applied to determine the causality direction between electricity consumption and its determinants. Findings - New evidence is found in this study: first, electricity consumption, income, foreign direct investment, and population in Malaysia are cointegrated. Second, the influx of foreign direct investment and population growth are positively related to electricity consumption in Malaysia and the Granger causality evidence indicates that electricity consumption, income, and foreign direct investment are of bilateral causality. Originality/value - The estimated multivariate electricity consumption function for Malaysia implies that Malaysia is an energy-dependent country; thus energy-saving policies may have an inverse effect on current and also future economic development in Malaysia.",TRUE,location
R11,Science,R34179,Potential Benefits of Agricultural Biotechnology: An Example from the Mexican Potato Sector,S118805,R34180,[7]Region,L71771,Mexico,"The study analyzes ex ante the socioeconomic effects of transgenic virus resistance technology for potatoes in Mexico. All groups of potato growers could significantly gain from the transgenic varieties to be introduced, and the technology could even improve income distribution. Nonetheless, public support is needed to fully harness this potential. Different policy alternatives are tested within scenario calculations in order to supply information on how to optimize the technological outcome, both from an efficiency and an equity perspective. Transgenic disease resistance is a promising technology for developing countries. Providing these countries with better access to biotechnology should be given higher political priority.",TRUE,location
R11,Science,R26715,Mobility-based clustering protocol for wireless sensor networks with mobile nodes,S85322,R26716,CH Properties Mobility,R26714,Mobile,"In this study, the authors propose a mobility-based clustering (MBC) protocol for wireless sensor networks with mobile nodes. In the proposed clustering protocol, a sensor node elects itself as a cluster-head based on its residual energy and mobility. A non-cluster-head node aims at its link stability with a cluster head during clustering according to the estimated connection time. Each non-cluster-head node is allocated a timeslot for data transmission in ascending order in a time division multiple address (TDMA) schedule based on the estimated connection time. In the steady-state phase, a sensor node transmits its sensed data in its timeslot and broadcasts a joint request message to join in a new cluster and avoid more packet loss when it has lost or is going to lose its connection with its cluster head. Simulation results show that the MBC protocol can reduce the packet loss by 25% compared with the cluster-based routing (CBR) protocol and 50% compared with the low-energy adaptive clustering hierarchy-mobile (LEACH-mobile) protocol. Moreover, it outperforms both the CBR protocol and the LEACH-mobile protocol in terms of average energy consumption and average control overhead, and can better adapt to a highly mobile environment.",TRUE,location
R11,Science,R27690,Co-integration and causality relationship between energy consumption and economic growth: further empirical evidence for Nigeria,S90074,R27691,Countries,R27672,Nigeria,"Abstract The Paper re ‐ examined co‐integration and causality relationship between energy consumption and economic growth for Nigeria using data covering the period 1970 to 2005. Unlike previous related study for Nigeria, different proxies of energy consumption (electricity demand, domestic crude oil consumption and gas utilization) were used for the estimation. It also included government activities proxied by health expenditure and monetary policy proxied by broad money supply though; emphasis was on energy consumption. Using the Johansen co‐integration technique, it was found that there existed a long run relationship among the series. It was also found that all the variables used for the study were I(1). Furthermore, unidirectional causality was established between electricity consumption and economic growth, domestic crude oil production and economic growth as well as between gas utilization and economic growth in Nigeria. While causality runs from electricity consumption to economic growth as well a...",TRUE,location
R11,Science,R27527,The relationship between energy consumption and economic growth in Pakistan,S89930,R27666,Countries,R27525,Pakistan,"This paper investigates the causal relationship between energy consumption and economic growth and energy consumption and employment in Pakistan. By applying techniques of co-integration and Hsiao’s version of Granger causality, the results infer that economic growth causes total energy consumption. Economic growth also leads to growth in petroleum consumption, while on the other hand, neither economic growth nor gas consumption affect each other. However, in the power sector it has been found that electricity consumption leads to economic growth without feedback. The implications of the study are that energy conservation policy regarding petroleum consumption would not lead to any side-effects on economic growth in Pakistan. However, an energy growth policy in the case of gas and electricity consumption should be adopted in such a way that it stimulates growth in the economy and thus expands employment opportunities. The relationship between energy consumption and economic growth is now well established in the literature, yet the direction of causation of this relationship remains controversial. That is, whether economic growth leads to energy consumption or that energy consumption is the engine of economic growth. The direction of causality has significant policy implications. Empirically it has been tried to find the direction of causality between energy consumption and economic activities for the developing as well as for the developed countries employing the Granger or Sims techniques. However, results are mixed. The seminal paper by Kraft and Kraft (1978), supported the unidirectional causality from GNP growth to energy consumption in the case of the United States of America for the period 1947-1974. Erol, and Yu, (1987), tested data for six industrialized countries, and found no significant causal relationship between energy consumption and GDP growth and, energy and employment. Yu, et. al. (1988), found no relationship between energy and GNP, and",TRUE,location
R11,Science,R34130,First impact of biotechnology in the EU: Bt maize adoption in Spain,S118477,R34131,[7]Region,L71521,Spain,"Summary In the present paper we build a bio-economic model to estimate the impact of a biotechnology innovation in EU agriculture. Transgenic Bt maize offers the potential to efficiently control corn borers, that cause economically important losses in maize growing in Spain. Since 1998, Syngenta has commercialised the variety Compa CB, equivalent to an annual maize area of about 25,000 ha. During the six-year period 1998-2003 a total welfare gain of €15.5 million is estimated from the adoption of Bt maize, of which Spanish farmers captured two thirds, the rest accruing to the seed industry.",TRUE,location
R11,Science,R32427,"Unemployment Persistency, Over-education and the Employment Chances of the Less Educated",S110100,R32428,Country of study,R29998,Sweden,"The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, are the main reasons for the poor employment chances of the less educated in periods with low general demand for labour.",TRUE,location
R11,Science,R151256,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011",S626533,R156077,Country,L431231,Thailand,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,location
R11,Science,R151302,"Digitally enabled disaster response: the emergence of social media as boundary objects in a flooding disaster",S626687,R156098,Country,L431364,Thailand,"In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 – one of the worst flooding disasters in more than 50 years that left the country severely impaired – this paper provides an in‐depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross‐boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.",TRUE,location
R11,Science,R153575,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011.",S616720,R153885,Country,L425299,Thailand,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,location
R11,Science,R27685,"Structural breaks, electricity consumption and economic growth: evidence from Turkey",S90045,R27686,Countries,R27522,Turkey,"This paper investigates the short-run and long-run causality issues between electricity consumption and economic growth in Turkey by using the co-integration and vector error-correction models with structural breaks. It employs annual data covering the period 1968–2005. The study also explores the causal relationship between these variables in terms of the three error-correction based Granger causality models. The empirical results are as follows: i) Both variables are nonstationary in levels and stationary in the first differences with/without structural breaks, ii) there exists a longrun relationship between variables, iii) there is unidirectional causality running from the electricity consumption to economic growth. The overall results indicate that “growth hypothesis” for electricity consumption and growth nexus holds in Turkey. Thus, energy conservation policies, such as rationing electricity consumption, may harm economic growth in Turkey.",TRUE,location
R11,Science,R34266,Business Cycle Synchronization in the Proposed East African Monetary Union: An Unobserved Component Approach,S119164,R34267,Countries,R34259,East African Community,"This paper uses the business cycle synchronization criteria of the theory of optimum currency area (OCA) to examine the feasibility of the East African Community (EAC) as a monetary union. We also investigate whether the degree of business cycle synchronization has increased after the 1999 EAC Treaty. We use an unobserved component model to measure business cycle synchronization as the proportion of structural shocks that are common across different countries, and a time-varying parameter model to examine the dynamics of synchronization over time. We find that although the degree of synchronization has increased since 2000 when the EAC Treaty came into force, the proportion of shocks that is common across different countries is still small implying weak synchronization. This evidence casts doubt on the feasibility of a monetary union for the EAC as scheduled by 2012.",TRUE,location
R11,Science,R34268,Monetary union for the development process in the East African community: business cycle synchronization approach,S119178,R34269,Countries,R34259,East African Community,"This paper empirically examines the suitability of monetary union in East African community members namely, Burundi, Kenya, Rwanda, Tanzania and Uganda, on the basis of business cycle synchronization. This research considers annual GDP (gross domestic product) data from IMF (international monetary fund) for the period of 1980 to 2010. In order to extract the business cycles and trends, the study uses HP (Hodrick-Prescott) and the BP (band pass) filters. After identifying the cycles and trends of the business cycle, the study considers cross country correlation analysis and analysis of variance technique to examine whether EAC (East African community) countries are characterized by synchronized business cycles or not. The results show that four EAC countries (Burundi, Kenya, Tanzania and Uganda) among five countries are having similar pattern of business cycle and trend from the last ten years of the formation of the EAC. The research concludes that these countries, except Rwanda, do not differ significantly in transitory or cycle components but do differ in permanent components especially in growth trend. Key words: Business cycle synchronization, optimum currency area, East African community, monetary union, development.",TRUE,location
R11,Science,R34270,Design and implementation of a common currency area in the East African community,S119189,R34271,Countries,R34259,East African Community,"The East African Community (EAC) has fast-tracked its plans to create a single currency for the five countries making up the region, and hopes to conclude negotiations on a monetary union protocol by the end of 2012. While the benefits of lower transactions costs from a common currency may be significant, countries will also lose the ability to use monetary policy to respond to different shocks. Evidence presented shows that the countries differ in a number of respects, facing asymmetric shocks and different production structures. Countries have had difficulty meeting convergence criteria, most seriously as concerns fiscal deficits. Preparation for monetary union will require effective institutions for macroeconomic surveillance and enforcing fiscal discipline, and euro zone experience indicates that these institutions will be difficult to design and take a considerable time to become effective. This suggests that a timetable for monetary union in the EAC should allow for a substantial initial period of institution building. In order to have some visible evidence of the commitment to monetary union, in the meantime the EAC may want to consider introducing a common basket currency in the form of notes and coin, to circulate in parallel with national currencies.",TRUE,location
R11,Science,R34272,Monetary Transmission Mechanism in the East African Community: An Empirical Investigation,S119202,R34273,Countries,R34259,East African Community,"Do changes in monetary policy affect inflation and output in the East African Community (EAC)? We find that (i) Monetary Transmission Mechanism (MTM) tends to be generally weak when using standard statistical inferences, but somewhat strong when using non-standard inference methods; (ii) when MTM is present, the precise transmission channels and their importance differ across countries; and (iii) reserve money and the policy rate, two frequently used instruments of monetary policy, sometimes move in directions that exert offsetting expansionary and contractionary effects on inflation - posing challenges to harmonization of monetary policies across the EAC and transition to a future East African Monetary Union. The paper offers some suggestions for strengthening the MTM in the EAC.",TRUE,location
R11,Science,R34276,Macroeconomic Shock Synchronization in the East African Community,S119223,R34277,Countries,R34259,East African Community," The East African Community’s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union. ",TRUE,location
R11,Science,R34278,"Monetary, Financial and Fiscal Stability in the East African Community: Ready for a Monetary Union?",S119238,R34279,Countries,R34259,East African Community,"We examine prospects for a monetary union in the East African Community (EAC) by developing a stylized model of policymakers' decision problem that allows for uncertain benefits derived from monetary,financial and fiscal stability, and then calibrating the model for the EAC for the period 2003-2010. When policymakers properly allow for uncertainty, none of the countries wants to pursue a monetary union based on either monetary or financial stability grounds, and only Rwanda might favor it on fiscal stability grounds; we argue that robust institutional arrangements assuring substantial improvements in monetary, financial and fiscal stability are needed to compensate. (This abstract was borrowed from another version of this item.)",TRUE,location
R408,Slavic Languages and Societies,R110128,Mixed language usage in Belarus: the sociostructural background of language choice,S502453,R110130,Population under analysis,R110223,Belarus,"Abstract This article reports findings from a survey on language usage in Belarus, which encompasses bilingual Belarusian and Russian. First, the distribution of language usage is discussed. Then the dependency of language usage on some sociocultural conditions is explored. Finally, the changes in language usage over three generations are discussed. We find that a mixed Belarusian–Russian form of speech is widely used in the cities studied and that it is spoken across all educational levels. However, it seems to be predominantly utilized in informal communication, especially among friends and family members, leaving Russian and Belarusian to more formal or public venues.",TRUE,location
R408,Slavic Languages and Societies,R110154,In the grip of replacive bilingualism: the Belarusian language in contact with Russian,S502457,R110156,Population under analysis,R110223,Belarus,"Abstract Belarusian occupies a very specific position among the Slavic languages. In spite of the fact that it can be characterized as a “middle-sized” Slavic language, the contiguity and all-embracing rivalry with the “strong” Russian language make it into an eternally “small” language. The modern Belarusian standard language was elaborated in the early twentieth century. There was a brief but fruitful period of its promotion in the 1920s, but then Russification became a relevant factor in its development for the following decades. Political factors have always held great significance in the development of Belarusian. The linguistic affinity of Belarusian and Russian in combination with other factors is an insurmountable obstacle for the spread of the Belarusian language. On the one hand, Russian speakers living in Belarus, as a rule, understand Belarusian but do not make the effort to acquire it as an active medium of communication. On the other hand, Belarusian speakers proficient in Russian do not have enough motivation to use Belarusian routinely, on account of the pervading presence of Russian in Belarusian society. As a result, they often lose their Belarusian language skills. There is considerable dissent as to the perspectives of Belarusian. Though it is the “titular” language, which determines its importance in Belarus, it is also a minority language and thus faces the corresponding challenges.",TRUE,location
R408,Slavic Languages and Societies,R110029,Trasjanka: A code of rural migrants in Minsk,S502448,R110031,Population under analysis,R110220,Minsk,"The article deals with an oral speech phenomenon widespread in the Republic of Belarus, where it is known as trasjanka. This code originated through constant contact between Russian and Belarusian, two closely related East Slavonic languages. Discussed are the main features of this code (as used in the city of Minsk), the sources of its origin, different linguistic definitions and the attitude towards this code from those who dwell in the city of Minsk. Special attention is paid to the problem of distinction between trasjanka and different forms of codeswitching, also widely used in the Minsk language community.",TRUE,location
R281,Social and Behavioral Sciences,R70749,The Relation Between ICT and Science in PISA 2015 for Bulgarian and Finnish Students,S337619,R70751,has countries,R71231,Bulgaria,"The relationship between Information and Communication Technology (ICT) and science performance has been the focus of much recent research, especially due to the prevalence of ICT in our digital society. However, the exploration of this relationship has yielded mixed results. Thus, the current study aims to uncover the learning processes that are linked to students’ science performance by investigating the effect of ICT variables on science for 15-year-old students in two countries with contrasting levels of technology implementation (Bulgaria n = 5,928 and Finland n = 5,882). The study analyzed PISA 2015 data using structural equation modeling to assess the impact of ICT use, availability, and comfort on students’ science scores, controlling for students’ socio-economic status. In both countries, results revealed that (1) ICT use and availability were associated with lower science scores and (2) students who were more comfortable with ICT performed better in science. This study can inform practical implementations of ICT in classrooms that consider the differential effect of ICT and it can advance theoretical knowledge around technology, learning, and cultural context.",TRUE,location
R281,Social and Behavioral Sciences,R70746,Measurement invariance of the ICT engagement construct and its association with students’ performance in China and Germany: Evidence from PISA 2015 data,S337614,R70748,has countries,R44804,China,"The present study investigated the factor structure of and measurement invariance in the information and communication technology (ICT) engagement construct, and the relationship between ICT engagement and students' performance on science, mathematics and reading in China and Germany. Samples were derived from the Programme for International Student Assessment (PISA) 2015 survey. Configural, metric and scalar equivalence were found in a multigroup exploratory structural equation model. In the regression model, a significantly positive association between interest in ICT and student achievement was found in China, in contrast to a significantly negative association in Germany. All achievement scores were negatively and significantly correlated with perceived ICT competence scores in China, whereas science and mathematics achievement scores were not predicted by scores on ICT competence in Germany. Similar patterns were found in China and Germany in terms of perceived autonomy in using ICT and social relatedness in using ICT to predict students' achievement. The implications of all the findings were discussed. [ABSTRACT FROM AUTHOR]",TRUE,location
R281,Social and Behavioral Sciences,R70742,"A PISA-2015 Comparative Meta-Analysis between Singapore and Finland: Relations of Students’ Interest in Science, Perceived ICT Competence, and Environmental Awareness and Optimism",S337618,R70745,has countries,R28949,Finland,"The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents’ interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students’ interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students’ environmental awareness negatively predicted environmental optimism.",TRUE,location
R281,Social and Behavioral Sciences,R70749,The Relation Between ICT and Science in PISA 2015 for Bulgarian and Finnish Students,S337620,R70751,has countries,R28949,Finland,"The relationship between Information and Communication Technology (ICT) and science performance has been the focus of much recent research, especially due to the prevalence of ICT in our digital society. However, the exploration of this relationship has yielded mixed results. Thus, the current study aims to uncover the learning processes that are linked to students’ science performance by investigating the effect of ICT variables on science for 15-year-old students in two countries with contrasting levels of technology implementation (Bulgaria n = 5,928 and Finland n = 5,882). The study analyzed PISA 2015 data using structural equation modeling to assess the impact of ICT use, availability, and comfort on students’ science scores, controlling for students’ socio-economic status. In both countries, results revealed that (1) ICT use and availability were associated with lower science scores and (2) students who were more comfortable with ICT performed better in science. This study can inform practical implementations of ICT in classrooms that consider the differential effect of ICT and it can advance theoretical knowledge around technology, learning, and cultural context.",TRUE,location
R281,Social and Behavioral Sciences,R70740,ICT Engagement: a new construct and its assessment in PISA 2015,S337638,R70741,has countries,R68481,Germany,"Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale “ICT as a topic in social interaction” and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.",TRUE,location
R281,Social and Behavioral Sciences,R70746,Measurement invariance of the ICT engagement construct and its association with students’ performance in China and Germany: Evidence from PISA 2015 data,S337615,R70748,has countries,R68481,Germany,"The present study investigated the factor structure of and measurement invariance in the information and communication technology (ICT) engagement construct, and the relationship between ICT engagement and students' performance on science, mathematics and reading in China and Germany. Samples were derived from the Programme for International Student Assessment (PISA) 2015 survey. Configural, metric and scalar equivalence were found in a multigroup exploratory structural equation model. In the regression model, a significantly positive association between interest in ICT and student achievement was found in China, in contrast to a significantly negative association in Germany. All achievement scores were negatively and significantly correlated with perceived ICT competence scores in China, whereas science and mathematics achievement scores were not predicted by scores on ICT competence in Germany. Similar patterns were found in China and Germany in terms of perceived autonomy in using ICT and social relatedness in using ICT to predict students' achievement. The implications of all the findings were discussed. [ABSTRACT FROM AUTHOR]",TRUE,location
R281,Social and Behavioral Sciences,R76141,Bullying Victimization among In-School Adolescents in Ghana: Analysis of Prevalence and Correlates from the Global School-Based Health Survey,S348422,R76150,location,R34216,Ghana,"(1) Background: Although bullying victimization is a phenomenon that is increasingly being recognized as a public health and mental health concern in many countries, research attention on this aspect of youth violence in low- and middle-income countries, especially sub-Saharan Africa, is minimal. The current study examined the national prevalence of bullying victimization and its correlates among in-school adolescents in Ghana. (2) Methods: A sample of 1342 in-school adolescents in Ghana (55.2% males; 44.8% females) aged 12–18 was drawn from the 2012 Global School-based Health Survey (GSHS) for the analysis. Self-reported bullying victimization “during the last 30 days, on how many days were you bullied?” was used as the central criterion variable. Three-level analyses using descriptive, Pearson chi-square, and binary logistic regression were performed. Results of the regression analysis were presented as adjusted odds ratios (aOR) at 95% confidence intervals (CIs), with a statistical significance pegged at p < 0.05. (3) Results: Bullying victimization was prevalent among 41.3% of the in-school adolescents. Pattern of results indicates that adolescents in SHS 3 [aOR = 0.34, 95% CI = 0.25, 0.47] and SHS 4 [aOR = 0.30, 95% CI = 0.21, 0.44] were less likely to be victims of bullying. Adolescents who had sustained injury [aOR = 2.11, 95% CI = 1.63, 2.73] were more likely to be bullied compared to those who had not sustained any injury. The odds of bullying victimization were higher among adolescents who had engaged in physical fight [aOR = 1.90, 95% CI = 1.42, 2.25] and those who had been physically attacked [aOR = 1.73, 95% CI = 1.32, 2.27]. Similarly, adolescents who felt lonely were more likely to report being bullied [aOR = 1.50, 95% CI = 1.08, 2.08] as against those who did not feel lonely. Additionally, adolescents with a history of suicide attempts were more likely to be bullied [aOR = 1.63, 95% CI = 1.11, 2.38] and those who used marijuana had higher odds of bullying victimization [aOR = 3.36, 95% CI = 1.10, 10.24]. (4) Conclusions: Current findings require the need for policy makers and school authorities in Ghana to design and implement policies and anti-bullying interventions (e.g., Social Emotional Learning (SEL), Emotive Behavioral Education (REBE), Marijuana Cessation Therapy (MCT)) focused on addressing behavioral issues, mental health and substance abuse among in-school adolescents.",TRUE,location
R281,Social and Behavioral Sciences,R76164,Patterns and Correlates for Bullying among Young Adolescents in Ghana,S348479,R76166,location,R34216,Ghana,"Bullying is relatively common and is considered to be a public health problem among adolescents worldwide. The present study examined the risk factors associated with bullying behavior among adolescents in a lower-middle-income country setting. Data on 6235 adolescents aged 11–16 years, derived from the Republic of Ghana’s contribution to the Global School-based Health Survey, were analyzed using bivariate and multinomial logistic regression analysis. A high prevalence of bullying was found among Ghanaian adolescents. Alcohol-related health compromising behaviors (alcohol use, alcohol misuse and getting into trouble as a result of alcohol) increased the risk of being bullied. In addition, substance use, being physically attacked, being seriously injured, hunger and truancy were also found to increase the risk of being bullied. However, having understanding parents and having classmates who were kind and helpful reduced the likelihood of being bullied. These findings suggest that school-based intervention programs aimed at reducing rates of peer victimization should simultaneously target multiple risk behaviors. Teachers can also reduce peer victimization by introducing programs that enhance adolescents’ acceptance of each other in the classroom.",TRUE,location
R281,Social and Behavioral Sciences,R70742,"A PISA-2015 Comparative Meta-Analysis between Singapore and Finland: Relations of Students’ Interest in Science, Perceived ICT Competence, and Environmental Awareness and Optimism",S337617,R70745,has countries,R43052,Singapore,"The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents’ interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students’ interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students’ environmental awareness negatively predicted environmental optimism.",TRUE,location
R281,Social and Behavioral Sciences,R70740,ICT Engagement: a new construct and its assessment in PISA 2015,S337639,R70741,has countries,R44048,Switzerland,"Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale “ICT as a topic in social interaction” and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.",TRUE,location
R353,Social Psychology,R75832,The Coordination‐Information Bubble in Humanitarian Response: Theoretical Foundations and Empirical Investigations,S346782,R75837,Study location,L248374,The Philippines,"Humanitarian disasters are highly dynamic and uncertain. The shifting situation, volatility of information, and the emergence of decision processes and coordination structures require humanitarian organizations to continuously adapt their operations. In this study, we aim to make headway in understanding adaptive decision-making in a dynamic interplay between changing situation, volatile information, and emerging coordination structures. Starting from theories of sensemaking, coordination, and decision-making, we present two case studies that represent the response to two different humanitarian disasters: Typhoon Haiyan in the Philippines, and the Syria Crisis, one of the most prominent ongoing conflicts. For both, we highlight how volatile information and the urge to respond via sensemaking lead to fragmentation and misalignment of emergent coordination structures and decisions, which, in turn, slow down adaptation. Based on the case studies, we derive propositions and the need to continuously align laterally between different regions and hierarchically between operational and strategic levels to avoid persistence of coordination-information bubbles. We discuss the implications of our findings for the development of methods and theory to ensure that humanitarian operations management captures the critical role of information as a driver of emergent coordination and adaptive decisions.",TRUE,location
R367,Social Psychology and Interaction,R75816,The Reality of Evidence-based Decision Making in Humanitarian Programming: An Exploratory Study of WASH Programs in Uganda,S346694,R75818,Study location,L248316,Uganda,"With ongoing research, increased information sharing and knowledge exchange, humanitarian organizations have an increasing amount of evidence at their disposal to support their decisions. Nevertheless, effectively building decisions on the increasing amount of insights and information remains challenging. At the individual, organizational, and environmental levels, various factors influence the use of evidence in the decision-making process. This research examined these factors and specifically their influence in a case-study on humanitarian organizations and their WASH interventions in Uganda. Interviewees reported several factors that impede the implementation of evidence-based decision making. Revealing that, despite advancements in the past years, evidence-based information itself is relatively small, contradictory, and non-repeatable. Moreover, the information is often not connected or in a format that can be acted upon. Most importantly, however, are the human aspects and organizational settings that limit access to and use of supporting data, information, and evidence. This research shows the importance of considering these factors, in addition to invest in creating knowledge and technologies to support evidence-based decision-making.",TRUE,location
R367,Social Psychology and Interaction,R75822,The Reality of Evidence-based Decision Making in Humanitarian Programming: An Exploratory Study of WASH Programs in Uganda,S346728,R75824,Study location,L248340,Uganda,"With ongoing research, increased information sharing and knowledge exchange, humanitarian organizations have an increasing amount of evidence at their disposal to support their decisions. Nevertheless, effectively building decisions on the increasing amount of insights and information remains challenging. At the individual, organizational, and environmental levels, various factors influence the use of evidence in the decision-making process. This research examined these factors and specifically their influence in a case-study on humanitarian organizations and their WASH interventions in Uganda. Interviewees reported several factors that impede the implementation of evidence-based decision making. Revealing that, despite advancements in the past years, evidence-based information itself is relatively small, contradictory, and non-repeatable. Moreover, the information is often not connected or in a format that can be acted upon. Most importantly, however, are the human aspects and organizational settings that limit access to and use of supporting data, information, and evidence. This research shows the importance of considering these factors, in addition to invest in creating knowledge and technologies to support evidence-based decision-making.",TRUE,location
R153,Soil Science,R109978,Impact of deforestation and subsequent land-use change on soil quality,S501582,R109980,location,L362661,Benin,"150 Impact of deforestation and subsequent land-use change on soil quality Emmanuel Amoakwah a, Mohammad A. Rahman b, Kwabena A. Nketia a, Rousseau Djouaka c, Nataliia Oleksandrivna Didenko d, Khandakar R. Islam b,* a CSIR – Soil Research Institute, Academy Post Office, Kwadaso-Kumasi, Ghana b Ohio State University South Centers, Piketon, Ohio, USA c International Institute of Tropical Agriculture, Benin d Institute of Water Problems and Land Reclamation, Kyiv, Ukraine",TRUE,location
R153,Soil Science,R109981,Assessment of Heavy Metal Pollution of Soil-water-vegetative Ecosystems Associated with Artisanal Gold Mining,S501599,R109983,location,L362673,Ghana,"ABSTRACT Worldwide demand for gold has accelerated unregulated, small-scale artisanal gold mining (AGM) activities, which are responsible for widespread environmental pollution in Ghana. This study was conducted to assess the impact of AGM activities, namely the heavy metals pollution of soil-water-vegetative ecosystems in southern Ghana. Composite soil, stream sediments and water, well water, and plant samples were randomly collected in replicates from adjoining AGM areas, analyzed for soluble and total Fe, Cu, Zn, Pb, Cd, Hg contents and other properties, and calculated for indices to evaluate the extent of environmental pollution and degradation. Results indicated that both well and stream waters were contaminated with heavy metals and were unsuitable for drinking due to high levels of Pb (0.36–0.03 mg/L), Cd (0.01–0.02 mg/L), and Hg (<0.01 mg/L). Enrichment factor and geo-accumulation index showed that the soil and sediments were polluted with Cd and Hg. The soil, which could have acted as a source of the Hg pollutant for natural vegetation and food crops grown near AGM areas, was loaded with 2.3 times more Hg than the sediments. The concentration of heavy metals in fern was significantly higher than in corn, which exceeded the maximum permissible limits of WHO/FAO guidelines. Biocontamination factor suggested that the contamination of plants with Hg was high compared to other heavy metals. Further studies are needed for extensive sampling and monitoring of soil-water-vegetative ecosystems to remediate and control heavy metals pollution in response to AGM activities in Ghana.",TRUE,location
R223,Transport Phenomena,R107744,The trade-off behaviours between virtual and physical activities during the first wave of the COVID-19 pandemic period,S535338,R107750,Country,R110119,India,"Abstract Introduction The first wave of COVID-19 pandemic period has drastically changed people’s lives all over the world. To cope with the disruption, digital solutions have become more popular. However, the ability to adopt digitalised alternatives is different across socio-economic and socio-demographic groups. Objective This study investigates how individuals have changed their activity-travel patterns and internet usage during the first wave of the COVID-19 pandemic period, and which of these changes may be kept. Methods An empirical data collection was deployed through online forms. 781 responses from different countries (Italy, Sweden, India and others) have been collected, and a series of multivariate analyses was carried out. Two linear regression models are presented, related to the change of travel activities and internet usage, before and during the pandemic period. Furthermore, a binary regression model is used to examine the likelihood of the respondents to adopt and keep their behaviours beyond the pandemic period. Results The results show that the possibility to change the behaviour matter. External restrictions and personal characteristics are the driving factors of the reduction in ones' daily trips. However, the estimation results do not show a strong correlation between the countries' restriction policy and the respondents' likelihood to adopt the new and online-based behaviours for any of the activities after the restriction period. Conclusion The acceptance and long-term adoption of the online alternatives for activities are correlated with the respondents' personality and socio-demographic group, highlighting the importance of promoting alternatives as a part of longer-term behavioural and lifestyle changes.",TRUE,location
R223,Transport Phenomena,R107744,The trade-off behaviours between virtual and physical activities during the first wave of the COVID-19 pandemic period,S535337,R107750,Country,R110023,Italy,"Abstract Introduction The first wave of COVID-19 pandemic period has drastically changed people’s lives all over the world. To cope with the disruption, digital solutions have become more popular. However, the ability to adopt digitalised alternatives is different across socio-economic and socio-demographic groups. Objective This study investigates how individuals have changed their activity-travel patterns and internet usage during the first wave of the COVID-19 pandemic period, and which of these changes may be kept. Methods An empirical data collection was deployed through online forms. 781 responses from different countries (Italy, Sweden, India and others) have been collected, and a series of multivariate analyses was carried out. Two linear regression models are presented, related to the change of travel activities and internet usage, before and during the pandemic period. Furthermore, a binary regression model is used to examine the likelihood of the respondents to adopt and keep their behaviours beyond the pandemic period. Results The results show that the possibility to change the behaviour matter. External restrictions and personal characteristics are the driving factors of the reduction in ones' daily trips. However, the estimation results do not show a strong correlation between the countries' restriction policy and the respondents' likelihood to adopt the new and online-based behaviours for any of the activities after the restriction period. Conclusion The acceptance and long-term adoption of the online alternatives for activities are correlated with the respondents' personality and socio-demographic group, highlighting the importance of promoting alternatives as a part of longer-term behavioural and lifestyle changes.",TRUE,location
R223,Transport Phenomena,R107744,The trade-off behaviours between virtual and physical activities during the first wave of the COVID-19 pandemic period,S535336,R107750,Country,R29998,Sweden,"Abstract Introduction The first wave of COVID-19 pandemic period has drastically changed people’s lives all over the world. To cope with the disruption, digital solutions have become more popular. However, the ability to adopt digitalised alternatives is different across socio-economic and socio-demographic groups. Objective This study investigates how individuals have changed their activity-travel patterns and internet usage during the first wave of the COVID-19 pandemic period, and which of these changes may be kept. Methods An empirical data collection was deployed through online forms. 781 responses from different countries (Italy, Sweden, India and others) have been collected, and a series of multivariate analyses was carried out. Two linear regression models are presented, related to the change of travel activities and internet usage, before and during the pandemic period. Furthermore, a binary regression model is used to examine the likelihood of the respondents to adopt and keep their behaviours beyond the pandemic period. Results The results show that the possibility to change the behaviour matter. External restrictions and personal characteristics are the driving factors of the reduction in ones' daily trips. However, the estimation results do not show a strong correlation between the countries' restriction policy and the respondents' likelihood to adopt the new and online-based behaviours for any of the activities after the restriction period. Conclusion The acceptance and long-term adoption of the online alternatives for activities are correlated with the respondents' personality and socio-demographic group, highlighting the importance of promoting alternatives as a part of longer-term behavioural and lifestyle changes.",TRUE,location
R342,Urban Studies,R108909,Conundrum or paradox: deconstructing the spurious case of water scarcity in the Himalayan Region through an institutional economics narrative,S496082,R108911,location,R108915,Darjeeling,"Water scarcity in mountain regions such as the Himalaya has been studied with a pre-existing notion of scarcity justified by decades of communities' suffering from physical water shortages combined by difficulties of access. The Eastern Himalayan Region (EHR) of India receives significantly high amounts of annual precipitation. Studies have nonetheless shown that this region faces a strange dissonance: an acute water scarcity in a supposedly ‘water-rich’ region. The main objective of this paper is to decipher various drivers of water scarcity by locating the contemporary history of water institutions within the development trajectory of the Darjeeling region, particularly Darjeeling Municipal Town in West Bengal, India. A key feature of the region's urban water governance that defines the water scarcity narrative is the multiplicity of water institutions and the intertwining of formal and informal institutions at various scales. These factors affect the availability of and basic access to domestic water by communities in various ways resulting in the creation of a preferred water bundle consisting of informal water markets over and above traditional sourcing from springs and the formal water supply from the town municipality.",TRUE,location
R342,Urban Studies,R109475,"“Hamro Jhora, Hamro Pani” (Our Spring, Our Water): Water and the Politics of Appropriation of ‘Commons’ in Darjeeling Town, India",S499628,R109477,location,R108915,Darjeeling,"Based on the study of Darjeeling Municipality, the paper engages with issues pertaining to understanding the matrixes of power relations involved in the supply of water in Darjeeling town in India. The discussions in the paper focuses on urbanization, the shrinking water resources, and increased demand for water on the one hand; and the role of local administration, the emergence of the water mafia, and the ‘Samaj’ (society) all contributing to a skewed and inequitable distribution of water and the assumption of proprietorship or the appropriation of water commons, culminating in the accentuation of water-rights deprivation in Darjeeling Municipal Area. HYDRO Nepal JournalJournal of Water Energy and EnvironmentIssue No: 22Page: 16-24Uploaded date: January 14, 2018",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535333,R74325,Country,R27640,Australia,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535331,R74325,Country,R29715,Brazil,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535334,R74325,Country,R76660,China,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74326,A mixed-methods analysis of mobility behavior changes in the COVID-19 era in a rural case study,S535322,R74329,Country,R68481,Germany,"Abstract Background As a reaction to the novel coronavirus disease (COVID-19), countries around the globe have implemented various measures to reduce the spread of the virus. The transportation sector is particularly affected by the pandemic situation. The current study aims to contribute to the empirical knowledge regarding the effects of the coronavirus situation on the mobility of people by (1) broadening the perspective to the mobility rural area’s residents and (2) providing subjective data concerning the perceived changes of affected persons’ mobility practices, as these two aspects have scarcely been considered in research so far. Methods To address these research gaps, a mixed-methods study was conducted that integrates a qualitative telephone interview study ( N = 15) and a quantitative household survey ( N = 301). The rural district of Altmarkkreis Salzwedel in Northern Germany was chosen as a model region. Results The results provide in-depth insights into the changing mobility practices of residents of a rural area during the legal restrictions to stem the spread of the virus. A high share of respondents (62.6%) experienced no changes in their mobility behavior due to the COVID-19 pandemic situation. However, nearly one third of trips were also cancelled overall. A modal shift was observed towards the reduction of trips by car and bus, and an increase of trips by bike. The share of trips by foot was unchanged. The majority of respondents did not predict strong long-term effects of the corona pandemic on their mobility behavior.",TRUE,location
R374,Urban Studies and Planning,R74330,Impact of SARS-CoV-2 on the mobility behaviour in Germany,S535321,R74333,Country,R29991,Germany,"Abstract Background The COVID-19 pandemic and the measures taken to combat it led to severe constraints for various areas of life, including mobility. To study the effects of this disruptive situation on the mobility behaviour of entire subgroups, and how they shape their mobility in reaction to the special circumstances, can help to better understand, how people react to external changes. Methodology Aim of the study presented in this article was to investigate to what extent, how and in what areas mobility behaviour has changed during the outbreak of SARS-CoV-2 in Germany. In addition, a focus was put on the comparison of federal states with and without lockdown in order to investigate a possible contribution of this measure to changes in mobility. We asked respondents via an online survey about their trip purposes and trip frequency, their choice of transport mode and the reasons for choosing it in the context of the COVID-19 crisis. For the analyses presented in this paper, we used the data of 4157survey participants (2512 without lockdown, 1645 with lockdown). Results The data confirmed a profound impact on the mobility behaviour with a shift away from public transport and increases in car usage, walking and cycling. Comparisons of federal states with and without lockdown revealed only isolated differences. It seems that, even if the lockdown had some minor effects, its role in the observed behavioural changes was minimal.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535335,R74325,Country,R34216,Ghana,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S661192,R74325,Country,R110119,India,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535327,R74325,Country,R110023,Italy,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535330,R74325,Country,R29996,Norway,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535332,R74325,Country,R78142,South Africa,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,location
R374,Urban Studies and Planning,R146416,Collaborating Filtering Community Image Recommendation System Based on Scene,S586215,R146418,has been evaluated in the City,R146420,Wenzhou,"With the advancement of smart city, the development of intelligent mobile terminal and wireless network, the traditional text information service no longer meet the needs of the community residents, community image service appeared as a new media service. “There are pictures of the truth” has become a community residents to understand and master the new dynamic community, image information service has become a new information service. However, there are two major problems in image information service. Firstly, the underlying eigenvalues extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user’s understanding; secondly, in community life of the image data increasing quickly, it is difficult to find their own interested image data. Aiming at the two problems, this paper proposes a unified image semantic scene model to express the image content. On this basis, a collaborative filtering recommendation model of fusion scene semantics is proposed. In the recommendation model, a comprehensiveness and accuracy user interest model is proposed to improve the recommendation quality. The results of the present study have achieved good results in the pilot cities of Wenzhou and Yan'an, and it is applied normally.",TRUE,location
R57,Virology,R36146,COVID-19 outbreak in Algeria: A mathematical model to predict the incidence,S123880,R36147,location,R36145,Algeria,"Abstract Introduction Since December 29, 2019 a pandemic of new novel coronavirus-infected pneumonia named COVID-19 has started from Wuhan, China, has led to 254 996 confirmed cases until midday March 20, 2020. Sporadic cases have been imported worldwide, in Algeria, the first case reported on February 25, 2020 was imported from Italy, and then the epidemic has spread to other parts of the country very quickly with 139 confirmed cases until March 21, 2020. Methods It is crucial to estimate the cases number growth in the early stages of the outbreak, to this end, we have implemented the Alg-COVID-19 Model which allows to predict the incidence and the reproduction number R0 in the coming months in order to help decision makers. The Alg-COVIS-19 Model initial equation 1, estimates the cumulative cases at t prediction time using two parameters: the reproduction number R0 and the serial interval SI. Results We found R0=2.55 based on actual incidence at the first 25 days, using the serial interval SI= 4,4 and the prediction time t=26. The herd immunity HI estimated is HI=61%. Also, The Covid-19 incidence predicted with the Alg-COVID-19 Model fits closely the actual incidence during the first 26 days of the epidemic in Algeria Fig. 1.A. which allows us to use it. According to Alg-COVID-19 Model, the number of cases will exceed 5000 on the 42 th day (April 7 th ) and it will double to 10000 on 46th day of the epidemic (April 11 th ), thus, exponential phase will begin (Table 1; Fig.1.B) and increases continuously until reaching à herd immunity of 61% unless serious preventive measures are considered. Discussion This model is valid only when the majority of the population is vulnerable to COVID-19 infection, however, it can be updated to fit the new parameters values.",TRUE,location
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694394,R175286,has location,L466945,Austria,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,location
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S134461,R44139,has location,R44148,China,"Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. 
The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,location
R57,Virology,R12231,Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions,S18599,R12232,location,R12230,China,"Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.",TRUE,location
R57,Virology,R12235,Estimating the effective reproduction number of the 2019-nCoV in China,S18623,R12236,location,R12230,China,We estimate the effective reproduction number for 2019-nCoV based on the daily reported cases from China CDC. The results indicate that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate.,TRUE,location
R57,Virology,R12237,"Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak",S18668,R12240,location,R12230,China,"Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( γ ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.",TRUE,location
R57,Virology,R12245,Estimation of the Transmission Risk of 2019-nCov and Its Implication for Public Health Interventions,S18710,R12246,location,R12230,China,"English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction. 
Mandarin Abstract (translated): Background: Since the first pneumonia case appeared in Wuhan, China, the novel coronavirus (2019-nCov) infection has rapidly spread to other provinces and neighbouring countries. Estimating the basic reproduction number by means of mathematical modelling can help determine the potential and severity of an outbreak and provide key information for identifying the type and intensity of disease interventions. Methods: A deterministic compartmental model was designed based on the clinical progression of the disease, the epidemiological status of individuals, and the intervention measures. Results: Estimates based on the likelihood function and model analysis indicate that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses show that interventions such as intensive contact tracing followed by quarantine and isolation can effectively reduce the control reproduction number and transmission risk; the effect of the Wuhan lockdown on 2019-nCov infection in Beijing was almost equivalent to increasing quarantine by a baseline value of 100 thousand. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities contribute to the prevention and control of the 2019-nCov infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (from January 23, 2020) with a markedly low peak value. With travel restriction (i.e., no imported exposed individuals entering Beijing), the number of infected individuals in Beijing over 7 days would decrease by 91.14% compared with the scenario of no travel restriction.",TRUE,location
R57,Virology,R12247,"Early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia",S18722,R12248,location,R12230,China,"Abstract Background The initial cases of novel coronavirus (2019-nCoV)–infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)",TRUE,location
R57,Virology,R36143,A Cybernetics-based Dynamic Infection Model for Analyzing SARS-COV-2 Infection Stability and Predicting Uncontrollable Risks,S123870,R36144,location,R12230,China,"Since December 2019, COVID-19 has raged in Wuhan and subsequently all over China and the world. We propose a Cybernetics-based Dynamic Infection Model (CDIM) to the dynamic infection process with a probability distributed incubation delay and feedback principle. Reproductive trends and the stability of the SARS-COV-2 infection in a city can then be analyzed, and the uncontrollable risks can be forecasted before they really happen. The infection mechanism of a city is depicted using the philosophy of cybernetics and approaches of the control engineering. Distinguished with other epidemiological models, such as SIR, SEIR, etc., that compute the theoretical number of infected people in a closed population, CDIM considers the immigration and emigration population as system inputs, and administrative and medical resources as dynamic control variables. The epidemic regulation can be simulated in the model to support the decision-making for containing the outbreak. City case studies are demonstrated for verification and validation.",TRUE,location
R57,Virology,R37008,Estimation of the Transmission Risk of the 2019-nCoV and Its Implication for Public Health Interventions,S124072,R37009,location,R12230,China,"Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71–7.23). Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.",TRUE,location
R57,Virology,R41016,Unique epidemiological and clinical features of the emerging 2019 novel coronavirus pneumonia (COVID-19) implicate special control measures,S144537,R41023,location,R12230,China,"By 27 February 2020, the outbreak of coronavirus disease 2019 (COVID‐19) caused 82 623 confirmed cases and 2858 deaths globally, more than severe acute respiratory syndrome (SARS) (8273 cases, 775 deaths) and Middle East respiratory syndrome (MERS) (1139 cases, 431 deaths) caused in 2003 and 2013, respectively. COVID‐19 has spread to 46 countries internationally. Total fatality rate of COVID‐19 is estimated at 3.46% by far based on published data from the Chinese Center for Disease Control and Prevention (China CDC). Average incubation period of COVID‐19 is around 6.4 days, ranges from 0 to 24 days. The basic reproductive number (R0) of COVID‐19 ranges from 2 to 3.5 at the early phase regardless of different prediction models, which is higher than SARS and MERS. A study from China CDC showed majority of patients (80.9%) were considered asymptomatic or mild pneumonia but released large amounts of viruses at the early phase of infection, which posed enormous challenges for containing the spread of COVID‐19. Nosocomial transmission was another severe problem. A total of 3019 health workers were infected by 12 February 2020, which accounted for 3.83% of total number of infections, and extremely burdened the health system, especially in Wuhan. Limited epidemiological and clinical data suggest that the disease spectrum of COVID‐19 may differ from SARS or MERS. We summarize latest literatures on genetic, epidemiological, and clinical features of COVID‐19 in comparison to SARS and MERS and emphasize special measures on diagnosis and potential interventions. This review will improve our understanding of the unique features of COVID‐19 and enhance our control measures in the future.",TRUE,location
R57,Virology,R41169,Statistics based predictions of coronavirus 2019-nCoV spreading in mainland China,S130691,R41172,location,R12230,China,"Background. The epidemic outbreak cased by coronavirus 2019-nCoV is of great interest to researches because of the high rate of spread of the infection and the significant number of fatalities. A detailed scientific analysis of the phenomenon is yet to come, but the public is already interested in the questions of the duration of the epidemic, the expected number of patients and deaths. For long time predictions, the complicated mathematical models are necessary which need many efforts for unknown parameters identification and calculations. In this article, some preliminary estimates will be presented. Objective. Since the reliable long time data are available only for mainland China, we will try to predict the epidemic characteristics only in this area. We will estimate some of the epidemic characteristics and present the most reliable dependences for victim numbers, infected and removed persons versus time. Methods. In this study we use the known SIR model for the dynamics of an epidemic, the known exact solution of the linear equations and statistical approach developed before for investigation of the children disease, which occurred in Chernivtsi (Ukraine) in 1988-1989. Results. The optimal values of the SIR model parameters were identified with the use of statistical approach. The numbers of infected, susceptible and removed persons versus time were predicted. Conclusions. Simple mathematical model was used to predict the characteristics of the epidemic caused by coronavirus 2019-nCoV in mainland China. The further research should focus on updating the predictions with the use of fresh data and using more complicated mathematical models.",TRUE,location
R57,Virology,R44799,"Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus–Infected Pneumonia",S137185,R44801,location,R44804,China,"Abstract Background The initial cases of novel coronavirus (2019-nCoV)–infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)",TRUE,location
R57,Virology,R44806,Estimation of the Transmission Risk of 2019-nCov and Its Implication for Public Health Interventions,S137209,R44808,location,R44804,China,"English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction. 
Mandarin Abstract (translated): Background: Since the first pneumonia case appeared in Wuhan, China, the novel coronavirus (2019-nCov) infection has rapidly spread to other provinces and neighbouring countries. Estimating the basic reproduction number by means of mathematical modelling can help determine the potential and severity of an outbreak and provide key information for identifying the type and intensity of disease interventions. Methods: A deterministic compartmental model was designed based on the clinical progression of the disease, the epidemiological status of individuals, and the intervention measures. Results: Estimates based on the likelihood function and model analysis indicate that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses show that interventions such as intensive contact tracing followed by quarantine and isolation can effectively reduce the control reproduction number and transmission risk; the effect of the Wuhan lockdown on 2019-nCov infection in Beijing was almost equivalent to increasing quarantine by a baseline value of 100 thousand. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities contribute to the prevention and control of the 2019-nCov infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (from January 23, 2020) with a markedly low peak value. With travel restriction (i.e., no imported exposed individuals entering Beijing), the number of infected individuals in Beijing over 7 days would decrease by 91.14% compared with the scenario of no travel restriction.",TRUE,location
R57,Virology,R44836,Estimating the effective reproduction number of the 2019-nCoV in China,S137322,R44838,location,R44804,China,We estimate the effective reproduction number for 2019-nCoV based on the daily reported cases from China CDC. The results indicate that 2019-nCoV has a higher effective reproduction number than SARS with a comparable fatality rate.,TRUE,location
R57,Virology,R44847,Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions,S137367,R44852,location,R44804,China,"Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.",TRUE,location
R57,Virology,R44918,Estimation of the Transmission Risk of the 2019-nCoV and Its Implication for Public Health Interventions,S137616,R44921,location,R44804,China,"Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71–7.23). Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.",TRUE,location
R57,Virology,R178482,"Viral load dynamics and disease severity in patients infected with SARS-CoV-2 in Zhejiang province, China, January-March 2020: retrospective cohort study",S700077,R178498,Location,R178501,China,"Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.",TRUE,location
R57,Virology,R41605,"Serological and molecular findings during SARS-CoV-2 infection: the first case study in Finland, January to February 2020",S131432,R41607,has location,R28949,Finland,"The first case of coronavirus disease (COVID-19) in Finland was confirmed on 29 January 2020. No secondary cases were detected. We describe the clinical picture and laboratory findings 3–23 days since the first symptoms. The SARS-CoV-2/Finland/1/2020 virus strain was isolated, the genome showing a single nucleotide substitution to the reference strain from Wuhan. Neutralising antibody response appeared within 9 days along with specific IgM and IgG response, targeting particularly nucleocapsid and spike proteins.",TRUE,location
R57,Virology,R41005,Mechanistic-statistical SIR modelling for early estimation of the actual number of cases and mortality rate from COVID-19,S130021,R41006,location,R27555,France,"The first cases of COVID-19 in France were detected on January 24, 2020. The number of screening tests carried out and the methodology used to target the patients tested do not allow for a direct computation of the real number of cases and the mortality rate. In this report, we develop a 'mechanistic-statistical' approach coupling a SIR ODE model describing the unobserved epidemiological dynamics, a probabilistic model describing the data acquisition process and a statistical inference method. The objective of this model is not to make forecasts but to estimate the real number of people infected with COVID-19 during the observation window in France and to deduce the mortality rate associated with the epidemic. Main results. The actual number of infected cases in France is probably much higher than the observations: we find here a factor x 15 (95%-CI: 1.5-11.7), which leads to a 5.2/1000 mortality rate (95%-CI: 1.5/1000-11.7/1000) at the end of the observation period. We find a R0 of 4.8, a high value which may be linked to the long viral shedding period of 20 days.",TRUE,location
R57,Virology,R41252,Spread of SARS-CoV-2 in the Icelandic Population,S130903,R41255,has location,R9244,Iceland,"Abstract Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, the virus that causes Covid-19, enters and spreads in a population. Methods We targeted testing to persons living in Iceland who were at high risk for infection (mainly those who were symptomatic, had recently traveled to high-risk countries, or had contact with infected persons). We also carried out population screening using two strategies: issuing an open invitation to 10,797 persons and sending random invitations to 2283 persons. We sequenced SARS-CoV-2 from 643 samples. Results As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening. Conclusions In a population-based study in Iceland, children under 10 years of age and females had a lower incidence of SARS-CoV-2 infection than adolescents or adults and males. The proportion of infected persons identified through population screening did not change substantially during the screening period, which was consistent with a beneficial effect of containment efforts. (Funded by deCODE Genetics–Amgen.)",TRUE,location
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S134463,R44139,has location,R44149,India,"Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,location
R57,Virology,R36151,Effects of voluntary event cancellation and school closure as countermeasures against COVID-19 outbreak in Japan,S123909,R36152,location,R27513,Japan,"Background To control the COVID-19 outbreak in Japan, sports and entertainment events were canceled and schools were closed throughout Japan from February 26 through March 19. That policy has been designated as voluntary event cancellation and school closure (VECSC). Object This study assesses VECSC effectiveness based on predicted outcomes. Methods A simple susceptible–infected–recovered model was applied to data of patients with symptoms in Japan during January 14 through March 26. The respective reproduction numbers for periods before VECSC (R0), during VECSC (Re), and after VECSC (Ra) were estimated. Results Results suggest R0 before VECSC as 2.534 [2.449, 2.598], Re during VECSC as 1.077 [0.948, 1.228], and Ra after VECSC as 4.455 [3.615, 5.255]. Discussion and conclusion Results demonstrated that VECSC can reduce COVID-19 infectiousness considerably, but after VECSC, the value of the reproduction number rose to exceed 4.0.",TRUE,location
R57,Virology,R44793,Effects of voluntary event cancellation and school closure as countermeasures against COVID−19 outbreak in Japan,S137121,R44794,location,R44797,Japan,"Background To control the COVID-19 outbreak in Japan, sports and entertainment events were canceled and schools were closed throughout Japan from February 26 through March 19. That policy has been designated as voluntary event cancellation and school closure (VECSC). Object This study assesses VECSC effectiveness based on predicted outcomes. Methods A simple susceptible–infected–recovered model was applied to data of patients with symptoms in Japan during January 14 through March 26. The respective reproduction numbers for periods before VECSC (R0), during VECSC (Re), and after VECSC (Ra) were estimated. Results Results suggest R0 before VECSC as 2.534 [2.449, 2.598], Re during VECSC as 1.077 [0.948, 1.228], and Ra after VECSC as 4.455 [3.615, 5.255]. Discussion and conclusion Results demonstrated that VECSC can reduce COVID-19 infectiousness considerably, but after VECSC, the value of the reproduction number rose to exceed 4.0.",TRUE,location
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123620,R36110,location,R36108,Singapore,"Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,location
R57,Virology,R36138,Estimating the generation interval for COVID-19 based on symptom onset data,S123827,R36140,location,R36108,Singapore,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,location
R57,Virology,R44731,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S136923,R44732,location,R43052,Singapore,"Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,location
R57,Virology,R44776,Estimating the generation interval for COVID-19 based on symptom onset data,S137074,R44781,location,R43052,Singapore,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,location
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S164009,R36110,Study location,R36108,Singapore,"Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,location
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694501,R175294,has location,L467044,Sudan,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,location
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694163,R175262,has location,L466738,Sweden,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,location
R57,Virology,R36130,Assessing the plausibility of subcritical transmission of 2019-nCoV in the United States,S123764,R36131,location,R416,United States,"Abstract: The 2019-nCoV outbreak has raised concern of global spread. While person-to-person transmission within the Wuhan district has led to a large outbreak, the transmission potential outside of the region remains unclear. Here we present a simple approach for determining whether the upper limit of the confidence interval for the reproduction number exceeds one for transmission in the United States, which would allow endemic transmission. As of February 7, 2020, the number of cases in the United states support subcritical transmission, rather than ongoing transmission. However, this conclusion can change if pre-symptomatic cases resulting from human-to-human transmission have not yet been identified.",TRUE,location
R57,Virology,R12233,Early transmissibility assessment of a novel coronavirus in Wuhan,S18612,R12234,location,R12218,Wuhan,"Between December 1, 2019 and January 26, 2020, nearly 3000 cases of respiratory illness caused by a novel coronavirus originating in Wuhan, China have been reported. In this short analysis, we combine publicly available cumulative case data from the ongoing outbreak with phenomenological modeling methods to conduct an early transmissibility assessment. Our model suggests that the basic reproduction number associated with the outbreak (at time of writing) may range from 2.0 to 3.1. Though these estimates are preliminary and subject to change, they are consistent with previous findings regarding the transmissibility of the related SARS-Coronavirus and indicate the possibility of epidemic potential.",TRUE,location
R57,Virology,R12241,Report 3: transmissibility of 2019- nCoV,S18680,R12242,location,R12218,Wuhan,"Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable – with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity – including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible.",TRUE,location
R57,Virology,R44819,"Report 3: Transmissibility of 2019-nCoV. 2020. WHO Collaborating Centre for Infectious Disease Modelling, MRC Centre for Global Infectious Disease Analysis",S137253,R44820,location,R44823,Wuhan,"Self-sustaining human-to-human transmission of the novel coronavirus (2019-nCov) is the only plausible explanation of the scale of the outbreak in Wuhan. We estimate that, on average, each case infected 2.6 (uncertainty range: 1.5-3.5) other people up to 18 January 2020, based on an analysis combining our past estimates of the size of the outbreak in Wuhan with computational modelling of potential epidemic trajectories. This implies that control measures need to block well over 60% of transmission to be effective in controlling the outbreak. It is likely, based on the experience of SARS and MERS-CoV, that the number of secondary cases caused by a case of 2019-nCoV is highly variable – with many cases causing no secondary infections, and a few causing many. Whether transmission is continuing at the same rate currently depends on the effectiveness of current control measures implemented in China and the extent to which the populations of affected areas have adopted risk-reducing behaviours. In the absence of antiviral drugs or vaccines, control relies upon the prompt detection and isolation of symptomatic cases. It is unclear at the current time whether this outbreak can be contained within China; uncertainties include the severity spectrum of the disease caused by this virus and whether cases with relatively mild symptoms are able to transmit the virus efficiently. Identification and testing of potential cases need to be as extensive as is permitted by healthcare and diagnostic testing capacity – including the identification, testing and isolation of suspected cases with only mild to moderate disease (e.g. influenza-like illness), when logistically feasible.",TRUE,location
R57,Virology,R44842,"Early Transmissibility Assessment of a Novel Coronavirus in Wuhan, China",S137339,R44843,location,R44823,Wuhan,"Between December 1, 2019 and January 26, 2020, nearly 3000 cases of respiratory illness caused by a novel coronavirus originating in Wuhan, China have been reported. In this short analysis, we combine publicly available cumulative case data from the ongoing outbreak with phenomenological modeling methods to conduct an early transmissibility assessment. Our model suggests that the basic reproduction number associated with the outbreak (at time of writing) may range from 2.0 to 3.1. Though these estimates are preliminary and subject to change, they are consistent with previous findings regarding the transmissibility of the related SARS-Coronavirus and indicate the possibility of epidemic potential.",TRUE,location
R57,Virology,R175276,Development of Real-Time Molecular Assays for the Detection of Wesselsbron Virus,S694332,R175278,has location,L466891,Africa,"Wesselsbron is a neglected, mosquito-borne zoonotic disease endemic to Africa. The virus is mainly transmitted by the mosquitoes of the Aedes genus and primarily affects domestic livestock species with teratogenic effects but can jump to humans. Although no major outbreak or fatal case in humans has been reported as yet worldwide, a total of 31 acute human cases of Wesselsbron infection have been previously described since its first isolation in 1955. However, most of these cases were reported from Sub-Saharan Africa where resources are limited and a lack of diagnostic means exists. We describe here two molecular diagnostic tools suitable for Wesselsbron virus detection. The newly established reverse transcription-quantitative polymerase chain reaction and reverse-transcription-recombinase polymerase amplification assays are highly specific and repeatable, and exhibit good agreement with the reference assay on the samples tested. The validation on clinical and veterinary samples shows that they can be accurately used for Wesselsbron virus detection in public health activities and the veterinary field. Considering the increasing extension of Aedes species worldwide, these new assays could be useful not only in laboratory studies for Wesselsbron virus, but also in routine surveillance activities for zoonotic arboviruses and could be applied in well-equipped central laboratories or in remote areas in Africa, regarding the reverse-transcription-recombinase polymerase amplification assay.",TRUE,location
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694496,R175294,has location,L467039,Africa,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,location
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694170,R175262,has location,L466745,Europe,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,location
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S134462,R44139,has location,R41015,"Wuhan, China","Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. 
The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,location
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694480,R175294,has location,L467023,Zaire,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,location
R57,Virology,R74055,Case fatality risk of the SARS-CoV-2 variant of concern B.1.1.7 in England,S340292,R74056,location,R74057,England,The B.1.1.7 variant of concern (VOC) is increasing in prevalence across Europe. Accurate estimation of disease severity associated with this VOC is critical for pandemic planning. We found increased risk of death for VOC compared with non-VOC cases in England (HR: 1.67 (95% CI: 1.34 - 2.09; P<.0001). Absolute risk of death by 28-days increased with age and comorbidities. VOC has potential to spread faster with higher mortality than the pandemic to date.,TRUE,location
,Occupational psychology,R172966,Acceptance of Workplace Bullying Behaviors and Job Satisfaction: Moderated Mediation Analysis With Coping Self-Efficacy and Exposure to Bullying,S691048,R173008,Country ,L464025,Serbia,"Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire – Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire – Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.",TRUE,location
R123,Analytical Chemistry,R139340,Cu2O nanorods modified by reduced graphene oxide for NH3 sensing at room temperature,S555540,R139342,Sensing environment,L390732,Air,"In this work, Cu2O nanorods modified by reduced graphene oxide (rGO) were produced via a two-step synthesis method. CuO rods were firstly prepared in graphene oxide (GO) solution using cetyltrimethyl ammonium bromide (CTAB) as a soft template by the microwave-assisted hydrothermal method, accompanied with the reduction of GO. The complexes were subsequently annealed and Cu2O nanorods/rGO composites were obtained. The as-prepared composites were evaluated using various characterization methods, and were utilized as sensing materials. The room-temperature NH3 sensing properties of a sensor based on the Cu2O nanorods/rGO composites were systematically investigated. The sensor exhibited an excellent sensitivity and linear response toward NH3 at room temperature. Furthermore, the sensor could be easily recovered to its initial state in a short time after exposure to fresh air. The sensor also showed excellent repeatability and selectivity to NH3. The remarkably enhanced NH3-sensing performances could be attributed to the improved conductivity, catalytic activity for the oxygen reduction reaction and increased gas adsorption in the unique hybrid composites. Such composites showed great potential for manufacturing a new generation of low-power and portable ammonia sensors.",TRUE,noun
R123,Analytical Chemistry,R139346,Layer-by-Layer Assembled Conductive Metal-Organic Framework Nanofilms for Room-Temperature Chemiresistive Sensing,S555593,R139350,Architecture,R139335,Chemiresistor,"The utility of electronically conductive metal-organic frameworks (EC-MOFs) in high-performance devices has been limited to date by a lack of high-quality thin film. The controllable thin-film fabrication of an EC-MOF, Cu3 (HHTP)2 , (HHTP=2,3,6,7,10,11-hexahydroxytriphenylene), by a spray layer-by-layer liquid-phase epitaxial method is reported. The Cu3 (HHTP)2 thin film can not only be precisely prepared with thickness increment of about 2 nm per growing cycle, but also shows a smooth surface, good crystallinity, and high orientation. The chemiresistor gas sensor based on this high-quality thin film is one of the best room-temperature sensors for NH3 among all reported sensors based on various materials.",TRUE,noun
R123,Analytical Chemistry,R139374,CuO Nanosheets for Sensitive and Selective Determination of H2S with High Recovery Ability,S555703,R139376,Sensing material,L390836,CuO,"In this article, cupric oxide (CuO) leafletlike nanosheets have been synthesized by a facile, low-cost, and surfactant-free method, and they have further been successfully developed for sensitive and selective determination of hydrogen sulfide (H2S) with high recovery ability. The experimental results have revealed that the sensitivity and recovery time of the present H2S gas sensor are strongly dependent on the working temperature. The best H2S sensing performance has been achieved with a low detection limit of 2 ppb and broad linear range from 30 ppb to 1.2 ppm. The gas sensor is reversible, with a quick response time of 4 s and a short recovery time of 9 s. In addition, negligible responses can be observed exposed to 100-fold concentrations of other gases which may exist in the atmosphere such as nitrogen (N2), oxygen (O2), nitric oxide (NO), cabon monoxide (CO), nitrogen dioxide (NO2), hydrogen (H2), and so on, indicating relatively high selectivity of the present H2S sensor. The H2S sensor based on t...",TRUE,noun
R123,Analytical Chemistry,R139324,Graphene Nanomesh As Highly Sensitive Chemiresistor Gas Sensor,S560960,R140513,Sensing material,R140512,Graphene,"Graphene is a one atom thick carbon allotrope with all surface atoms that has attracted significant attention as a promising material as the conduction channel of a field-effect transistor and chemical field-effect transistor sensors. However, the zero bandgap of semimetal graphene still limits its application for these devices. In this work, ethanol-chemical vapor deposition (CVD) of a grown p-type semiconducting large-area monolayer graphene film was patterned into a nanomesh by the combination of nanosphere lithography and reactive ion etching and evaluated as a field-effect transistor and chemiresistor gas sensors. The resulting neck-width of the synthesized nanomesh was about ∼20 nm and was comprised of the gap between polystyrene (PS) spheres that was formed during the reactive ion etching (RIE) process. The neck-width and the periodicities of the graphene nanomesh (GNM) could be easily controlled depending on the duration/power of the RIE and the size of the PS nanospheres. The fabricated GNM transistor device exhibited promising electronic properties featuring a high drive current and an I(ON)/I(OFF) ratio of about 6, significantly higher than its film counterpart. Similarly, when applied as a chemiresistor gas sensor at room temperature, the graphene nanomesh sensor showed excellent sensitivity toward NO(2) and NH(3), significantly higher than their film counterparts. The ethanol-based graphene nanomesh sensors exhibited sensitivities of about 4.32%/ppm in NO(2) and 0.71%/ppm in NH(3) with limits of detection of 15 and 160 ppb, respectively. Our demonstrated studies on controlling the neck width of the nanomesh would lead to further improvement of graphene-based transistors and sensors.",TRUE,noun
R123,Analytical Chemistry,R140509,Sub-ppt gas detection with pristine graphene,S560947,R140511,Sensing material,R140512,Graphene,"Graphene is widely regarded as one of the most promising materials for sensor applications. Here, we demonstrate that a pristine graphene can detect gas molecules at extremely low concentrations with detection limits as low as 158 parts-per-quadrillion (ppq) for a range of gas molecules at room temperature. The unprecedented sensitivity was achieved by applying our recently developed concept of continuous in situ cleaning of the sensing material with ultraviolet light. The simplicity of the concept, together with graphene’s flexibility to be used on various platforms, is expected to intrigue more investigations to develop ever more sensitive sensors.",TRUE,noun
R123,Analytical Chemistry,R140731,Hydrogen Sensing Using Pd-Functionalized Multi-Layer Graphene Nanoribbon Networks,S562261,R140733,Analyte,R140734,Hydrogen,"Hydrogen Sensing Using Pd-Functionalized Multi-Layer Graphene Nanoribbon Networks. By Jason L. Johnson, Ashkan Behnam, S. J. Pearton, and Ant Ural. Sensing of gas molecules is critical in many fields including environmental monitoring, transportation, defense, space missions, energy, agriculture, and medicine. Solid state gas sensors have been developed for many of these applications. [1–3] More recently, chemical gas sensors based on nanoscale materials, such as carbon nanotubes and semiconductor nanowires, have attracted significant research attention due to their naturally small size, large surface-to-volume ratio, low power consumption, room temperature operation, and simple fabrication. [4–6]",TRUE,noun
R123,Analytical Chemistry,R140743,Flower-like Palladium Nanoclusters Decorated Graphene Electrodes for Ultrasensitive and Flexible Hydrogen Gas Sensing,S562326,R140745,Analyte,R140734,Hydrogen,"Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.",TRUE,noun
R123,Analytical Chemistry,R140760,Hydrogen gas sensor based on metal oxide nanoparticles decorated graphene transistor,S562416,R140762,Analyte,R140734,Hydrogen,"In this work, in order to enhance the performance of graphene gas sensors, graphene and metal oxide nanoparticles (NPs) are combined to be utilized for high selectivity and fast response gas detection. Whether at the relatively optimal temperature or even room temperature, our gas sensors based on graphene transistors, decorated with SnO2 NPs, exhibit fast response and short recovery times (∼1 seconds) at 50 °C when the hydrogen concentration is 100 ppm. Specifically, X-ray photoelectron spectroscopy and conductive atomic force microscopy are employed to explore the interface properties between graphene and SnO2 NPs. Through the complimentary characterization, a mechanism based on charge transfer and band alignment is elucidated to explain the physical originality of these graphene gas sensors: high carrier mobility of graphene and small energy barrier between graphene and SnO2 NPs have ensured a fast response and a high sensitivity and selectivity of the devices. Generally, these gas sensors will facilitate the rapid development of next-generation hydrogen gas detection.",TRUE,noun
R77,Animal Sciences,R44454,Disposition and clinical use of bromide in cats,S135588,R44500,Most common adverse effects,L82837,Cough,"OBJECTIVE To establish a dosing regimen for potassium bromide and evaluate use of bromide to treat spontaneous seizures in cats. DESIGN Prospective and retrospective studies. ANIMALS 7 healthy adult male cats and records of 17 cats with seizures. PROCEDURE Seven healthy cats were administered potassium bromide (15 mg/kg [6.8 mg/lb], p.o., q 12 h) until steady-state concentrations were reached. Serum samples for pharmacokinetic analysis were obtained weekly until bromide concentrations were not detectable. Clinical data were obtained from records of 17 treated cats. RESULTS In the prospective study, maximum serum bromide concentration was 1.1 +/- 0.2 mg/mL at 8 weeks. Mean disappearance half-life was 1.6 +/- 0.2 weeks. Steady state was achieved at a mean of 5.3 +/-1.1 weeks. No adverse effects were detected and bromide was well tolerated. In the retrospective study, administration of bromide (n = 4) or bromide and phenobarbital (3) was associated with eradication of seizures in 7 of 15 cats (serum bromide concentration range, 1.0 to 1.6 mg/mL); however, bromide administration was associated with adverse effects in 8 of 16 cats. Coughing developed in 6 of these cats, leading to euthanasia in 1 cat and discontinuation of bromide administration in 2 cats. CONCLUSIONS AND CLINICAL RELEVANCE Therapeutic concentrations of bromide are attained within 2 weeks in cats that receive 30 mg/kg/d (13.6 mg/lb/d) orally. Although somewhat effective in seizure control, the incidence of adverse effects may not warrant routine use of bromide for control of seizures in cats.",TRUE,noun
R77,Animal Sciences,R44477,Bromide-associated lower airway disease: a retrospective study of seven cats,S135599,R44501,Most common adverse effects,L82846,Cough,"Seven cats were presented for mild-to-moderate cough and/or dyspnoea after starting bromide (Br) therapy for neurological diseases. The thoracic auscultation was abnormal in three cats showing increased respiratory sounds and wheezes. Haematology revealed mild eosinophilia in one cat. The thoracic radiographs showed bronchial patterns with peribronchial cuffing in most of them. Bronchoalveolar lavage performed in two cats revealed neutrophilic and eosinophilic inflammation. Histopathology conducted in one cat showed endogenous lipid pneumonia (EnLP). All cats improved with steroid therapy after Br discontinuation. Five cats were completely weaned off steroids, with no recurrence of clinical signs. In one cat, the treatment was discontinued despite persistent clinical signs. The cat presenting with EnLP developed secondary pneumothorax and did not recover. Br-associated lower airway disease can appear in cats after months of treatment and clinical improvement occurs only after discontinuing Br therapy.",TRUE,noun
R77,Animal Sciences,R44466,Fulminant hepatic failure associated with oral administration of diazepam in 11 cats,S135766,R44513,AED evaluated,L82989,Diazepam,"Acute fulminant hepatic necrosis was associated with repeated oral administration of diazepam (1.25 to 2 mg, PO, q 24 or 12 h), prescribed for behavioral modification or to facilitate urination. Five of 11 cats became lethargic, atactic, and anorectic within 96 hours of initial treatment. All cats became jaundiced during the first 11 days of illness. Serum biochemical analysis revealed profoundly high alanine transaminase and aspartate transaminase activities. Results of coagulation tests in 3 cats revealed marked abnormalities. Ten cats died or were euthanatized within 15 days of initial drug administration, and only 1 cat survived. Histologic evaluation of hepatic tissue specimens from each cat revealed florid centrilobular hepatic necrosis, profound biliary ductule proliferation and hyperplasia, and suppurative intraductal inflammation. Idiosyncratic hepatotoxicosis was suspected because of the rarity of this condition. Prior sensitization to diazepam was possible in only 1 cat, and consistent risk factors that could explain susceptibility to drug toxicosis were not identified. On the basis of the presumption that diazepam was hepatotoxic in these cats, an increase in serum transaminase activity within 5 days of treatment initiation indicates a need to suspend drug administration and to provide supportive care.",TRUE,noun
R77,Animal Sciences,R44446,Pharmacokinetics and toxicity of zonisamide in cats,S135855,R44520,AED evaluated,L83064,Zonisamide,"With the eventual goal of making zonisamide (ZNS), a relatively new antiepileptic drug, available for the treatment of epilepsy in cats, the pharmacokinetics after a single oral administration at 10 mg/kg and the toxicity after 9-week daily administration of 20 mg/kg/day of ZNS were studied in healthy cats. Pharmacokinetic parameters obtained with a single administration of ZNS at 10 mg/kg were as follows: Cmax = 13.1 μg/ml; Tmax = 4.0 h; T1/2 = 33.0 h; areas under the curves (AUCs) = 720.3 μg/ml·h (values represent the medians). The study with daily administrations revealed that the toxicity of ZNS was comparatively low in cats, suggesting that it may be an available drug for cats. However, half of the cats that were administered 20 mg/kg/day daily showed adverse reactions such as anorexia, diarrhoea, vomiting, somnolence and locomotor ataxia.",TRUE,noun
R133,Artificial Intelligence,R6571,"Trainable, scalable summarization using robust NLP and machine learning",S8233,R6572,evaluation,R6573,Evaluation,"We describe a trainable and scalable summarization system which utilizes features derived from information retrieval, information extraction, and NLP techniques and on-line resources. The system combines these features using a trainable feature combiner learned from summary examples through a machine learning algorithm. We demonstrate system scalability by reporting results on the best combination of summarization features for different document sources. We also present preliminary results from a task-based evaluation on summarization output usability.",TRUE,noun
R133,Artificial Intelligence,R6578,Discourse Trees Are Good Indicators of Importance in Text,S8259,R6579,evaluation,R6580,Evaluation,"Researchers in computational linguistics have long speculated that the nuclei of the rhetorical structure tree of a text form an adequate ""summary"" of the text for which that tree was built. However, to my knowledge, there has been no experiment to confirm how valid this speculation really is. In this paper, I describe a psycholinguistic experiment that shows that the concepts of discourse structure and nuclearity can be used effectively in text summarization. More precisely, I show that there is a strong correlation between the nuclei of the discourse structure of a text and what readers perceive to be the most important units in that text. In addition, I propose and evaluate the quality of an automatic, discourse-based summarization system that implements the methods that were validated by the psycholinguistic experiment. The evaluation indicates that although the system does not match yet the results that would be obtained if discourse trees had been built manually, it still significantly outperforms both a baseline algorithm and Microsoft's Office97 summarizer. 
1 Motivation Traditionally, previous approaches to automatic text summarization have assumed that the salient parts of a text can be determined by applying one or more of the following assumptions: important sentences in a text contain words that are used frequently (Luhn 1958; Edmundson 1968); important sentences contain words that are used in the title and section headings (Edmundson 1968); important sentences are located at the beginning or end of paragraphs (Baxendale 1958); important sentences are located at positions in a text that are genre dependent, and these positions can be determined automatically, through training; important sentences use bonus words such as ""greatest"" and ""significant"" or indicator phrases such as ""the main aim of this paper"" and ""the purpose of this article"", while unimportant sentences use stigma words such as ""hardly"" and ""impossible""; important sentences and concepts are the highest connected entities in elaborate semantic structures; important and unimportant sentences are derivable from a discourse representation of the text (Sparck Jones 1993b; Ono, Sumita, & Miike 1994). In determining the words that occur most frequently in a text or the sentences that use words that occur in the headings of sections, computers are accurate tools. Therefore, in testing the validity of using these indicators for determining the most important units in a text, it is adequate to compare the direct output of a summarization program that implements the assumption(s) under scrutiny with a human-made …",TRUE,noun
R133,Artificial Intelligence,R6599,Automated multi-document summarization in NeATS,S8341,R6600,evaluation,R6601,Evaluation,"This paper describes the multi-document text summarization system NeATS. Using a simple algorithm, NeATS was among the top two performers of the DUC-01 evaluation.",TRUE,noun
R133,Artificial Intelligence,R6649,ERSS 2005: Coreference-Based Summarization Reloaded,S8555,R6650,evaluation,R6651,Evaluation,"We present ERSS 2005, our entry to this year’s DUC competition. With only slight modifications from last year’s version to accommodate the more complex context information present in DUC 2005, we achieved a similar performance to last year’s entry, ranking roughly in the upper third when examining the ROUGE-1 and Basic Element score. We also participated in the additional manual evaluation based on the new Pyramid method and performed further evaluations based on the Basic Elements method and the automatic generation of Pyramids. Interestingly, the ranking of our system differs greatly between the different measures; we attempt to analyse this effect based on correlations between the different results using the Spearman coefficient.",TRUE,noun
R133,Artificial Intelligence,R6657,MSBGA: A Multi-Document Summarization System Based on Genetic Algorithm,S8591,R6658,evaluation,R6659,Evaluation,"The multi-document summarizer using genetic algorithm-based sentence extraction (MSBGA) regards summarization process as an optimization problem where the optimal summary is chosen among a set of summaries formed by the conjunction of the original articles sentences. To solve the NP hard optimization problem, MSBGA adopts genetic algorithm, which can choose the optimal summary on global aspect. The evaluation function employs four features according to the criteria of a good summary: satisfied length, high coverage, high informativeness and low redundancy. To improve the accuracy of term frequency, MSBGA employs a novel method TFS, which takes word sense into account while calculating term frequency. The experiments on DUC04 data show that our strategy is effective and the ROUGE-1 score is only 0.55% lower than the best participant in DUC04",TRUE,noun
R133,Artificial Intelligence,R6685,Personalized PageRank Based Multi-document Summarization,S8711,R6686,evaluation,R6687,Evaluation,"This paper presents a novel multi-document summarization approach based on personalized pagerank (PPRSum). In this algorithm, we uniformly integrate various kinds of information in the corpus. At first, we train a salience model of sentence global features based on Naive Bayes Model. Secondly, we generate a relevance model for each corpus utilizing the query of it. Then, we compute the personalized prior probability for each sentence in the corpus utilizing the salience model and the relevance model both. With the help of personalized prior probability, a Personalized PageRank ranking process is performed depending on the relationships among all sentences in the corpus. Additionally, the redundancy penalty is imposed on each sentence. The summary is produced by choosing the sentences with both high query-focused information richness and high information novelty. Experiments on DUC2007 are performed and the ROUGE evaluation results show that PPRSum ranks between the 1st and the 2nd systems on DUC2007 main task.",TRUE,noun
R133,Artificial Intelligence,R6689,AdaSum: an adaptive model for summarization,S8736,R6690,implementation,R6692,AdaSum,"Topic representation mismatch is a key problem in topic-oriented summarization for the specified topic is usually too short to understand/interpret. This paper proposes a novel adaptive model for summarization, AdaSum, under the assumption that the summary and the topic representation can be mutually boosted. AdaSum aims to simultaneously optimize the topic representation and extract effective summaries. This model employs a mutual boosting process to minimize the topic representation mismatch for base summarizers. Furthermore, a linear combination of base summarizers is proposed to further reduce the topic representation mismatch from the diversity of base summarizers with a general learning framework. We prove that the training process of AdaSum can enhance the performance measure used. Experimental results on DUC 2007 dataset show that AdaSum significantly outperforms the baseline methods for summarization (e.g. MRP, LexRank, and GSPS).",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329543,R69419,Method,R69436,approach,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R133,Artificial Intelligence,R74367,A Blocking Scheme for Entity Resolution in the Semantic Web,S341357,R74369,Has method,R25029,Blocking,"The amount and diversity of data in the Semantic Web has grown quite. RDF datasets has proportionally more problems than relational datasets due to the way data are published, usually without formal criteria. Entity Resolution is an important issue which is related to a known task of many research communities and it aims at finding all representations that refer to the same entity in different datasets. Yet, it is still an open problem. Blocking methods are used to avoid the quadratic complexity of the brute force approach by clustering entities into blocks and limiting the evaluation of entity specifications to entity pairs within blocks. In the last years only a few blocking methods were conceived to deal with RDF data and novel blocking techniques are required for dealing with noisy and heterogeneous data in the Web of Data. In this paper we present a blocking scheme, CER-Blocking, which is based on an inverted index structure and that uses different data evidences from a triple, aiming to maximize its effectiveness. To overcome the problems of data quality or even the very absence thereof, we use two blocking key definitions. This scheme is part of an ER approach which is based on a relational learning algorithm that addresses the problem by statistical approximation. It was empirically evaluated on real and synthetic datasets which are part of consolidated benchmarks found on the literature.",TRUE,noun
R133,Artificial Intelligence,R74535,Towards Exploring Literals to Enrich Data Linking in Knowledge Graphs,S342601,R74537,Has method,R25029,Blocking,"Knowledge graph completion is still a challenging solution that uses techniques from distinct areas to solve many different tasks. Most recent works, which are based on embedding models, were conceived to improve an existing knowledge graph using the link prediction task. However, even considering the ability of these solutions to solve other tasks, they did not present results for data linking, for example. Furthermore, most of these works focuses only on structural information, i.e., the relations between entities. In this paper, we present an approach for data linking that enrich entity embeddings in a model with their literal information and that do not rely on external information of these entities. The key aspect of this proposal is that we use a blocking scheme to improve the effectiveness of the solution in relation to the use of literals. Thus, in addition to the literals from object elements in a triple, we use other literals from subjects and predicates. By merging entity embeddings with their literal information it is possible to extend many popular embedding models. Preliminary experiments were performed on real-world datasets and our solution showed competitive results to the performance of the task of data linking.",TRUE,noun
R133,Artificial Intelligence,R182238,"Food Recognition: A New Dataset, Experiments, and Results",S704941,R182240,Acquisition,R182245,Canteen,"We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.",TRUE,noun
R133,Artificial Intelligence,R138300,Bridging the Gap between Citizens and Local Administrations with Knowledge-Based Service Bundle Recommendations,S548012,R138302,has Target users,R138235,Citizens,"The Italian Public Administration Services (IPAS) is a registry of services provided to Italian citizens likewise the Local Government Service List (UK), or the European Service List for local authorities from other nations. Unlike existing registries, IPAS presents the novelty of modelling public services from the view point of the value they have for the consumers and the providers. A value-added-service (VAS) is linked to a life event that requires its fruition, addresses consumer categories to identify market opportunities for private providers, and is described by non-functional-properties such as price and time of fruition. Where Italian local authorities leave the citizen-users in a daedalus of references to understand whether they can/have to apply for a service, the IPAS model captures the necessary back-ground knowledge about the connection between administrative legislation and service specifications, life events, and application contexts to support the citizen-users to fulfill their needs. As a proof of concept, we developed an operational Web environment named ASSO, designed to assist the citizen-user to intuitively create bundles of mandatory-by-legislation and recommended services, to accomplish his bureaucratic fulfillments. Although ASSO is an ongoing project, domain experts gave preliminary positive feedback on the innovativeness and effectiveness of the proposed approach.",TRUE,noun
R133,Artificial Intelligence,R139300,Personalized recommendations in e-participation: offline experiments for the 'Decide Madrid' platform,S555366,R139302,has Target users,R138235,Citizens,"In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filtering and ranking those that are more relevant. Focusing on a particular case, the `Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity.",TRUE,noun
R133,Artificial Intelligence,R139297,What's going on in my city?: recommender systems and electronic participatory budgeting,S555346,R139299,has Application Scope,R138226,City,"In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City-, we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, call for further research in the area.",TRUE,noun
R133,Artificial Intelligence,R139300,Personalized recommendations in e-participation: offline experiments for the 'Decide Madrid' platform,S555347,R139302,has Application Scope,R138226,City,"In e-participation platforms, citizens suggest, discuss and vote online for initiatives aimed to address a wide range of issues and problems in a city, such as economic development, public safety, budgets, infrastructure, housing, environment, social rights, and health care. For a particular citizen, the number of proposals and debates may be overwhelming, and recommender systems could help filtering and ranking those that are more relevant. Focusing on a particular case, the `Decide Madrid' platform, in this paper we empirically investigate which sources of user preferences and recommendation approaches could be more effective, in terms of several aspects, namely precision, coverage and diversity.",TRUE,noun
R133,Artificial Intelligence,R69558,A framework for explainable deep neural models using external knowledge graphs,S330309,R69559,Machine Learning Task,R69534,Classification,"Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as “black-box” models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model.",TRUE,noun
R133,Artificial Intelligence,R69562,The more you know: Using knowledge graphs for image classification,S330346,R69563,Machine Learning Task,R69534,Classification,"One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.",TRUE,noun
R133,Artificial Intelligence,R69572,Linking imagenet-wordnet synsets with wikidata,S330400,R69573,Machine Learning Task,R69534,Classification,The linkage of ImageNet WordNet synsets to Wikidata items will leverage deep learning algorithm with access to a rich multilingual knowledge graph. Here I will describe our on-going efforts in linking the two resources and issues faced in matching the Wikidata and WordNet knowledge graphs. I show an example on how the linkage can be used in a deep learning setting with real-time image classification and labeling in a non-English language and discuss what opportunities lies ahead.,TRUE,noun
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5330,R4863,users,R4865,companies,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,noun
R133,Artificial Intelligence,R141030,*SEM 2013 shared task: Semantic Textual Similarity,S581398,R145247,Other resources,R145250,crowdsourcing,"In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",TRUE,noun
R133,Artificial Intelligence,R140600,SemEval-2007 Task 12: Turkish Lexical Sample Task,S568808,R140602,Other resources,R141784,dictionary,"This paper presents the task definition, resources, and the single participant system for Task 12: Turkish Lexical Sample Task (TLST), which was organized in the SemEval-2007 evaluation exercise. The methodology followed for developing the specific linguistic resources necessary for the task has been described in this context. A language-specific feature set was defined for Turkish. TLST consists of three pieces of data: The dictionary, the training data, and the evaluation data. Finally, a single system that utilizes a simple statistical method was submitted for the task and evaluated.",TRUE,noun
R133,Artificial Intelligence,R76400,SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection,S351473,R76978,Languages,R6219,English,"Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.",TRUE,noun
R133,Artificial Intelligence,R76413,UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection,S351715,R77000,Languages,R6219,English,"In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329536,R69419,Data,R69429,feasibility,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R133,Artificial Intelligence,R111779,Fermi at SemEval-2019 Task 8: An elementary but effective approach to Question Discernment in Community QA Forums,S513235,R111781,Team Name,L368512,Fermi,"Online Community Question Answering Forums (cQA) have gained massive popularity within recent years. The rise in users for such forums have led to the increase in the need for automated evaluation for question comprehension and fact evaluation of the answers provided by various participants in the forum. Our team, Fermi, participated in sub-task A of Task 8 at SemEval 2019 - which tackles the first problem in the pipeline of factual evaluation in cQA forums, i.e., deciding whether a posed question asks for a factual information, an opinion/advice or is just socializing. This information is highly useful in segregating factual questions from non-factual ones which highly helps in organizing the questions into useful categories and trims down the problem space for the next task in the pipeline for fact evaluation among the available answers. Our system uses the embeddings obtained from Universal Sentence Encoder combined with XGBoost for the classification sub-task A. We also evaluate other combinations of embeddings and off-the-shelf machine learning algorithms to demonstrate the efficacy of the various representations and their combinations. Our results across the evaluation test set gave an accuracy of 84% and received the first position in the final standings judged by the organizers.",TRUE,noun
R133,Artificial Intelligence,R139451,Mapping XML to OWL Ontologies,S556195,R139453,Prototype extraction tool,R139455,Framework,"By now, XML has reached a wide acceptance as data exchange format in E-Business. An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.",TRUE,noun
R133,Artificial Intelligence,R69543,How a general-purpose common- sense ontology can improve performance of learning-based image retrieval,S330242,R69544,Machine Learning Input,R69532,image,"The knowledge representation community has built general-purpose ontologies which contain large amounts of commonsense knowledge over relevant aspects of the world, including useful visual information, e.g.: ""a ball is used by a football player"", ""a tennis player is located at a tennis court"". Current state-of-the-art approaches for visual recognition do not exploit these rule-based knowledge sources. Instead, they learn recognition models directly from training examples. In this paper, we study how general-purpose ontologies—specifically, MIT's ConceptNet ontology—can improve the performance of state-of-the-art vision systems. As a testbed, we tackle the problem of sentence-based image retrieval. Our retrieval approach incorporates knowledge from ConceptNet on top of a large pool of object detectors derived from a deep learning technique. In our experiments, we show that ConceptNet can improve performance on a common benchmark dataset. Key to our performance is the use of the ESPGAME dataset to select visually relevant relations from ConceptNet. Consequently, a main conclusion of this work is that general-purpose commonsense ontologies improve performance on visual reasoning tasks when properly filtered to select meaningful visual relations.",TRUE,noun
R133,Artificial Intelligence,R69558,A framework for explainable deep neural models using external knowledge graphs,S330316,R69559,Machine Learning Input,R69532,image,"Deep neural networks (DNNs) have become the gold standard for solving challenging classification problems, especially given complex sensor inputs (e.g., images and video). While DNNs are powerful, they are also brittle, and their inner workings are not fully understood by humans, leading to their use as “black-box” models. DNNs often generalize poorly when provided new data sampled from slightly shifted distributions; DNNs are easily manipulated by adversarial examples; and the decision-making process of DNNs can be difficult for humans to interpret. To address these challenges, we propose integrating DNNs with external sources of semantic knowledge. Large quantities of meaningful, formalized knowledge are available in knowledge graphs and other databases, many of which are publicly obtainable. But at present, these sources are inaccessible to deep neural methods, which can only exploit patterns in the signals they are given to classify. In this work, we conduct experiments on the ADE20K dataset, using scene classification as an example task where combining DNNs with external knowledge graphs can result in more robust and explainable models. We align the atomic concepts present in ADE20K (i.e., objects) to WordNet, a hierarchically-organized lexical database. Using this knowledge graph, we expand the concept categories which can be identified in ADE20K and relate these concepts in a hierarchical manner. The neural architecture we present performs scene classification using these concepts, illuminating a path toward DNNs which can efficiently exploit high-level knowledge in place of excessive quantities of direct sensory input. We hypothesize and experimentally validate that incorporating background knowledge via an external knowledge graph into a deep learning-based model should improve the explainability and robustness of the model.",TRUE,noun
R133,Artificial Intelligence,R69562,The more you know: Using knowledge graphs for image classification,S330352,R69563,Machine Learning Input,R69532,image,"One characteristic that sets humans apart from modern learning-based computer vision algorithms is the ability to acquire knowledge about the world and use that knowledge to reason about the visual world. Humans can learn about the characteristics of objects and the relationships that occur between them to learn a large variety of visual concepts, often with few examples. This paper investigates the use of structured prior knowledge in the form of knowledge graphs and shows that using this knowledge improves performance on image classification. We build on recent work on end-to-end learning on graphs, introducing the Graph Search Neural Network as a way of efficiently incorporating large knowledge graphs into a vision classification pipeline. We show in a number of experiments that our method outperforms standard neural network baselines for multi-label classification.",TRUE,noun
R133,Artificial Intelligence,R69597,Fvqa: Fact-based visual question answering,S330556,R69598,Machine Learning Input,R69532,image,"Visual Question Answering (VQA) has attracted much attention in both computer vision and natural language processing communities, not least because it offers insight into the relationships between two important sources of information. Current datasets, and the models built upon them, have focused on questions which are answerable by direct analysis of the question and image alone. The set of such questions that require no external information to answer is interesting, but very limited. It excludes questions which require common sense, or basic factual knowledge to answer, for example. Here we introduce FVQA (Fact-based VQA), a VQA dataset which requires, and supports, much deeper reasoning. FVQA primarily contains questions that require external information to answer. We thus extend a conventional visual question answering dataset, which contains image-question-answer triplets, through additional image-question-answer-supporting fact tuples. Each supporting-fact is represented as a structural triplet. We evaluate several baseline models on the FVQA dataset, and describe a novel model which is capable of reasoning about an image on the basis of supporting-facts.",TRUE,noun
R133,Artificial Intelligence,R69599,Explicit knowledge-based reasoning for visual question answering,S330575,R69600,Machine Learning Input,R69532,image,"We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperforms it significantly in the testing. We also provide a dataset and a protocol by which to evaluate such methods, thus addressing one of the key issues in general visual question answering.",TRUE,noun
R133,Artificial Intelligence,R69601,Out of the box: Reasoning with graph convolution nets for factual visual question answering,S330593,R69602,Machine Learning Input,R69532,image,"Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel `fact-based' visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to `reason' about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.",TRUE,noun
R133,Artificial Intelligence,R75305,Extracting ontological knowledge from Java source code using Hidden Markov Models,S541226,R75307,programming language,R77027,Java,"Abstract Ontologies have become a key element since many decades in information systems such as in epidemiological surveillance domain. Building domain ontologies requires the access to domain knowledge owned by domain experts or contained in knowledge sources. However, domain experts are not always available for interviews. Therefore, there is a lot of value in using ontology learning which consists in automatic or semi-automatic extraction of ontological knowledge from structured or unstructured knowledge sources such as texts, databases, etc. Many techniques have been used but they all are limited in concepts, properties and terminology extraction leaving behind axioms and rules. Source code which naturally embed domain knowledge is rarely used. In this paper, we propose an approach based on Hidden Markov Models (HMMs) for concepts, properties, axioms and rules learning from Java source code. This approach is experimented with the source code of EPICAM, an epidemiological platform developed in Java and used in Cameroon for tuberculosis surveillance. Domain experts involved in the evaluation estimated that knowledge extracted was relevant to the domain. In addition, we performed an automatic evaluation of the relevance of the terms extracted to the medical domain by aligning them with ontologies hosted on Bioportal platform through the Ontology Recommender tool. The results were interesting since the terms extracted were covered at 82.9% by many biomedical ontologies such as NCIT, SNOWMEDCT and ONTOPARON.",TRUE,noun
R133,Artificial Intelligence,R182290,PFID: Pittsburgh fast-food image dataset,S705180,R182292,Acquisition,R182298,Lab,"We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-preserving videos of eating events of volunteers. This work was motivated by research on fast food recognition for dietary assessment. The data was collected by obtaining three instances of 101 foods from 11 popular fast food chains, and capturing images and videos in both restaurant conditions and a controlled lab setting. We benchmark the dataset using two standard approaches, color histogram and bag of SIFT features in conjunction with a discriminative classifier. Our dataset and the benchmarks are designed to stimulate research in this area and will be released freely to the research community.",TRUE,noun
R133,Artificial Intelligence,R76400,SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection,S351510,R76980,Languages,R76407,Latin,"Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.",TRUE,noun
R133,Artificial Intelligence,R76413,UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection,S351714,R77002,Languages,R76407,Latin,"In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation; and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.",TRUE,noun
R133,Artificial Intelligence,R142196,Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool,S571321,R142201,Imaging modality ,L400987,Mammography ,"Purpose To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process. Materials and Methods In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalance design in which half of the dataset was read without AI and the other half with the help of AI during a first session and vice versa during a second session, which was separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints. Results The average AUC across readers was 0.769 (95% CI: 0.724, 0.814) without AI and 0.797 (95% CI: 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI: 0.002, 0.055, P = .035). Average sensitivity was increased by 0.033 when using AI support (P = .021). Reading time changed dependently to the AI-tool score. For low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For higher likelihood of malignancy, the reading time was on average increased with the use of AI. Conclusion This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow.Supplemental material is available for this article.© RSNA, 2020.",TRUE,noun
R133,Artificial Intelligence,R139415,A GRAPH-BASED TOOL FOR THE TRANSLATION OF XML DATA TO OWL-DL ONTOLOGIES: ,S555957,R139417,Extraction methods,R139364,Mapping,"Today most of the data exchanged between information systems is done with the help of the XML syntax. Unfortunately when these data have to be integrated, the integration becomes difficult because of the semantics' heterogeneity. Consequently, leading researches in the domain of database systems are moving to semantic model in order to store data and its semantics definition. To benefit from these new systems and technologies, and to integrate different data sources, a flexible method consists in populating an existing OWL ontology from XML data. In paper we present such a method based on the definition of a graph which represents rules that drive the populating process. The graph of rules facilitates the mapping definition that consists in mapping elements from an XSD schema to the elements of the OWL schema.",TRUE,noun
R133,Artificial Intelligence,R139451,Mapping XML to OWL Ontologies,S556179,R139453,Extraction methods,R139364,Mapping,"By now, XML has reached a wide acceptance as data exchange format in E-Business. An efficient collaboration between different participants in E-Business thus, is only possible, when business partners agree on a common syntax and have a common understanding of the basic concepts in the domain. XML covers the syntactic level, but lacks support for efficient sharing of conceptualizations. The Web Ontology Language (OWL [Bec04]) in turn supports the representation of domain knowledge using classes, properties and instances for the use in a distributed environment as the WorldWideWeb. We present in this paper a mapping between the data model elements of XML and OWL. We give account about its implementation within a ready-to-use XSLT framework, as well as its evaluation for common use cases.",TRUE,noun
R133,Artificial Intelligence,R6599,Automated multi-document summarization in NeATS,S8346,R6600,implementation,R6602,NeATS,"This paper describes the multi-document text summarization system NeATS. Using a simple algorithm, NeATS was among the top two performers of the DUC-01 evaluation.",TRUE,noun
R133,Artificial Intelligence,R6593,"NewsInEssence: A System For Domain-Independent, Real-Time News Clustering and Multi-Document Summarization",S8319,R6594,implementation,R6595,NewsInEssence,"NEWSINESSENCE is a system for finding, visualizing and summarizing a topic-based cluster of news stories. In the generic scenario for NEWSINESSENCE, a user selects a single news story from a news Web site. Our system then searches other live sources of news for other stories related to the same event and produces summaries of a subset of the stories that it finds, according to parameters specified by the user.",TRUE,noun
R133,Artificial Intelligence,R182107,Multi-Task Learning for Calorie Prediction on a Novel Large-Scale Recipe Dataset Enriched with Nutritional Information,S705718,R182189,dataset,R182111,pic2kcal,"A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.",TRUE,noun
R133,Artificial Intelligence,R182238,"Food Recognition: A New Dataset, Experiments, and Results",S704943,R182240,Annotation,R182246,Poly,"We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329528,R69419,Data,R69421,posts,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R133,Artificial Intelligence,R6353,PowerAqua: Supporting users in querying and exploring the Semantic Web,S7324,R6354,implementation,R6355,PowerAqua,"With the continued growth of online semantic information, the processes of searching and managing this massive scale and heterogeneous content have become increasingly challenging. In this work, we present PowerAqua, an ontologybased Question Answering system that is able to answer queries by locating and integrating information, which can be distributed across heterogeneous semantic resources. We provide a complete overview of the system including: the research challenges that it addresses, its architecture, the evaluations that have been conducted to test it, and an in-depth discussion showing how PowerAqua effectively supports users in querying and exploring Semantic Web content.",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329538,R69419,Data,R69431,results,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R133,Artificial Intelligence,R41079,Speech Recognition Using Deep Neural Networks: A Systematic Review,S130285,R41082,Method,R41099,Results,"Over the past decades, a tremendous amount of research has been done on the use of machine learning for speech processing applications, especially speech recognition. However, in the past few years, research has focused on utilizing deep learning for speech-related applications. This new area of machine learning has yielded far better results when compared to others in a variety of applications including speech, and thus became a very attractive area of research. This paper provides a thorough examination of the different studies that have been conducted since 2006, when deep learning first arose as a new area of machine learning, for speech applications. A thorough statistical analysis is provided in this review which was conducted by extracting specific information from 174 papers published between the years 2006 and 2018. The results provided in this paper shed light on the trends of research in this area as well as bring focus to new research topics.",TRUE,noun
R133,Artificial Intelligence,R76338,SemEval-2020 Task 6: Definition Extraction from Free Text with the DEFT Corpus,S349257,R76340,Data annotation granularities,R76364,Sentences,"Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentence boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329534,R69419,Data,R69427,sentiments,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R133,Artificial Intelligence,R6614,Generating Indicative-Informative Summaries with SumUM,S8414,R6615,implementation,R6617,SumUM,"We present and evaluate SumUM, a text summarization system that takes a raw technical text as input and produces an indicative informative summary. The indicative part of the summary identifies the topics of the document, and the informative part elaborates on some of these topics according to the reader's interest. SumUM motivates the topics, describes entities, and defines concepts. It is a first step for exploring the issue of dynamic summarization. This is accomplished through a process of shallow syntactic and semantic analysis, concept identification, and text regeneration. Our method was developed through the study of a corpus of abstracts written by professional abstractors. Relying on human judgment, we have evaluated indicativeness, informativeness, and text acceptability of the automatic summaries. The results thus far indicate good performance when compared with other summarization technologies.",TRUE,noun
R133,Artificial Intelligence,R76400,SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection,S351528,R76981,Languages,R76408,Swedish,"Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.",TRUE,noun
R133,Artificial Intelligence,R76413,UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection,S351713,R77003,Languages,R76408,Swedish,"In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation; and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.",TRUE,noun
R133,Artificial Intelligence,R74026,Task 11 at SemEval-2021: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph,S586085,R75321,Information Units,R146390,Tasks,"There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. ‘the NCG task’) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article’s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",TRUE,noun
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5331,R4863,users,R4866,universities,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329549,R69419,Material,R69442,user,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R133,Artificial Intelligence,R172579,WISARD·a radical step forward in image recognition,S688974,R172581,utilizes,R162686,WiSARD,"The WISARD recognition system invented at Brunel University has been developed into an industrialised product by Computer Recognition Systems under licence from the British Technology Group. Using statistical pattern classification it already shows great potential in rapid sorting, and research indicates that it will track objects with positional feedback, rather like the human eye.",TRUE,noun
R133,Artificial Intelligence,R76400,SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection,S349368,R76402,Data annotation granularities,R76409,words,"Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.",TRUE,noun
R133,Artificial Intelligence,R69577,Knowledgeable reader: Enhancing cloze-style read- ing comprehension with external commonsense knowledge,S330424,R69578,Machine Learning Input,R69575,text,"We introduce a neural reading comprehension model that integrates external commonsense knowledge, encoded as a key-value memory, in a cloze-style setting. Instead of relying only on document-to-question interaction or discrete features as in prior work, our model attends to relevant external knowledge and combines this knowledge with the context representation before inferring the answer. This allows the model to attract and imply knowledge from an external knowledge source that is not explicitly stated in the text, but that is relevant for inferring the answer. Our model improves results over a very strong baseline on a hard Common Nouns dataset, making it a strong competitor of much more complex models. By including knowledge explicitly, our model can also provide evidence about the background knowledge used in the RC process.",TRUE,noun
R133,Artificial Intelligence,R69587,Answering science exam questions using query reformulation with background knowledge,S330481,R69588,Machine Learning Input,R69575,text,"Open-domain question answering (QA) is an important problem in AI and NLP that is emerging as a bellwether for progress on the generalizability of AI methods and techniques. Much of the progress in open-domain QA systems has been realized through advances in information retrieval methods and corpus construction. In this paper, we focus on the recently introduced ARC Challenge dataset, which contains 2,590 multiple choice questions authored for grade-school science exams. These questions are selected to be the most challenging for current QA systems, and current state of the art performance is only slightly better than random chance. We present a system that reformulates a given question into queries that are used to retrieve supporting text from a large corpus of science-related text. Our rewriter is able to incorporate background knowledge from ConceptNet and -- in tandem with a generic textual entailment system trained on SciTail that identifies support in the retrieved results -- outperforms several strong baselines on the end-to-end QA task despite only being trained to identify essential terms in the original source question. We use a generalizable decision methodology over the retrieved evidence and answer candidates to select the best answer. By combining query reformulation, background knowledge, and textual entailment our system is able to outperform several strong baselines on the ARC dataset.",TRUE,noun
R133,Artificial Intelligence,R69607,Knowledge-based conversational agents and virtual story telling,S330629,R69608,Machine Learning Input,R69575,text,"We describe an architecture for building speech-enabled conversational agents, deployed as self-contained Web services, with ability to provide inference processing on very large knowledge bases and its application to voice enabled chatbots in a virtual storytelling environment. The architecture integrates inference engines, natural language pattern matching components and story-specific information extraction from RDF/XML files. Our Web interface is dynamically generated by server side agents supporting multi-modal interface components (speech and animation). Prolog refactorings of the WordNet lexical knowledge base, FrameNet and the Open Mind common sense knowledge repository are combined with internet meta-search to provide high-quality knowledge sources to our conversational agents. An example of conversational agent with speech capabilities is deployed on the Web at http://logic.csci.unt.edu:8080/wordnet_agent/frame.html. The agent is also accessible for live multi-user text-based chat, through a Yahoo Instant Messenger protocol adaptor, from wired or wireless devices, as the jinni_agent Yahoo IM ""handle"".",TRUE,noun
R133,Artificial Intelligence,R69609,Towards a knowledge graph based speech interface,S330645,R69610,Machine Learning Input,R69575,text,"Applications which use human speech as an input require a speech interface with high recognition accuracy. The words or phrases in the recognised text are annotated with a machine-understandable meaning and linked to knowledge graphs for further processing by the target application. These semantic annotations of recognised words can be represented as a subject-predicate-object triples which collectively form a graph often referred to as a knowledge graph. This type of knowledge representation facilitates to use speech interfaces with any spoken input application, since the information is represented in logical, semantic form, retrieving and storing can be followed using any web standard query languages. In this work, we develop a methodology for linking speech input to knowledge graphs and study the impact of recognition errors in the overall process. We show that for a corpus with lower WER, the annotation and linking of entities to the DBpedia knowledge graph is considerable. DBpedia Spotlight, a tool to interlink text documents with the linked open data is used to link the speech recognition output to the DBpedia knowledge graph. Such a knowledge-based speech recognition interface is useful for applications such as question answering or spoken dialog systems.",TRUE,noun
R133,Artificial Intelligence,R69611,Algorithmic transparency of conversational agents,S330661,R69612,Machine Learning Input,R69575,text,A lack of algorithmic transparency is a major barrier to the adoption of artificial intelligence technologies within contexts which require high risk and high consequence decision making. In this paper we present a framework for providing transparency of algorithmic processes. We include important considerations not identified in research to date for the high risk and high consequence context of defence intelligence analysis. To demonstrate the core concepts of our framework we explore an example application (a conversational agent for knowledge exploration) which demonstrates shared human-machine reasoning in a critical decision making scenario. We include new findings from interviews with a small number of analysts and recommendations for future research.,TRUE,noun
R133,Artificial Intelligence,R69623,Knowledge-based transfer learning explanation,S330752,R69624,Machine Learning Input,R69575,text,"Machine learning explanation can significantly boost machine learning's application in decision making, but the usability of current methods is limited in human-centric explanation, especially for transfer learning, an important machine learning branch that aims at utilizing knowledge from one learning domain (i.e., a pair of dataset and prediction task) to enhance prediction model training in another learning domain. In this paper , we propose an ontology-based approach for human-centric explanation of transfer learning. Three kinds of knowledge-based explanatory evidence, with different granularities, including general factors, particular narrators and core contexts are first proposed and then inferred with both local ontologies and external knowledge bases. The evaluation with US flight data and DB-pedia has presented their confidence and availability in explaining the transferability of feature representation in flight departure delay forecasting.",TRUE,noun
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329553,R69419,Material,R69446,text,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun
R375,Arts and Humanities,R51006,Are the FAIR Data Principles fair?,S537884,R135913,Publication type,L379024,paper,"This practice paper describes an ongoing research project to test the effectiveness and relevance of the FAIR Data Principles. Simultaneously, it will analyse how easy it is for data archives to adhere to the principles. The research took place from November 2016 to January 2017, and will be underpinned with feedback from the repositories. The FAIR Data Principles feature 15 facets corresponding to the four letters of FAIR - Findable, Accessible, Interoperable, Reusable. These principles have already gained traction within the research world. The European Commission has recently expanded its demand for research to produce open data. The relevant guidelines are explicitly written in the context of the FAIR Data Principles. Given an increasing number of researchers will have exposure to the guidelines, understanding their viability and suggesting where there may be room for modification and adjustment is of vital importance. This practice paper is connected to a dataset (Dunning et al., 2017) containing the original overview of the sample group statistics and graphs, in an Excel spreadsheet. Over the course of two months, the web-interfaces, help-pages and metadata-records of over 40 data repositories have been examined, to score the individual data repository against the FAIR principles and facets. The traffic-light rating system enables colour-coding according to compliance and vagueness. The statistical analysis provides overall, categorised, on the principles focussing, and on the facet focussing results. The analysis includes the statistical and descriptive evaluation, followed by elaborations on Elements of the FAIR Data Principles, the subject specific or repository specific differences, and subsequently what repositories can do to improve their information architecture.",TRUE,noun
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693788,R175178,Subject Label,R175183,Access,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,noun
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693789,R175178,Subject Label,R146605,Clinicians,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,noun
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693791,R175178,Subject Label,R175184,Collaboration,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,noun
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693792,R175178,Subject Label,R175185,Devices,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,noun
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345256,R75376,Has evaluation,R70823,Date,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun
R104,Bioinformatics,R169922,Multiagent cooperation and competition with deep reinforcement learning,S674692,R169923,uses,R167834,Pong,"Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.",TRUE,noun
R104,Bioinformatics,R168683,Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq,S668974,R168684,creates,R167038,Strawberry,"We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experiment measure for transcript expression. Availability: Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.",TRUE,noun
R104,Bioinformatics,R168683,Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq,S668976,R168685,deposits,R167039,Strawberry,"We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the estimated transcript expression by Strawberry has the highest correlation with Nanostring probe counts, an independent experiment measure for transcript expression. Availability: Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.",TRUE,noun
R104,Bioinformatics,R168642,AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework,S668794,R168643,creates,R167011,AlignerBoost,"Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or ""best"" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit ""AlignerBoost"", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.",TRUE,noun
R104,Bioinformatics,R168642,AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework,S668796,R168644,deposits,R167012,AlignerBoost,"Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or ""best"" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit ""AlignerBoost"", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.",TRUE,noun
R104,Bioinformatics,R170100,A correlation comparison between Altmetric Attention Scores and citations for six PLOS journals,S675561,R170103,uses,R167947,Altmetric,"This study considered all articles published in six Public Library of Science (PLOS) journals in 2012 and Web of Science citations for these articles as of May 2015. A total of 2,406 articles were analyzed to examine the relationships between Altmetric Attention Scores (AAS) and Web of Science citations. The AAS for an article, provided by Altmetric, aggregates activities surrounding research outputs in social media (news outlet mentions, tweets, blogs, Wikipedia, etc.). Spearman correlation testing was done on all articles and articles with AAS. Further analysis compared the stratified datasets based on percentile ranks of AAS: top 50%, top 25%, top 10%, and top 1%. Comparisons across the six journals provided additional insights. The results show significant positive correlations between AAS and citations with varied strength for all articles and articles with AAS (or social media mentions), as well as for normalized AAS in the top 50%, top 25%, top 10%, and top 1% datasets. Four of the six PLOS journals, Genetics, Pathogens, Computational Biology, and Neglected Tropical Diseases, show significant positive correlations across all datasets. However, for the two journals with high impact factors, PLOS Biology and Medicine, the results are unexpected: the Medicine articles showed no significant correlations but the Biology articles tested positive for correlations with the whole dataset and the set with AAS. Both journals published substantially fewer articles than the other four journals. Further research to validate the AAS algorithm, adjust the weighting scheme, and include appropriate social media sources is needed to understand the potential uses and meaning of AAS in different contexts and its relationship to other metrics.",TRUE,noun
R104,Bioinformatics,R138710,A general prediction model for the detection of ADHD and Autism using structural and functional MRI,S551260,R138713,Used models,R138714,Autoencoder,"This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject’s fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features are input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known, over all hold-out accuracies on these datasets when only using imaging data—exceeding previously-published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.",TRUE,noun
R104,Bioinformatics,R138725,Using deep autoencoders to identify abnormal brain structural patterns in neuropsychiatric disorders: A large‐scale multi‐sample study,S551322,R138728,Used models,R138714,Autoencoder,"Machine learning is becoming an increasingly popular approach for investigating spatially distributed and subtle neuroanatomical alterations in brain‐based disorders. However, some machine learning models have been criticized for requiring a large number of cases in each experimental group, and for resembling a “black box” that provides little or no insight into the nature of the data. In this article, we propose an alternative conceptual and practical approach for investigating brain‐based disorders which aim to overcome these limitations. We used an artificial neural network known as “deep autoencoder” to create a normative model using structural magnetic resonance imaging data from 1,113 healthy people. We then used this model to estimate total and regional neuroanatomical deviation in individual patients with schizophrenia and autism spectrum disorder using two independent data sets (n = 263). We report that the model was able to generate different values of total neuroanatomical deviation for each disease under investigation relative to their control group (p < .005). Furthermore, the model revealed distinct patterns of neuroanatomical deviations for the two diseases, consistent with the existing neuroimaging literature. We conclude that the deep autoencoder provides a flexible and promising framework for assessing total and regional neuroanatomical deviations in neuropsychiatric populations.",TRUE,noun
R104,Bioinformatics,R138865,Detection of mood disorder using speech emotion profiles and LSTM,S551774,R138867,Used models,R138714,Autoencoder,"In mood disorder diagnosis, bipolar disorder (BD) patients are often misdiagnosed as unipolar depression (UD) on initial presentation. It is crucial to establish an accurate distinction between BD and UD to make a correct and early diagnosis, leading to improvements in treatment and course of illness. To deal with this misdiagnosis problem, in this study, we experimented on eliciting subjects' emotions by watching six eliciting emotional video clips. After watching each video clips, their speech responses were collected when they were interviewing with a clinician. In mood disorder detection, speech emotions play an import role to detect manic or depressive symptoms. Therefore, speech emotion profiles (EP) are obtained by using the support vector machine (SVM) which are built via speech features adapted from selected databases using a denoising autoencoder-based method. Finally, a Long Short-Term Memory (LSTM) recurrent neural network is employed to characterize the temporal information of the EPs with respect to six emotional videos. Comparative experiments clearly show the promising advantage and efficacy of the LSTM-based approach for mood disorder detection.",TRUE,noun
R104,Bioinformatics,R138876,Mood disorder identification using deep bottleneck features of elicited speech,S551816,R138878,Used models,R138714,Autoencoder,"In the diagnosis of mental health disorder, a large portion of the Bipolar Disorder (BD) patients is likely to be misdiagnosed as Unipolar Depression (UD) on initial presentation. As speech is the most natural way to express emotion, this work focuses on tracking emotion profile of elicited speech for short-term mood disorder identification. In this work, the Deep Scattering Spectrum (DSS) and Low Level Descriptors (LLDs) of the elicited speech signals are extracted as the speech features. The hierarchical spectral clustering (HSC) algorithm is employed to adapt the emotion database to the mood disorder database to alleviate the data bias problem. The denoising autoencoder is then used to extract the bottleneck features of DSS and LLDs for better representation. Based on the bottleneck features, a long short term memory (LSTM) is applied to generate the time-varying emotion profile sequence. Finally, given the emotion profile sequence, the HMM-based identification and verification model is used to determine mood disorder. This work collected the elicited emotional speech data from 15 BDs, 15 UDs and 15 healthy controls for system training and evaluation. Five-fold cross validation was employed for evaluation. Experimental results show that the system using the bottleneck feature achieved an identification accuracy of 73.33%, improving by 8.89%, compared to that without bottleneck features. Furthermore, the system with verification mechanism, improving by 4.44%, outperformed that without verification.",TRUE,noun
R104,Bioinformatics,R138879,Exploring microscopic fluctuation of facial expression for mood disorder classification,S551837,R138881,Used models,R138714,Autoencoder,"In clinical diagnosis of mood disorder, depression is one of the most common psychiatric disorders. There are two major types of mood disorders: major depressive disorder (MDD) and bipolar disorder (BPD). A large portion of BPD are misdiagnosed as MDD in the diagnostic of mood disorders. Short-term detection which could be used in early detection and intervention is thus desirable. This study investigates microscopic facial expression changes for the subjects with MDD, BPD and control group (CG), when elicited by emotional video clips. This study uses eight basic orientations of motion vector (MV) to characterize the subtle changes in microscopic facial expression. Then, wavelet decomposition is applied to extract entropy and energy of different frequency bands. Next, an autoencoder neural network is adopted to extract the bottleneck features for dimensionality reduction. Finally, the long short term memory (LSTM) is employed for modeling the long-term variation among different mood disorders types. For evaluation of the proposed method, the elicited data from 36 subjects (12 for each of MDD, BPD and CG) were considered in the K-fold (K=12) cross validation experiments, and the performance for distinguishing among MDD, BPD and CG achieved 67.7% accuracy.",TRUE,noun
R104,Bioinformatics,R138984,Cell-Coupled Long Short-Term Memory With $L$ -Skip Fusion Mechanism for Mood Disorder Detection Through Elicited Audiovisual Features,S552250,R138988,Used models,R138714,Autoencoder,"In early stages, patients with bipolar disorder are often diagnosed as having unipolar depression in mood disorder diagnosis. Because the long-term monitoring is limited by the delayed detection of mood disorder, an accurate and one-time diagnosis is desirable to avoid delay in appropriate treatment due to misdiagnosis. In this paper, an elicitation-based approach is proposed for realizing a one-time diagnosis by using responses elicited from patients by having them watch six emotion-eliciting videos. After watching each video clip, the conversations, including patient facial expressions and speech responses, between the participant and the clinician conducting the interview were recorded. Next, the hierarchical spectral clustering algorithm was employed to adapt the facial expression and speech response features by using the extended Cohn–Kanade and eNTERFACE databases. A denoizing autoencoder was further applied to extract the bottleneck features of the adapted data. Then, the facial and speech bottleneck features were input into support vector machines to obtain speech emotion profiles (EPs) and the modulation spectrum (MS) of the facial action unit sequence for each elicited response. Finally, a cell-coupled long short-term memory (LSTM) network with an $L$ -skip fusion mechanism was proposed to model the temporal information of all elicited responses and to loosely fuse the EPs and the MS for conducting mood disorder detection. The experimental results revealed that the cell-coupled LSTM with the $L$ -skip fusion mechanism has promising advantages and efficacy for mood disorder detection.",TRUE,noun
R104,Bioinformatics,R168721,Bamgineer: Introduction of simulated allele-specific copy number variants into exome and targeted sequence data sets,S669111,R168722,creates,R167062,Bamgineer,"AbstractSomatic copy number variations (CNVs) play a crucial role in development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systemic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. 
We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer.Author summaryWe present Bamgineer, a software program to introduce user-defined, haplotype-specific copy number variants (CNVs) at any frequency into standard Binary Alignment Mapping (BAM) files. Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest. This approach retains biases of the original data such as local coverage, strand bias, and insert size. Deletions are simulated by removing reads corresponding to one or both haplotypes. In our proof-of-principle study, we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples. We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data. Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types.",TRUE,noun
R104,Bioinformatics,R168721,Bamgineer: Introduction of simulated allele-specific copy number variants into exome and targeted sequence data sets,S669115,R168724,uses,R167062,Bamgineer,"AbstractSomatic copy number variations (CNVs) play a crucial role in development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systemic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. 
We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer.Author summaryWe present Bamgineer, a software program to introduce user-defined, haplotype-specific copy number variants (CNVs) at any frequency into standard Binary Alignment Mapping (BAM) files. Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest. This approach retains biases of the original data such as local coverage, strand bias, and insert size. Deletions are simulated by removing reads corresponding to one or both haplotypes. In our proof-of-principle study, we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples. We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data. Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types.",TRUE,noun
R104,Bioinformatics,R168677,"BeWith: A Between-Within method to discover relationships between cancer modules via integrated analysis of mutual exclusivity, co-occurrence and functional interactions",S668942,R168678,creates,R167034,BeWith,"The analysis of the mutational landscape of cancer, including mutual exclusivity and co-occurrence of mutations, has been instrumental in studying the disease. We hypothesized that exploring the interplay between co-occurrence, mutual exclusivity, and functional interactions between genes will further improve our understanding of the disease and help to uncover new relations between cancer driving genes and pathways. To this end, we designed a general framework, BeWith, for identifying modules with different combinations of mutation and interaction patterns. We focused on three different settings of the BeWith schema: (i) BeME-WithFun, in which the relations between modules are enriched with mutual exclusivity, while genes within each module are functionally related; (ii) BeME-WithCo, which combines mutual exclusivity between modules with co-occurrence within modules; and (iii) BeCo-WithMEFun, which ensures co-occurrence between modules, while the within module relations combine mutual exclusivity and functional interactions. We formulated the BeWith framework using Integer Linear Programming (ILP), enabling us to find optimally scoring sets of modules. Our results demonstrate the utility of BeWith in providing novel information about mutational patterns, driver genes, and pathways. In particular, BeME-WithFun helped identify functionally coherent modules that might be relevant for cancer progression. In addition to finding previously well-known drivers, the identified modules pointed to other novel findings such as the interaction between NCOR2 and NCOA3 in breast cancer. 
Additionally, an application of the BeME-WithCo setting revealed that gene groups differ with respect to their vulnerability to different mutagenic processes, and helped us to uncover pairs of genes with potentially synergistic effects, including a potential synergy between mutations in TP53 and the metastasis related DCC gene. Overall, BeWith not only helped us uncover relations between potential driver genes and pathways, but also provided additional insights on patterns of the mutational landscape, going beyond cancer driving mutations. Implementation is available at https://www.ncbi.nlm.nih.gov/CBBresearch/Przytycka/software/bewith.html",TRUE,noun
R104,Bioinformatics,R38466,"Biotea-2-Bioschemas, facilitating structured markup for semantically annotated scholarly publications",S126231,R38472,Related Resource,R38473,Bioschemas,"The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project “Biotea”), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.",TRUE,noun
R104,Bioinformatics,R150549,Classifying semantic relations in bioscience texts,S603646,R150551,Data domains,R150561,Bioscience,"A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities ""treatment"" and ""disease"" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.",TRUE,noun
R104,Bioinformatics,R38466,"Biotea-2-Bioschemas, facilitating structured markup for semantically annotated scholarly publications",S126233,R38472,Related Resource,R38475,Biotea,"The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project “Biotea”), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.",TRUE,noun
R104,Bioinformatics,R168717,LAILAPS-QSM: A RESTful API and JAVA library for semantic query suggestions,S669100,R168720,uses,R167061,Bitbucket,"In order to access and filter content of life-science databases, full text search is a widely applied query interface. But its high flexibility and intuitiveness is paid for with potentially imprecise and incomplete query results. To reduce this drawback, query assistance systems suggest those combinations of keywords with the highest potential to match most of the relevant data records. Widespread approaches are syntactic query corrections that avoid misspelling and support expansion of words by suffixes and prefixes. Synonym expansion approaches apply thesauri, ontologies, and query logs. All need laborious curation and maintenance. Furthermore, access to query logs is in general restricted. Approaches that infer related queries by their query profile like research field, geographic location, co-authorship, affiliation etc. require user’s registration and its public accessibility that contradict privacy concerns. To overcome these drawbacks, we implemented LAILAPS-QSM, a machine learning approach that reconstruct possible linguistic contexts of a given keyword query. The context is referred from the text records that are stored in the databases that are going to be queried or extracted for a general purpose query suggestion from PubMed abstracts and UniProt data. The supplied tool suite enables the pre-processing of these text records and the further computation of customized distributed word vectors. The latter are used to suggest alternative keyword queries. An evaluated of the query suggestion quality was done for plant science use cases. Locally present experts enable a cost-efficient quality assessment in the categories trait, biological entity, taxonomy, affiliation, and metabolic function which has been performed using ontology term similarities. 
LAILAPS-QSM mean information content similarity for 15 representative queries is 0.70, whereas 34% have a score above 0.80. In comparison, the information content similarity for human expert made query suggestions is 0.90. The software is either available as tool set to build and train dedicated query suggestion services or as already trained general purpose RESTful web service. The service uses open interfaces to be seamless embeddable into database frontends. The JAVA implementation uses highly optimized data structures and streamlined code to provide fast and scalable response for web service calls. The source code of LAILAPS-QSM is available under GNU General Public License version 2 in Bitbucket GIT repository: https://bitbucket.org/ipk_bit_team/bioescorte-suggestion",TRUE,noun
R104,Bioinformatics,R169056,Distinctive Gut Microbiota of Honey Bees Assessed Using Deep Sampling from Individual Worker Bees,S670658,R169058,uses,R167270,blastn,"Surveys of 16S rDNA sequences from the honey bee, Apis mellifera, have revealed the presence of eight distinctive bacterial phylotypes in intestinal tracts of adult worker bees. Because previous studies have been limited to relatively few sequences from samples pooled from multiple hosts, the extent of variation in this microbiota among individuals within and between colonies and locations has been unclear. We surveyed the gut microbiota of 40 individual workers from two sites, Arizona and Maryland USA, sampling four colonies per site. Universal primers were used to amplify regions of 16S ribosomal RNA genes, and amplicons were sequenced using 454 pyrotag methods, enabling analysis of about 330,000 bacterial reads. Over 99% of these sequences belonged to clusters for which the first blastn hits in GenBank were members of the known bee phylotypes. Four phylotypes, one within Gammaproteobacteria (corresponding to “Candidatus Gilliamella apicola”) one within Betaproteobacteria (“Candidatus Snodgrassella alvi”), and two within Lactobacillus, were present in every bee, though their frequencies varied. The same typical bacterial phylotypes were present in all colonies and at both sites. Community profiles differed significantly among colonies and between sites, mostly due to the presence in some Arizona colonies of two species of Enterobacteriaceae not retrieved previously from bees. Analysis of Sanger sequences of rRNA of the Snodgrassella and Gilliamella phylotypes revealed that single bees contain numerous distinct strains of each phylotype. Strains showed some differentiation between localities, especially for the Snodgrassella phylotype.",TRUE,noun
R104,Bioinformatics,R169056,Distinctive Gut Microbiota of Honey Bees Assessed Using Deep Sampling from Individual Worker Bees,S670674,R169066,uses,R167277,Blastn,"Surveys of 16S rDNA sequences from the honey bee, Apis mellifera, have revealed the presence of eight distinctive bacterial phylotypes in intestinal tracts of adult worker bees. Because previous studies have been limited to relatively few sequences from samples pooled from multiple hosts, the extent of variation in this microbiota among individuals within and between colonies and locations has been unclear. We surveyed the gut microbiota of 40 individual workers from two sites, Arizona and Maryland USA, sampling four colonies per site. Universal primers were used to amplify regions of 16S ribosomal RNA genes, and amplicons were sequenced using 454 pyrotag methods, enabling analysis of about 330,000 bacterial reads. Over 99% of these sequences belonged to clusters for which the first blastn hits in GenBank were members of the known bee phylotypes. Four phylotypes, one within Gammaproteobacteria (corresponding to “Candidatus Gilliamella apicola”) one within Betaproteobacteria (“Candidatus Snodgrassella alvi”), and two within Lactobacillus, were present in every bee, though their frequencies varied. The same typical bacterial phylotypes were present in all colonies and at both sites. Community profiles differed significantly among colonies and between sites, mostly due to the presence in some Arizona colonies of two species of Enterobacteriaceae not retrieved previously from bees. Analysis of Sanger sequences of rRNA of the Snodgrassella and Gilliamella phylotypes revealed that single bees contain numerous distinct strains of each phylotype. Strains showed some differentiation between localities, especially for the Snodgrassella phylotype.",TRUE,noun
R104,Bioinformatics,R169215,Ecological Genetics of Chinese Rhesus Macaque in Response to Mountain Building: All Things Are Not Equal,S671381,R169236,uses,R167406,Bottleneck,"Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51–3.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78–1.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. 
mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.",TRUE,noun
R104,Bioinformatics,R168467,CellProfiler 3.0: Next-generation image processing for biology,S668148,R168469,creates,R166902,CellProfiler,"CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler’s infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.",TRUE,noun
R104,Bioinformatics,R168467,CellProfiler 3.0: Next-generation image processing for biology,S668150,R168470,uses,R166902,CellProfiler,"CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler’s infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.",TRUE,noun
R104,Bioinformatics,R169612,Intrinsic Functional Connectivity in Salience and Default Mode Networks and Aberrant Social Processes in Youth at Ultra-High Risk for Psychosis,S673239,R169617,uses,R167647,conn,"Social processes are key to navigating the world, and investigating their underlying mechanisms and cognitive architecture can aid in understanding disease states such as schizophrenia, where social processes are highly impacted. Evidence suggests that social processes are impaired in individuals at ultra high-risk for the development of psychosis (UHR). Understanding these phenomena in UHR youth may clarify disease etiology and social processes in a period that is characterized by significantly fewer confounds than schizophrenia. Furthermore, understanding social processing deficits in this population will help explain these processes in healthy individuals. The current study examined resting state connectivity of the salience (SN) and default mode networks (DMN) in association with facial emotion recognition (FER), an integral aspect of social processes, as well as broader social functioning (SF) in UHR individuals and healthy controls. Consistent with the existing literature, UHR youth were impaired in FER and SF when compared with controls. In the UHR group, we found increased connectivity between the SN and the medial prefrontal cortex, an area of the DMN relative to controls. In UHR youth, the DMN exhibited both positive and negative correlations with the somatosensory cortex/cerebellum and precuneus, respectively, which was linked with better FER performance. For SF, results showed that sensory processing links with the SN might be important in allowing for better SF for both groups, but especially in controls where sensory processing is likely to be unimpaired. These findings clarify how social processing deficits may manifest in psychosis, and underscore the importance of SN and DMN connectivity for social processing more generally.",TRUE,noun
R104,Bioinformatics,R168577,dcGOR: An R Package for Analysing Ontologies and Protein Domain Annotations,S668538,R168579,deposits,R166970,dcGOR,"I introduce an open-source R package ‘dcGOR’ to provide the bioinformatics community with the ease to analyse ontologies and protein domain annotations, particularly those in the dcGO database. The dcGO is a comprehensive resource for protein domain annotations using a panel of ontologies including Gene Ontology. Although increasing in popularity, this database needs statistical and graphical support to meet its full potential. Moreover, there are no bioinformatics tools specifically designed for domain ontology analysis. As an add-on package built in the R software environment, dcGOR offers a basic infrastructure with great flexibility and functionality. It implements new data structure to represent domains, ontologies, annotations, and all analytical outputs as well. For each ontology, it provides various mining facilities, including: (i) domain-based enrichment analysis and visualisation; (ii) construction of a domain (semantic similarity) network according to ontology annotations; and (iii) significance analysis for estimating a contact (statistical significance) network. To reduce runtime, most analyses support high-performance parallel computing. Taking as inputs a list of protein domains of interest, the package is able to easily carry out in-depth analyses in terms of functional, phenotypic and diseased relevance, and network-level understanding. More importantly, dcGOR is designed to allow users to import and analyse their own ontologies and annotations on domains (taken from SCOP, Pfam and InterPro) and RNAs (from Rfam) as well. The package is freely available at CRAN for easy installation, and also at GitHub for version control. The dedicated website with reproducible demos can be found at http://supfam.org/dcGOR.",TRUE,noun
R104,Bioinformatics,R150549,Classifying semantic relations in bioscience texts,S603631,R150551,Concept types,R150515,Disease,"A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities ""treatment"" and ""disease"" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.",TRUE,noun
R104,Bioinformatics,R168608,Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale,S668672,R168610,creates,R166992,Ensembler,"The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.",TRUE,noun
R104,Bioinformatics,R168608,Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale,S668676,R168612,deposits,R166993,Ensembler,"The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.",TRUE,noun
R104,Bioinformatics,R170241,"Malaria knowledge and its associated factors among pregnant women attending antenatal clinic of Adis Zemen Hospital, North-western Ethiopia, 2018",S676300,R170242,uses,R168027,epidata,"Introduction In Ethiopia, the burden of malaria during pregnancy remains a public health problem. Having a good malaria knowledge leads to practicing the prevention of malaria and seeking a health care. Researches regarding pregnant women’s knowledge on malaria in Ethiopia is limited. So the aim of this study was to assess malaria knowledge and its associated factors among pregnant woman, 2018. Methods An institutional-based cross-sectional study was conducted in Adis Zemen Hospital. Data were collected using pre-tested, an interviewer-administered structured questionnaire among 236 mothers. Women’s knowledge on malaria was measured using six malaria-related questions (cause of malaria, mode of transmission, signs and symptoms, complication and prevention of malaria). The collected data were entered using Epidata version 3.1 and exported to SPSS version 20 for analysis. Bivariate and multivariate logistic regressions were computed to identify predictor variables at 95% confidence interval. Variables having P value of <0.05 were considered as predictor variables of malaria knowledge. Result A total of 235 pregnant women participated which makes the response rate 99.6%. One hundred seventy two pregnant women (73.2%) of mothers had good knowledge on malaria. Women who were from urban (AOR; 2.4: CI; 1.8, 5.7), had better family monthly income (AOR; 3.4: CI; 2.7, 3.8), attended education (AOR; 1.8: CI; 1.4, 3.5) were more knowledgeable. Conclusion and recommendation Majority of participants had good knowledge on malaria. Educational status, household monthly income and residence were predictors of malaria knowledge. Increasing women’s knowledge especially for those who are from rural, have no education, and have low monthly income is still needed.",TRUE,noun
R104,Bioinformatics,R168574,FamSeq: A Variant Calling Program for Family-Based Sequencing Data Using Graphics Processing Units,S668525,R168575,creates,R166967,FamSeq,"Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.",TRUE,noun
R104,Bioinformatics,R168574,FamSeq: A Variant Calling Program for Family-Based Sequencing Data Using Graphics Processing Units,S668527,R168576,deposits,R166968,FamSeq,"Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.",TRUE,noun
R104,Bioinformatics,R148050,Tagging gene and protein names in biomedical text,S593720,R148052,Concept types,R148042,Gene,"MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.",TRUE,noun
R104,Bioinformatics,R169215,Ecological Genetics of Chinese Rhesus Macaque in Response to Mountain Building: All Things Are Not Equal,S671385,R169238,uses,R167408,Geneland,"Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51–3.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78–1.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.",TRUE,noun
R104,Bioinformatics,R168738,ggsashimi: Sashimi plot revised for browser- and annotation-independent splicing visualization,S669196,R168739,creates,R167070,ggsashimi,"We present ggsashimi, a command-line tool for the visualization of splicing events across multiple samples. Given a specified genomic region, ggsashimi creates sashimi plots for individual RNA-seq experiments as well as aggregated plots for groups of experiments, a feature unique to this software. Compared to the existing versions of programs generating sashimi plots, it uses popular bioinformatics file formats, it is annotation-independent, and allows the visualization of splicing events even for large genomic regions by scaling down the genomic segments between splice sites. ggsashimi is freely available at https://github.com/guigolab/ggsashimi. It is implemented in python, and internally generates R code for plotting.",TRUE,noun
R104,Bioinformatics,R168738,ggsashimi: Sashimi plot revised for browser- and annotation-independent splicing visualization,S669198,R168740,deposits,R167071,ggsashimi,"We present ggsashimi, a command-line tool for the visualization of splicing events across multiple samples. Given a specified genomic region, ggsashimi creates sashimi plots for individual RNA-seq experiments as well as aggregated plots for groups of experiments, a feature unique to this software. Compared to the existing versions of programs generating sashimi plots, it uses popular bioinformatics file formats, it is annotation-independent, and allows the visualization of splicing events even for large genomic regions by scaling down the genomic segments between splice sites. ggsashimi is freely available at https://github.com/guigolab/ggsashimi. It is implemented in python, and internally generates R code for plotting.",TRUE,noun
R104,Bioinformatics,R168577,dcGOR: An R Package for Analysing Ontologies and Protein Domain Annotations,S668546,R168583,uses,R166944,GitHub,"I introduce an open-source R package ‘dcGOR’ to provide the bioinformatics community with the ease to analyse ontologies and protein domain annotations, particularly those in the dcGO database. The dcGO is a comprehensive resource for protein domain annotations using a panel of ontologies including Gene Ontology. Although increasing in popularity, this database needs statistical and graphical support to meet its full potential. Moreover, there are no bioinformatics tools specifically designed for domain ontology analysis. As an add-on package built in the R software environment, dcGOR offers a basic infrastructure with great flexibility and functionality. It implements new data structure to represent domains, ontologies, annotations, and all analytical outputs as well. For each ontology, it provides various mining facilities, including: (i) domain-based enrichment analysis and visualisation; (ii) construction of a domain (semantic similarity) network according to ontology annotations; and (iii) significance analysis for estimating a contact (statistical significance) network. To reduce runtime, most analyses support high-performance parallel computing. Taking as inputs a list of protein domains of interest, the package is able to easily carry out in-depth analyses in terms of functional, phenotypic and diseased relevance, and network-level understanding. More importantly, dcGOR is designed to allow users to import and analyse their own ontologies and annotations on domains (taken from SCOP, Pfam and InterPro) and RNAs (from Rfam) as well. The package is freely available at CRAN for easy installation, and also at GitHub for version control. The dedicated website with reproducible demos can be found at http://supfam.org/dcGOR.",TRUE,noun
R104,Bioinformatics,R168595,VDJtools: Unifying Post-analysis of T Cell Receptor Repertoires,S668613,R168598,uses,R166944,GitHub,"Despite the growing number of immune repertoire sequencing studies, the field still lacks software for analysis and comprehension of this high-dimensional data. Here we report VDJtools, a complementary software suite that solves a wide range of T cell receptor (TCR) repertoires post-analysis tasks, provides a detailed tabular output and publication-ready graphics, and is built on top of a flexible API. Using TCR datasets for a large cohort of unrelated healthy donors, twins, and multiple sclerosis patients we demonstrate that VDJtools greatly facilitates the analysis and leads to sound biological conclusions. VDJtools software and documentation are available at https://github.com/mikessh/vdjtools.",TRUE,noun
R104,Bioinformatics,R168599,Wham: Identifying Structural Variants of Biological Consequence,S668633,R168602,uses,R166986,github,"Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools–Lumpy, Delly and SoftSearch–and demonstrate Wham’s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.",TRUE,noun
R104,Bioinformatics,R168608,Ensembler: Enabling High-Throughput Molecular Simulations at the Superfamily Scale,S668678,R168613,uses,R166944,GitHub,"The rapidly expanding body of available genomic and protein structural data provides a rich resource for understanding protein dynamics with biomolecular simulation. While computational infrastructure has grown rapidly, simulations on an omics scale are not yet widespread, primarily because software infrastructure to enable simulations at this scale has not kept pace. It should now be possible to study protein dynamics across entire (super)families, exploiting both available structural biology data and conformational similarities across homologous proteins. Here, we present a new tool for enabling high-throughput simulation in the genomics era. Ensembler takes any set of sequences - from a single sequence to an entire superfamily - and shepherds them through various stages of modeling and refinement to produce simulation-ready structures. This includes comparative modeling to all relevant PDB structures (which may span multiple conformational states of interest), reconstruction of missing loops, addition of missing atoms, culling of nearly identical structures, assignment of appropriate protonation states, solvation in explicit solvent, and refinement and filtering with molecular simulation to ensure stable simulation. The output of this pipeline is an ensemble of structures ready for subsequent molecular simulations using computer clusters, supercomputers, or distributed computing projects like Folding@home. Ensembler thus automates much of the time-consuming process of preparing protein models suitable for simulation, while allowing scalability up to entire superfamilies. A particular advantage of this approach can be found in the construction of kinetic models of conformational dynamics - such as Markov state models (MSMs) - which benefit from a diverse array of initial configurations that span the accessible conformational states to aid sampling. We demonstrate the power of this approach by constructing models for all catalytic domains in the human tyrosine kinase family, using all available kinase catalytic domain structures from any organism as structural templates. Ensembler is free and open source software licensed under the GNU General Public License (GPL) v2. It is compatible with Linux and OS X. The latest release can be installed via the conda package manager, and the latest source can be downloaded from https://github.com/choderalab/ensembler.",TRUE,noun
R104,Bioinformatics,R168616,QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks,S668716,R168622,uses,R166944,GitHub,"Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.",TRUE,noun
R104,Bioinformatics,R168735,PyPhi: A toolbox for integrated information theory,S669184,R168737,uses,R166944,GitHub,"Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi’s functionality in the course of analyzing an example system, and then describe details of the algorithm’s design and implementation. PyPhi can be installed with Python’s package manager via the command ‘pip install pyphi’ on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi. Comprehensive and continually-updated documentation is available at https://pyphi.readthedocs.io. The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users. A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html.",TRUE,noun
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536123,R135550,Used models,L378124,InceptionResNet-V2,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,noun
R104,Bioinformatics,R168498,Podbat: A Novel Genomic Tool Reveals Swr1-Independent H2A.Z Incorporation at Gene Coding Sequences through Epigenetic Meta-Analysis,S668247,R168502,uses,R166921,Java,"Epigenetic regulation consists of a multitude of different modifications that determine active and inactive states of chromatin. Conditions such as cell differentiation or exposure to environmental stress require concerted changes in gene expression. To interpret epigenomics data, a spectrum of different interconnected datasets is needed, ranging from the genome sequence and positions of histones, together with their modifications and variants, to the transcriptional output of genomic regions. Here we present a tool, Podbat (Positioning database and analysis tool), that incorporates data from various sources and allows detailed dissection of the entire range of chromatin modifications simultaneously. Podbat can be used to analyze, visualize, store and share epigenomics data. Among other functions, Podbat allows data-driven determination of genome regions of differential protein occupancy or RNA expression using Hidden Markov Models. Comparisons between datasets are facilitated to enable the study of the comprehensive chromatin modification system simultaneously, irrespective of data-generating technique. Any organism with a sequenced genome can be accommodated. We exemplify the power of Podbat by reanalyzing all to-date published genome-wide data for the histone variant H2A.Z in fission yeast together with other histone marks and also phenotypic response data from several sources. This meta-analysis led to the unexpected finding of H2A.Z incorporation in the coding regions of genes encoding proteins involved in the regulation of meiosis and genotoxic stress responses. This incorporation was partly independent of the H2A.Z-incorporating remodeller Swr1. We verified an Swr1-independent role for H2A.Z following genotoxic stress in vivo. Podbat is open source software freely downloadable from www.podbat.org, distributed under the GNU LGPL license. User manuals, test data and instructions are available at the website, as well as a repository for third party–developed plug-in modules. Podbat requires Java version 1.6 or higher.",TRUE,noun
R104,Bioinformatics,R168584,PathVisio 3: An Extendable Pathway Analysis Toolbox,S668567,R168588,uses,R166921,Java,"PathVisio is a commonly used pathway editor, visualization and analysis software. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper was cited more than 170 times and PathVisio was used in many different biological studies. As an online editor PathVisio is also integrated in the community curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a new powerful extension systems that allows other developers to contribute additional functionality in form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 has been downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.",TRUE,noun
R104,Bioinformatics,R168616,QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks,S668708,R168618,uses,R166921,Java,"Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.",TRUE,noun
R104,Bioinformatics,R168642,AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework,S668798,R168645,uses,R166921,Java,"Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or ""best"" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit ""AlignerBoost"", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.",TRUE,noun
R104,Bioinformatics,R168655,ASPASIA: A toolkit for evaluating the effects of biological interventions on SBML model behaviour,S668848,R168657,uses,R166921,Java,"A calibrated computational model reflects behaviours that are expected or observed in a complex system, providing a baseline upon which sensitivity analysis techniques can be used to analyse pathways that may impact model responses. However, calibration of a model where a behaviour depends on an intervention introduced after a defined time point is difficult, as model responses may be dependent on the conditions at the time the intervention is applied. We present ASPASIA (Automated Simulation Parameter Alteration and SensItivity Analysis), a cross-platform, open-source Java toolkit that addresses a key deficiency in software tools for understanding the impact an intervention has on system behaviour for models specified in Systems Biology Markup Language (SBML). ASPASIA can generate and modify models using SBML solver output as an initial parameter set, allowing interventions to be applied once a steady state has been reached. Additionally, multiple SBML models can be generated where a subset of parameter values are perturbed using local and global sensitivity analysis techniques, revealing the model’s sensitivity to the intervention. To illustrate the capabilities of ASPASIA, we demonstrate how this tool has generated novel hypotheses regarding the mechanisms by which Th17-cell plasticity may be controlled in vivo. By using ASPASIA in conjunction with an SBML model of Th17-cell polarisation, we predict that promotion of the Th1-associated transcription factor T-bet, rather than inhibition of the Th17-associated transcription factor RORγt, is sufficient to drive switching of Th17 cells towards an IFN-γ-producing phenotype. Our approach can be applied to all SBML-encoded models to predict the effect that intervention strategies have on system behaviour. ASPASIA, released under the Artistic License (2.0), can be downloaded from http://www.york.ac.uk/ycil/software.",TRUE,noun
R104,Bioinformatics,R168616,QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks,S668710,R168619,uses,R166996,JavaScript,"Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.",TRUE,noun
R104,Bioinformatics,R168472,SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection,S668175,R168477,uses,R166907,Linux,"Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozgyosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).",TRUE,noun
R104,Bioinformatics,R168508,ACME: Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell Membranes,S668289,R168514,uses,R166907,Linux,"The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).",TRUE,noun
R104,Bioinformatics,R168556,Pep2Path: Automated Mass Spectrometry-Guided Genome Mining of Peptidic Natural Products,S668472,R168561,uses,R166907,Linux,"Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.",TRUE,noun
R104,Bioinformatics,R168508,ACME: Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell Membranes,S668287,R168513,uses,R166928,Mac,"The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).",TRUE,noun
R104,Bioinformatics,R169394,"Identification of Priority Conservation Areas and Potential Corridors for Jaguars in the Caatinga Biome, Brazil",S672204,R169395,uses,R167503,Maxent,"The jaguar, Panthera onca, is a top predator with the extant population found within the Brazilian Caatinga biome now known to be on the brink of extinction. Designing new conservation units and potential corridors are therefore crucial for the long-term survival of the species within the Caatinga biome. Thus, our aims were: 1) to recognize suitable areas for jaguar occurrence, 2) to delineate areas for jaguar conservation (PJCUs), 3) to design corridors among priority areas, and 4) to prioritize PJCUs. A total of 62 points records of jaguar occurrence and 10 potential predictors were analyzed in a GIS environment. A predictive distributional map was obtained using Species Distribution Modeling (SDM) as performed by the Maximum Entropy (Maxent) algorithm. Areas equal to or higher than the median suitability value of 0.595 were selected as of high suitability for jaguar occurrence and named as Priority Jaguar Conservation Units (PJCU). Ten PJCUs with sizes varying from 23.6 km2 to 4,311.0 km2 were identified. Afterwards, we combined the response curve, as generated by SDM, and expert opinions to create a permeability matrix and to identify least cost corridors and buffer zones between each PJCU pair. Connectivity corridors and buffer zone for jaguar movement included an area of 8.884,26 km2 and the total corridor length is about 160.94 km. Prioritizing criteria indicated the PJCU representing c.a. 68.61% of the total PJCU area (PJCU # 1) as of high priority for conservation and connectivity with others PJCUs (PJCUs # 4, 5 and 7) desirable for the long term survival of the species. In conclusion, by using the jaguar as a focal species and combining SDM and expert opinion we were able to create a valid framework for practical conservation actions at the Caatinga biome. The same approach could be used for the conservation of other carnivores.",TRUE,noun
R104,Bioinformatics,R169876,Global Potential Distribution of Bactrocera carambolae and the Risks for Fruit Production in Brazil,S674403,R169879,uses,R167813,MaxEnt,"The carambola fruit fly, Bactrocera carambolae, is a tephritid native to Asia that has invaded South America through small-scale trade of fruits from Indonesia. The economic losses associated with biological invasions of other fruit flies around the world and the polyphagous behaviour of B. carambolae have prompted much concern among government agencies and farmers with the potential spread of this pest. Here, ecological niche models were employed to identify suitable environments available to B. carambolae in a global scale and assess the extent of the fruit acreage that may be at risk of attack in Brazil. Overall, 30 MaxEnt models built with different combinations of environmental predictors and settings were evaluated for predicting the potential distribution of the carambola fruit fly. The best model was selected based on threshold-independent and threshold-dependent metrics. Climatically suitable areas were identified in tropical and subtropical regions of Central and South America, Sub-Saharan Africa, west and east coast of India and northern Australia. The suitability map of B. carambolae was intersected against maps of fruit acreage in Brazil. The acreage under potential risk of attack varied widely among fruit species, which is expected because the production areas are concentrated in different regions of the country. The production of cashew is the one that is at higher risk, with almost 90% of its acreage within the suitable range of B. carambolae, followed by papaya (78%), tangerine (51%), guava (38%), lemon (30%), orange (29%), mango (24%) and avocado (20%). This study provides an important contribution to the knowledge of the ecology of B. carambolae, and the information generated here can be used by government agencies as a decision-making tool to prevent the carambola fruit fly spread across the world.",TRUE,noun
R104,Bioinformatics,R148043,MedPost: a part-of-speech tagger for bioMedical text,S593691,R148045,model,R148036,MedPost,"SUMMARY We present a part-of-speech tagger that achieves over 97% accuracy on MEDLINE citations. AVAILABILITY Software, documentation and a corpus of 5700 manually tagged sentences are available at ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz",TRUE,noun
R104,Bioinformatics,R168679,Metabomatching: Using genetic association to identify metabolites in proton NMR spectroscopy,S668960,R168680,creates,R167035,Metabomatching,"A metabolome-wide genome-wide association study (mGWAS) aims to discover the effects of genetic variants on metabolome phenotypes. Most mGWASes use as phenotypes concentrations of limited sets of metabolites that can be identified and quantified from spectral information. In contrast, in an untargeted mGWAS both identification and quantification are forgone and, instead, all measured metabolome features are tested for association with genetic variants. While the untargeted approach does not discard data that may have eluded identification, the interpretation of associated features remains a challenge. To address this issue, we developed metabomatching to identify the metabolites underlying significant associations observed in untargeted mGWASes on proton NMR metabolome data. Metabomatching capitalizes on genetic spiking, the concept that because metabolome features associated with a genetic variant tend to correspond to the peaks of the NMR spectrum of the underlying metabolite, genetic association can allow for identification. Applied to the untargeted mGWASes in the SHIP and CoLaus cohorts and using 180 reference NMR spectra of the urine metabolome database, metabomatching successfully identified the underlying metabolite in 14 of 19, and 8 of 9 associations, respectively. The accuracy and efficiency of our method make it a strong contender for facilitating or complementing metabolomics analyses in large cohorts, where the availability of genetic, or other data, enables our approach, but targeted quantification is limited.",TRUE,noun
R104,Bioinformatics,R168679,Metabomatching: Using genetic association to identify metabolites in proton NMR spectroscopy,S668964,R168682,deposits,R167037,Metabomatching,"A metabolome-wide genome-wide association study (mGWAS) aims to discover the effects of genetic variants on metabolome phenotypes. Most mGWASes use as phenotypes concentrations of limited sets of metabolites that can be identified and quantified from spectral information. In contrast, in an untargeted mGWAS both identification and quantification are forgone and, instead, all measured metabolome features are tested for association with genetic variants. While the untargeted approach does not discard data that may have eluded identification, the interpretation of associated features remains a challenge. To address this issue, we developed metabomatching to identify the metabolites underlying significant associations observed in untargeted mGWASes on proton NMR metabolome data. Metabomatching capitalizes on genetic spiking, the concept that because metabolome features associated with a genetic variant tend to correspond to the peaks of the NMR spectrum of the underlying metabolite, genetic association can allow for identification. Applied to the untargeted mGWASes in the SHIP and CoLaus cohorts and using 180 reference NMR spectra of the urine metabolome database, metabomatching successfully identified the underlying metabolite in 14 of 19, and 8 of 9 associations, respectively. The accuracy and efficiency of our method make it a strong contender for facilitating or complementing metabolomics analyses in large cohorts, where the availability of genetic, or other data, enables our approach, but targeted quantification is limited.",TRUE,noun
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536122,R135550,Used models,L378123,MobileNet,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,noun
R104,Bioinformatics,R168616,QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks,S668714,R168621,uses,R166998,MySQL,"Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.",TRUE,noun
R104,Bioinformatics,R170184,Thought experiment: Decoding cognitive processes from the fMRI data of one individual,S676032,R170191,uses,R168000,NeuroSynth,"Cognitive processes, such as the generation of language, can be mapped onto the brain using fMRI. These maps can in turn be used for decoding the respective processes from the brain activation patterns. Given individual variations in brain anatomy and organization, analyzes on the level of the single person are important to improve our understanding of how cognitive processes correspond to patterns of brain activity. They also allow to advance clinical applications of fMRI, because in the clinical setting making diagnoses for single cases is imperative. In the present study, we used mental imagery tasks to investigate language production, motor functions, visuo-spatial memory, face processing, and resting-state activity in a single person. Analysis methods were based on similarity metrics, including correlations between training and test data, as well as correlations with maps from the NeuroSynth meta-analysis. The goal was to make accurate predictions regarding the cognitive domain (e.g. language) and the specific content (e.g. animal names) of single 30-second blocks. Four teams used the dataset, each blinded regarding the true labels of the test data. Results showed that the similarity metrics allowed to reach the highest degrees of accuracy when predicting the cognitive domain of a block. Overall, 23 of the 25 test blocks could be correctly predicted by three of the four teams. Excluding the unspecific rest condition, up to 10 out of 20 blocks could be successfully decoded regarding their specific content. The study shows how the information contained in a single fMRI session and in each of its single blocks can allow to draw inferences about the cognitive processes an individual engaged in. Simple methods like correlations between blocks of fMRI data can serve as highly reliable approaches for cognitive decoding. We discuss the implications of our results in the context of clinical fMRI applications, with a focus on how decoding can support functional localization.",TRUE,noun
R104,Bioinformatics,R168725,OpenSim: Simulating musculoskeletal dynamics and neuromuscular control to study human and animal movement,S669144,R168728,deposits,R167065,OpenSim,"Movement is fundamental to human and animal life, emerging through interaction of complex neural, muscular, and skeletal systems. Study of movement draws from and contributes to diverse fields, including biology, neuroscience, mechanics, and robotics. OpenSim unites methods from these fields to create fast and accurate simulations of movement, enabling two fundamental tasks. First, the software can calculate variables that are difficult to measure experimentally, such as the forces generated by muscles and the stretch and recoil of tendons during movement. Second, OpenSim can predict novel movements from models of motor control, such as kinematic adaptations of human gait during loaded or inclined walking. Changes in musculoskeletal dynamics following surgery or due to human–device interaction can also be simulated; these simulations have played a vital role in several applications, including the design of implantable mechanical devices to improve human grasping in individuals with paralysis. OpenSim is an extensible and user-friendly software package built on decades of knowledge about computational modeling and simulation of biomechanical systems. OpenSim’s design enables computational scientists to create new state-of-the-art software tools and empowers others to use these tools in research and clinical applications. OpenSim supports a large and growing community of biomechanics and rehabilitation researchers, facilitating exchange of models and simulations for reproducing and extending discoveries. Examples, tutorials, documentation, and an active user forum support this community. The OpenSim software is covered by the Apache License 2.0, which permits its use for any purpose including both nonprofit and commercial applications. The source code is freely and anonymously accessible on GitHub, where the community is welcomed to make contributions. Platform-specific installers of OpenSim include a GUI and are available on simtk.org.",TRUE,noun
R104,Bioinformatics,R168584,PathVisio 3: An Extendable Pathway Analysis Toolbox,S668565,R168587,creates,R166976,PathVisio,"PathVisio is a commonly used pathway editor, visualization and analysis software. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper was cited more than 170 times and PathVisio was used in many different biological studies. As an online editor PathVisio is also integrated in the community curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a new powerful extension systems that allows other developers to contribute additional functionality in form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 has been downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.",TRUE,noun
R104,Bioinformatics,R168584,PathVisio 3: An Extendable Pathway Analysis Toolbox,S668569,R168589,deposits,R166977,PathVisio,"PathVisio is a commonly used pathway editor, visualization and analysis software. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper was cited more than 170 times and PathVisio was used in many different biological studies. As an online editor PathVisio is also integrated in the community curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements of the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a new powerful extension systems that allows other developers to contribute additional functionality in form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 has been downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.",TRUE,noun
R104,Bioinformatics,R168556,Pep2Path: Automated Mass Spectrometry-Guided Genome Mining of Peptidic Natural Products,S668464,R168557,creates,R166955,Pep2Path,"Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. 
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.",TRUE,noun
R104,Bioinformatics,R168556,Pep2Path: Automated Mass Spectrometry-Guided Genome Mining of Peptidic Natural Products,S668476,R168563,deposits,R166960,Pep2Path,"Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. 
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.",TRUE,noun
R104,Bioinformatics,R168638,"PhyloBot: A Web Portal for Automated Phylogenetics, Ancestral Sequence Reconstruction, and Exploration of Mutational Trajectories",S668780,R168639,creates,R167008,PhyloBot,"The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and “resurrect” (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server.",TRUE,noun
R104,Bioinformatics,R168638,"PhyloBot: A Web Portal for Automated Phylogenetics, Ancestral Sequence Reconstruction, and Exploration of Mutational Trajectories",S668784,R168641,deposits,R167010,PhyloBot,"The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and “resurrect” (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server.",TRUE,noun
R104,Bioinformatics,R168483,PhyloGibbs-MP: Module Prediction and Discriminative Motif-Finding by Gibbs Sampling,S668194,R168485,uses,R166912,PhyloGibbs,"PhyloGibbs, our recent Gibbs-sampling motif-finder, takes phylogeny into account in detecting binding sites for transcription factors in DNA and assigns posterior probabilities to its predictions obtained by sampling the entire configuration space. Here, in an extension called PhyloGibbs-MP, we widen the scope of the program, addressing two major problems in computational regulatory genomics. First, PhyloGibbs-MP can localise predictions to small, undetermined regions of a large input sequence, thus effectively predicting cis-regulatory modules (CRMs) ab initio while simultaneously predicting binding sites in those modules—tasks that are usually done by two separate programs. PhyloGibbs-MP's performance at such ab initio CRM prediction is comparable with or superior to dedicated module-prediction software that use prior knowledge of previously characterised transcription factors. Second, PhyloGibbs-MP can predict motifs that differentiate between two (or more) different groups of regulatory regions, that is, motifs that occur preferentially in one group over the others. While other “discriminative motif-finders” have been published in the literature, PhyloGibbs-MP's implementation has some unique features and flexibility. Benchmarks on synthetic and actual genomic data show that this algorithm is successful at enhancing predictions of differentiating sites and suppressing predictions of common sites and compares with or outperforms other discriminative motif-finders on actual genomic data. Additional enhancements include significant performance and speed improvements, the ability to use “informative priors” on known transcription factors, and the ability to output annotations in a format that can be visualised with the Generic Genome Browser. 
In stand-alone motif-finding, PhyloGibbs-MP remains competitive, outperforming PhyloGibbs-1.0 and other programs on benchmark data.",TRUE,noun
R104,Bioinformatics,R168483,PhyloGibbs-MP: Module Prediction and Discriminative Motif-Finding by Gibbs Sampling,S668196,R168486,deposits,R166913,PhyloGibbs-MP,"PhyloGibbs, our recent Gibbs-sampling motif-finder, takes phylogeny into account in detecting binding sites for transcription factors in DNA and assigns posterior probabilities to its predictions obtained by sampling the entire configuration space. Here, in an extension called PhyloGibbs-MP, we widen the scope of the program, addressing two major problems in computational regulatory genomics. First, PhyloGibbs-MP can localise predictions to small, undetermined regions of a large input sequence, thus effectively predicting cis-regulatory modules (CRMs) ab initio while simultaneously predicting binding sites in those modules—tasks that are usually done by two separate programs. PhyloGibbs-MP's performance at such ab initio CRM prediction is comparable with or superior to dedicated module-prediction software that use prior knowledge of previously characterised transcription factors. Second, PhyloGibbs-MP can predict motifs that differentiate between two (or more) different groups of regulatory regions, that is, motifs that occur preferentially in one group over the others. While other “discriminative motif-finders” have been published in the literature, PhyloGibbs-MP's implementation has some unique features and flexibility. Benchmarks on synthetic and actual genomic data show that this algorithm is successful at enhancing predictions of differentiating sites and suppressing predictions of common sites and compares with or outperforms other discriminative motif-finders on actual genomic data. Additional enhancements include significant performance and speed improvements, the ability to use “informative priors” on known transcription factors, and the ability to output annotations in a format that can be visualised with the Generic Genome Browser. 
In stand-alone motif-finding, PhyloGibbs-MP remains competitive, outperforming PhyloGibbs-1.0 and other programs on benchmark data.",TRUE,noun
R104,Bioinformatics,R168697,PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems,S669029,R168698,creates,R167047,PhysiCell,"Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. 
PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.",TRUE,noun
R104,Bioinformatics,R168697,PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems,S669033,R168700,deposits,R167048,PhysiCell,"Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. 
PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.",TRUE,noun
R104,Bioinformatics,R168697,PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems,S669031,R168699,uses,R167047,PhysiCell,"Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. 
PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.",TRUE,noun
R104,Bioinformatics,R168498,Podbat: A Novel Genomic Tool Reveals Swr1-Independent H2A.Z Incorporation at Gene Coding Sequences through Epigenetic Meta-Analysis,S668245,R168501,creates,R166920,Podbat,"Epigenetic regulation consists of a multitude of different modifications that determine active and inactive states of chromatin. Conditions such as cell differentiation or exposure to environmental stress require concerted changes in gene expression. To interpret epigenomics data, a spectrum of different interconnected datasets is needed, ranging from the genome sequence and positions of histones, together with their modifications and variants, to the transcriptional output of genomic regions. Here we present a tool, Podbat (Positioning database and analysis tool), that incorporates data from various sources and allows detailed dissection of the entire range of chromatin modifications simultaneously. Podbat can be used to analyze, visualize, store and share epigenomics data. Among other functions, Podbat allows data-driven determination of genome regions of differential protein occupancy or RNA expression using Hidden Markov Models. Comparisons between datasets are facilitated to enable the study of the comprehensive chromatin modification system simultaneously, irrespective of data-generating technique. Any organism with a sequenced genome can be accommodated. We exemplify the power of Podbat by reanalyzing all to-date published genome-wide data for the histone variant H2A.Z in fission yeast together with other histone marks and also phenotypic response data from several sources. This meta-analysis led to the unexpected finding of H2A.Z incorporation in the coding regions of genes encoding proteins involved in the regulation of meiosis and genotoxic stress responses. This incorporation was partly independent of the H2A.Z-incorporating remodeller Swr1. We verified an Swr1-independent role for H2A.Z following genotoxic stress in vivo. 
Podbat is open source software freely downloadable from www.podbat.org, distributed under the GNU LGPL license. User manuals, test data and instructions are available at the website, as well as a repository for third party–developed plug-in modules. Podbat requires Java version 1.6 or higher.",TRUE,noun
R104,Bioinformatics,R168498,Podbat: A Novel Genomic Tool Reveals Swr1-Independent H2A.Z Incorporation at Gene Coding Sequences through Epigenetic Meta-Analysis,S668243,R168500,deposits,R166919,Podbat,"Epigenetic regulation consists of a multitude of different modifications that determine active and inactive states of chromatin. Conditions such as cell differentiation or exposure to environmental stress require concerted changes in gene expression. To interpret epigenomics data, a spectrum of different interconnected datasets is needed, ranging from the genome sequence and positions of histones, together with their modifications and variants, to the transcriptional output of genomic regions. Here we present a tool, Podbat (Positioning database and analysis tool), that incorporates data from various sources and allows detailed dissection of the entire range of chromatin modifications simultaneously. Podbat can be used to analyze, visualize, store and share epigenomics data. Among other functions, Podbat allows data-driven determination of genome regions of differential protein occupancy or RNA expression using Hidden Markov Models. Comparisons between datasets are facilitated to enable the study of the comprehensive chromatin modification system simultaneously, irrespective of data-generating technique. Any organism with a sequenced genome can be accommodated. We exemplify the power of Podbat by reanalyzing all to-date published genome-wide data for the histone variant H2A.Z in fission yeast together with other histone marks and also phenotypic response data from several sources. This meta-analysis led to the unexpected finding of H2A.Z incorporation in the coding regions of genes encoding proteins involved in the regulation of meiosis and genotoxic stress responses. This incorporation was partly independent of the H2A.Z-incorporating remodeller Swr1. We verified an Swr1-independent role for H2A.Z following genotoxic stress in vivo. 
Podbat is open source software freely downloadable from www.podbat.org, distributed under the GNU LGPL license. User manuals, test data and instructions are available at the website, as well as a repository for third party–developed plug-in modules. Podbat requires Java version 1.6 or higher.",TRUE,noun
R104,Bioinformatics,R168472,SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection,S668179,R168479,uses,R166908,PolyPhred,"Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies.
SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).",TRUE,noun
R104,Bioinformatics,R168492,PoreWalker: A Novel Tool for the Identification and Characterization of Channels in Transmembrane Proteins from Their Three-Dimensional Structure,S668224,R168494,creates,R166916,PoreWalker,"Transmembrane channel proteins play pivotal roles in maintaining the homeostasis and responsiveness of cells and the cross-membrane electrochemical gradient by mediating the transport of ions and molecules through biological membranes. Therefore, computational methods which, given a set of 3D coordinates, can automatically identify and describe channels in transmembrane proteins are key tools to provide insights into how they function. Herein we present PoreWalker, a fully automated method, which detects and fully characterises channels in transmembrane proteins from their 3D structures. A stepwise procedure is followed in which the pore centre and pore axis are first identified and optimised using geometric criteria, and then the biggest and longest cavity through the channel is detected. Finally, pore features, including diameter profiles, pore-lining residues, size, shape and regularity of the pore are calculated, providing a quantitative and visual characterization of the channel. To illustrate the use of this tool, the method was applied to several structures of transmembrane channel proteins and was able to identify shape/size/residue features representative of specific channel families. The software is available as a web-based resource at http://www.ebi.ac.uk/thornton-srv/software/PoreWalker/.",TRUE,noun
R104,Bioinformatics,R148050,Tagging gene and protein names in biomedical text,S593719,R148052,Concept types,R147749,Protein,"MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.",TRUE,noun
R104,Bioinformatics,R148576,Exploiting syntax when detecting protein names in text,S595650,R148578,Concept types,R148579,Protein,"This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.",TRUE,noun
R104,Bioinformatics,R150475,Biomedical named entity recognition and linking datasets: survey and our recent development,S603437,R150477,Concept types,R148579,Protein,"Natural language processing (NLP) is widely applied in biological domains to retrieve information from publications. Systems to address numerous applications exist, such as biomedical named entity recognition (BNER), named entity normalization (NEN) and protein-protein interaction extraction (PPIE). High-quality datasets can assist the development of robust and reliable systems; however, due to the endless applications and evolving techniques, the annotations of benchmark datasets may become outdated and inappropriate. In this study, we first review commonly used BNER datasets and their potential annotation problems such as inconsistency and low portability. Then, we introduce a revised version of the JNLPBA dataset that solves potential problems in the original and use state-of-the-art named entity recognition systems to evaluate its portability to different kinds of biomedical literature, including protein-protein interaction and biology events. Lastly, we introduce an ensembled biomedical entity dataset (EBED) by extending the revised JNLPBA dataset with PubMed Central full-text paragraphs, figure captions and patent abstracts. This EBED is a multi-task dataset that covers annotations including gene, disease and chemical entities. In total, it contains 85000 entity mentions, 25000 entity mentions with database identifiers and 5000 attribute tags. To demonstrate the usage of the EBED, we review the BNER track from the AI CUP Biomedical Paper Analysis challenge. Availability: The revised JNLPBA dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/Revised_JNLPBA.zip. The EBED dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/AICUP_EBED_dataset.rar. Contact: Email: thtsai@g.ncu.edu.tw, Tel. 886-3-4227151 ext.
35203, Fax: 886-3-422-2681 Email: hsu@iis.sinica.edu.tw, Tel. 886-2-2788-3799 ext. 2211, Fax: 886-2-2782-4814 Supplementary information: Supplementary data are available at Briefings in Bioinformatics online.",TRUE,noun
R104,Bioinformatics,R168735,PyPhi: A toolbox for integrated information theory,S669182,R168736,deposits,R167069,PyPhi,"Integrated information theory provides a mathematical framework to fully characterize the cause-effect structure of a physical system. Here, we introduce PyPhi, a Python software package that implements this framework for causal analysis and unfolds the full cause-effect structure of discrete dynamical systems of binary elements. The software allows users to easily study these structures, serves as an up-to-date reference implementation of the formalisms of integrated information theory, and has been applied in research on complexity, emergence, and certain biological questions. We first provide an overview of the main algorithm and demonstrate PyPhi’s functionality in the course of analyzing an example system, and then describe details of the algorithm’s design and implementation. PyPhi can be installed with Python’s package manager via the command ‘pip install pyphi’ on Linux and macOS systems equipped with Python 3.4 or higher. PyPhi is open-source and licensed under the GPLv3; the source code is hosted on GitHub at https://github.com/wmayner/pyphi. Comprehensive and continually-updated documentation is available at https://pyphi.readthedocs.io. The pyphi-users mailing list can be joined at https://groups.google.com/forum/#!forum/pyphi-users. A web-based graphical interface to the software is available at http://integratedinformationtheory.org/calculate.html.",TRUE,noun
R104,Bioinformatics,R168556,Pep2Path: Automated Mass Spectrometry-Guided Genome Mining of Peptidic Natural Products,S668468,R168559,uses,R166957,Python,"Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. 
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.",TRUE,noun
R104,Bioinformatics,R168568,IDEPI: Rapid Prediction of HIV-1 Antibody Epitopes and Other Phenotypic Features from Sequence Data Using a Flexible Machine Learning Platform,S668508,R168570,uses,R166957,Python,"Since its identification in 1983, HIV-1 has been the focus of a research effort unprecedented in scope and difficulty, whose ultimate goals — a cure and a vaccine – remain elusive. One of the fundamental challenges in accomplishing these goals is the tremendous genetic variability of the virus, with some genes differing at as many as 40% of nucleotide positions among circulating strains. Because of this, the genetic bases of many viral phenotypes, most notably the susceptibility to neutralization by a particular antibody, are difficult to identify computationally. Drawing upon open-source general-purpose machine learning algorithms and libraries, we have developed a software package IDEPI (IDentify EPItopes) for learning genotype-to-phenotype predictive models from sequences with known phenotypes. IDEPI can apply learned models to classify sequences of unknown phenotypes, and also identify specific sequence features which contribute to a particular phenotype. We demonstrate that IDEPI achieves performance similar to or better than that of previously published approaches on four well-studied problems: finding the epitopes of broadly neutralizing antibodies (bNab), determining coreceptor tropism of the virus, identifying compartment-specific genetic signatures of the virus, and deducing drug-resistance associated mutations. The cross-platform Python source code (released under the GPL 3.0 license), documentation, issue tracking, and a pre-configured virtual machine for IDEPI can be found at https://github.com/veg/idepi.",TRUE,noun
R104,Bioinformatics,R168721,Bamgineer: Introduction of simulated allele-specific copy number variants into exome and targeted sequence data sets,S669113,R168723,uses,R166957,Python,"Somatic copy number variations (CNVs) play a crucial role in development of many human cancers. The broad availability of next-generation sequencing data has enabled the development of algorithms to computationally infer CNV profiles from a variety of data types including exome and targeted sequence data; currently the most prevalent types of cancer genomics data. However, systemic evaluation and comparison of these tools remains challenging due to a lack of ground truth reference sets. To address this need, we have developed Bamgineer, a tool written in Python to introduce user-defined haplotype-phased allele-specific copy number events into an existing Binary Alignment Mapping (BAM) file, with a focus on targeted and exome sequencing experiments. As input, this tool requires a read alignment file (BAM format), lists of non-overlapping genome coordinates for introduction of gains and losses (bed file), and an optional file defining known haplotypes (vcf format). To improve runtime performance, Bamgineer introduces the desired CNVs in parallel using queuing and parallel processing on a local machine or on a high-performance computing cluster. As proof-of-principle, we applied Bamgineer to a single high-coverage (mean: 220X) exome sequence file from a blood sample to simulate copy number profiles of 3 exemplar tumors from each of 10 tumor types at 5 tumor cellularity levels (20-100%, 150 BAM files in total). To demonstrate feasibility beyond exome data, we introduced read alignments to a targeted 5-gene cell-free DNA sequencing library to simulate EGFR amplifications at frequencies consistent with circulating tumor DNA (10, 1, 0.1 and 0.01%) while retaining the multimodal insert size distribution of the original data. 
We expect Bamgineer to be of use for development and systematic benchmarking of CNV calling algorithms by users using locally-generated data for a variety of applications. The source code is freely available at http://github.com/pughlab/bamgineer. Author summary: We present Bamgineer, a software program to introduce user-defined, haplotype-specific copy number variants (CNVs) at any frequency into standard Binary Alignment Mapping (BAM) files. Copy number gains are simulated by introducing new DNA sequencing read pairs sampled from existing reads and modified to contain SNPs of the haplotype of interest. This approach retains biases of the original data such as local coverage, strand bias, and insert size. Deletions are simulated by removing reads corresponding to one or both haplotypes. In our proof-of-principle study, we simulated copy number profiles from 10 cancer types at varying cellularity levels typically encountered in clinical samples. We also demonstrated introduction of low frequency CNVs into cell-free DNA sequencing data that retained the bimodal fragment size distribution characteristic of these data. Bamgineer is flexible and enables users to simulate CNVs that reflect characteristics of locally-generated sequence files and can be used for many applications including development and benchmarking of CNV inference tools for a variety of data types.",TRUE,noun
R104,Bioinformatics,R168729,COBRAme: A computational framework for genome-scale models of metabolism and gene expression,S669164,R168732,uses,R166957,Python,"Genome-scale models of metabolism and macromolecular expression (ME-models) explicitly compute the optimal proteome composition of a growing cell. ME-models expand upon the well-established genome-scale models of metabolism (M-models), and they enable a new fundamental understanding of cellular growth. ME-models have increased predictive capabilities and accuracy due to their inclusion of the biosynthetic costs for the machinery of life, but they come with a significant increase in model size and complexity. This challenge results in models which are both difficult to compute and challenging to understand conceptually. As a result, ME-models exist for only two organisms (Escherichia coli and Thermotoga maritima) and are still used by relatively few researchers. To address these challenges, we have developed a new software framework called COBRAme for building and simulating ME-models. It is coded in Python and built on COBRApy, a popular platform for using M-models. COBRAme streamlines computation and analysis of ME-models. It provides tools to simplify constructing and editing ME-models to enable ME-model reconstructions for new organisms. We used COBRAme to reconstruct a condensed E. coli ME-model called iJL1678b-ME. This reformulated model gives functionally identical solutions to previous E. coli ME-models while using 1/6 the number of free variables and solving in less than 10 minutes, a marked improvement over the 6 hour solve time of previous ME-model formulations. Errors in previous ME-models were also corrected leading to 52 additional genes that must be expressed in iJL1678b-ME to grow aerobically in glucose minimal in silico media. 
This manuscript outlines the architecture of COBRAme and demonstrates how ME-models can be created, modified, and shared most efficiently using the new software framework.",TRUE,noun
R104,Bioinformatics,R168616,QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks,S668720,R168624,deposits,R167000,QuIN,"Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLV3 license available on GitHub: https://github.com/UcarLab/QuIN/.",TRUE,noun
R104,Bioinformatics,R168931,Cost-effectiveness of dog rabies vaccination programs in East Africa,S669988,R168934,creates,R167193,RabiesEcon,"Background Dog rabies annually causes 24,000–70,000 deaths globally. We built a spreadsheet tool, RabiesEcon, to aid public health officials to estimate the cost-effectiveness of dog rabies vaccination programs in East Africa. Methods RabiesEcon uses a mathematical model of dog-dog and dog-human rabies transmission to estimate dog rabies cases averted, the cost per human rabies death averted and cost per year of life gained (YLG) due to dog vaccination programs (US 2015 dollars). We used an East African human population of 1 million (approximately 2/3 living in urban setting, 1/3 rural). We considered, using data from the literature, three vaccination options: no vaccination, annual vaccination of 50% of dogs and 20% of dogs vaccinated semi-annually. We assessed 2 transmission scenarios: low (1.2 dogs infected per infectious dog) and high (1.7 dogs infected). We also examined the impact of annually vaccinating 70% of all dogs (World Health Organization recommendation for dog rabies elimination). Results Without dog vaccination, over 10 years there would be a total of approximately 44,000–65,000 rabid dogs and 2,100–2,900 human deaths. Annually vaccinating 50% of dogs results in 10-year reductions of 97% and 75% in rabid dogs (low and high transmission scenarios, respectively), approximately 2,000–1,600 human deaths averted, and an undiscounted cost-effectiveness of $451-$385 per life saved. Semi-annual vaccination of 20% of dogs results in 10-year reductions of 94% and 78% in rabid dogs, and approximately 2,000–1,900 human deaths averted, and cost $404-$305 per life saved. In the low transmission scenario, vaccinating either 50% or 70% of dogs eliminated dog rabies. Results were most sensitive to dog birth rate and the initial rate of dog-to-dog transmission (Ro). 
Conclusions Dog rabies vaccination programs can control, and potentially eliminate, dog rabies. The frequency and coverage of vaccination programs, along with the level of dog rabies transmission, can affect the cost-effectiveness of such programs. RabiesEcon can aid both the planning and assessment of dog rabies vaccination programs.",TRUE,noun
R104,Bioinformatics,R168931,Cost-effectiveness of dog rabies vaccination programs in East Africa,S669986,R168933,uses,R167193,RabiesEcon,"Background Dog rabies annually causes 24,000–70,000 deaths globally. We built a spreadsheet tool, RabiesEcon, to aid public health officials to estimate the cost-effectiveness of dog rabies vaccination programs in East Africa. Methods RabiesEcon uses a mathematical model of dog-dog and dog-human rabies transmission to estimate dog rabies cases averted, the cost per human rabies death averted and cost per year of life gained (YLG) due to dog vaccination programs (US 2015 dollars). We used an East African human population of 1 million (approximately 2/3 living in urban setting, 1/3 rural). We considered, using data from the literature, three vaccination options: no vaccination, annual vaccination of 50% of dogs and 20% of dogs vaccinated semi-annually. We assessed 2 transmission scenarios: low (1.2 dogs infected per infectious dog) and high (1.7 dogs infected). We also examined the impact of annually vaccinating 70% of all dogs (World Health Organization recommendation for dog rabies elimination). Results Without dog vaccination, over 10 years there would be a total of approximately 44,000–65,000 rabid dogs and 2,100–2,900 human deaths. Annually vaccinating 50% of dogs results in 10-year reductions of 97% and 75% in rabid dogs (low and high transmission scenarios, respectively), approximately 2,000–1,600 human deaths averted, and an undiscounted cost-effectiveness of $451-$385 per life saved. Semi-annual vaccination of 20% of dogs results in 10-year reductions of 94% and 78% in rabid dogs, and approximately 2,000–1,900 human deaths averted, and cost $404-$305 per life saved. In the low transmission scenario, vaccinating either 50% or 70% of dogs eliminated dog rabies. Results were most sensitive to dog birth rate and the initial rate of dog-to-dog transmission (Ro). 
Conclusions Dog rabies vaccination programs can control, and potentially eliminate, dog rabies. The frequency and coverage of vaccination programs, along with the level of dog rabies transmission, can affect the cost-effectiveness of such programs. RabiesEcon can aid both the planning and assessment of dog rabies vaccination programs.",TRUE,noun
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345239,R75376,Has evaluation,R52255,recombination,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun
R104,Bioinformatics,R168516,Redirector: Designing Cell Factories by Reconstructing the Metabolic Objective,S668302,R168518,creates,R166930,Redirector,"Advances in computational metabolic optimization are required to realize the full potential of new in vivo metabolic engineering technologies by bridging the gap between computational design and strain development. We present Redirector, a new Flux Balance Analysis-based framework for identifying engineering targets to optimize metabolite production in complex pathways. Previous optimization frameworks have modeled metabolic alterations as directly controlling fluxes by setting particular flux bounds. Redirector develops a more biologically relevant approach, modeling metabolic alterations as changes in the balance of metabolic objectives in the system. This framework iteratively selects enzyme targets, adds the associated reaction fluxes to the metabolic objective, thereby incentivizing flux towards the production of a metabolite of interest. These adjustments to the objective act in competition with cellular growth and represent up-regulation and down-regulation of enzyme mediated reactions. Using the iAF1260 E. coli metabolic network model for optimization of fatty acid production as a test case, Redirector generates designs with as many as 39 simultaneous and 111 unique engineering targets. These designs discover proven in vivo targets, novel supporting pathways and relevant interdependencies, many of which cannot be predicted by other methods. Redirector is available as open and free software, scalable to computational resources, and powerful enough to find all known enzyme targets for fatty acid production.",TRUE,noun
R104,Bioinformatics,R168516,Redirector: Designing Cell Factories by Reconstructing the Metabolic Objective,S668306,R168520,deposits,R166931,Redirector,"Advances in computational metabolic optimization are required to realize the full potential of new in vivo metabolic engineering technologies by bridging the gap between computational design and strain development. We present Redirector, a new Flux Balance Analysis-based framework for identifying engineering targets to optimize metabolite production in complex pathways. Previous optimization frameworks have modeled metabolic alterations as directly controlling fluxes by setting particular flux bounds. Redirector develops a more biologically relevant approach, modeling metabolic alterations as changes in the balance of metabolic objectives in the system. This framework iteratively selects enzyme targets, adds the associated reaction fluxes to the metabolic objective, thereby incentivizing flux towards the production of a metabolite of interest. These adjustments to the objective act in competition with cellular growth and represent up-regulation and down-regulation of enzyme mediated reactions. Using the iAF1260 E. coli metabolic network model for optimization of fatty acid production as a test case, Redirector generates designs with as many as 39 simultaneous and 111 unique engineering targets. These designs discover proven in vivo targets, novel supporting pathways and relevant interdependencies, many of which cannot be predicted by other methods. Redirector is available as open and free software, scalable to computational resources, and powerful enough to find all known enzyme targets for fatty acid production.",TRUE,noun
R104,Bioinformatics,R168516,Redirector: Designing Cell Factories by Reconstructing the Metabolic Objective,S668304,R168519,uses,R166930,Redirector,"Advances in computational metabolic optimization are required to realize the full potential of new in vivo metabolic engineering technologies by bridging the gap between computational design and strain development. We present Redirector, a new Flux Balance Analysis-based framework for identifying engineering targets to optimize metabolite production in complex pathways. Previous optimization frameworks have modeled metabolic alterations as directly controlling fluxes by setting particular flux bounds. Redirector develops a more biologically relevant approach, modeling metabolic alterations as changes in the balance of metabolic objectives in the system. This framework iteratively selects enzyme targets, adds the associated reaction fluxes to the metabolic objective, thereby incentivizing flux towards the production of a metabolite of interest. These adjustments to the objective act in competition with cellular growth and represent up-regulation and down-regulation of enzyme mediated reactions. Using the iAF1260 E. coli metabolic network model for optimization of fatty acid production as a test case, Redirector generates designs with as many as 39 simultaneous and 111 unique engineering targets. These designs discover proven in vivo targets, novel supporting pathways and relevant interdependencies, many of which cannot be predicted by other methods. Redirector is available as open and free software, scalable to computational resources, and powerful enough to find all known enzyme targets for fatty acid production.",TRUE,noun
R104,Bioinformatics,R38466,"Biotea-2-Bioschemas, facilitating structured markup for semantically annotated scholarly publications",S126232,R38472,Related Resource,R38474,schema.org,"The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project “Biotea”), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.",TRUE,noun
R104,Bioinformatics,R168741,scPipe: A flexible R/Bioconductor preprocessing pipeline for single-cell RNA-sequencing data,S669215,R168742,deposits,R167072,scPipe,"Single-cell RNA sequencing (scRNA-seq) technology allows researchers to profile the transcriptomes of thousands of cells simultaneously. Protocols that incorporate both designed and random barcodes have greatly increased the throughput of scRNA-seq, but give rise to a more complex data structure. There is a need for new tools that can handle the various barcoding strategies used by different protocols and exploit this information for quality assessment at the sample-level and provide effective visualization of these results in preparation for higher-level analyses. To this end, we developed scPipe, an R/Bioconductor package that integrates barcode demultiplexing, read alignment, UMI-aware gene-level quantification and quality control of raw sequencing data generated by multiple 3-prime-end sequencing protocols that include CEL-seq, MARS-seq, Chromium 10X and Drop-seq. scPipe produces a count matrix that is essential for downstream analysis along with an HTML report that summarises data quality. These results can be used as input for downstream analyses including normalization, visualization and statistical testing. scPipe performs this processing in a few simple R commands, promoting reproducible analysis of single-cell data that is compatible with the emerging suite of scRNA-seq analysis tools available in R/Bioconductor. The scPipe R package is available for download from https://www.bioconductor.org/packages/scPipe.",TRUE,noun
R104,Bioinformatics,R168492,PoreWalker: A Novel Tool for the Identification and Characterization of Channels in Transmembrane Proteins from Their Three-Dimensional Structure,S668226,R168495,deposits,R166917,software,"Transmembrane channel proteins play pivotal roles in maintaining the homeostasis and responsiveness of cells and the cross-membrane electrochemical gradient by mediating the transport of ions and molecules through biological membranes. Therefore, computational methods which, given a set of 3D coordinates, can automatically identify and describe channels in transmembrane proteins are key tools to provide insights into how they function. Herein we present PoreWalker, a fully automated method, which detects and fully characterises channels in transmembrane proteins from their 3D structures. A stepwise procedure is followed in which the pore centre and pore axis are first identified and optimised using geometric criteria, and then the biggest and longest cavity through the channel is detected. Finally, pore features, including diameter profiles, pore-lining residues, size, shape and regularity of the pore are calculated, providing a quantitative and visual characterization of the channel. To illustrate the use of this tool, the method was applied to several structures of transmembrane channel proteins and was able to identify shape/size/residue features representative of specific channel families. The software is available as a web-based resource at http://www.ebi.ac.uk/thornton-srv/software/PoreWalker/.",TRUE,noun
R104,Bioinformatics,R168508,ACME: Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell Membranes,S668283,R168511,deposits,R166926,software,"The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).",TRUE,noun
R104,Bioinformatics,R168663,ESPRIT-Forest: Parallel clustering of massive amplicon sequence data in subquadratic time,S668889,R168665,deposits,R167024,software,"The rapid development of sequencing technology has led to an explosive accumulation of genomic sequence data. Clustering is often the first step to perform in sequence analysis, and hierarchical clustering is one of the most commonly used approaches for this purpose. However, it is currently computationally expensive to perform hierarchical clustering of extremely large sequence datasets due to its quadratic time and space complexities. In this paper we developed a new algorithm called ESPRIT-Forest for parallel hierarchical clustering of sequences. The algorithm achieves subquadratic time and space complexity and maintains a high clustering accuracy comparable to the standard method. The basic idea is to organize sequences into a pseudo-metric based partitioning tree for sub-linear time searching of nearest neighbors, and then use a new multiple-pair merging criterion to construct clusters in parallel using multiple threads. The new algorithm was tested on the human microbiome project (HMP) dataset, currently one of the largest published microbial 16S rRNA sequence dataset. Our experiment demonstrated that with the power of parallel computing it is now computationally feasible to perform hierarchical clustering analysis of tens of millions of sequences. The software is available at http://www.acsu.buffalo.edu/~yijunsun/lab/ESPRIT-Forest.html.",TRUE,noun
R104,Bioinformatics,R168673,sourceR: Classification and source attribution of infectious agents among heterogeneous populations,S668926,R168675,deposits,R167032,sourceR,"Zoonotic diseases are a major cause of morbidity, and productivity losses in both human and animal populations. Identifying the source of food-borne zoonoses (e.g. an animal reservoir or food product) is crucial for the identification and prioritisation of food safety interventions. For many zoonotic diseases it is difficult to attribute human cases to sources of infection because there is little epidemiological information on the cases. However, microbial strain typing allows zoonotic pathogens to be categorised, and the relative frequencies of the strain types among the sources and in human cases allows inference on the likely source of each infection. We introduce sourceR, an R package for quantitative source attribution, aimed at food-borne diseases. It implements a Bayesian model using strain-typed surveillance data from both human cases and source samples, capable of identifying important sources of infection. The model measures the force of infection from each source, allowing for varying survivability, pathogenicity and virulence of pathogen strains, and varying abilities of the sources to act as vehicles of infection. A Bayesian non-parametric (Dirichlet process) approach is used to cluster pathogen strain types by epidemiological behaviour, avoiding model overfitting and allowing detection of strain types associated with potentially high “virulence”. sourceR is demonstrated using Campylobacter jejuni isolate data collected in New Zealand between 2005 and 2008. Chicken from a particular poultry supplier was identified as the major source of campylobacteriosis, which is qualitatively similar to results of previous studies using the same dataset. Additionally, the software identifies a cluster of 9 multilocus sequence types with abnormally high ‘virulence’ in humans. sourceR enables straightforward attribution of cases of zoonotic infection to putative sources of infection. As sourceR develops, we intend it to become an important and flexible resource for food-borne disease attribution studies.",TRUE,noun
R104,Bioinformatics,R169576,Different Populations of Blacklegged Tick Nymphs Exhibit Differences in Questing Behavior That Have Implications for Human Lyme Disease Risk,S673072,R169577,uses,R167621,Stan,"Animal behavior can have profound effects on pathogen transmission and disease incidence. We studied the questing (= host-seeking) behavior of blacklegged tick (Ixodes scapularis) nymphs, which are the primary vectors of Lyme disease in the eastern United States. Lyme disease is common in northern but not in southern regions, and prior ecological studies have found that standard methods used to collect host-seeking nymphs in northern regions are unsuccessful in the south. This led us to hypothesize that there are behavior differences between northern and southern nymphs that alter how readily they are collected, and how likely they are to transmit the etiological agent of Lyme disease to humans. To examine this question, we compared the questing behavior of I. scapularis nymphs originating from one northern (Lyme disease endemic) and two southern (non-endemic) US regions at field sites in Wisconsin, Rhode Island, Tennessee, and Florida. Laboratory-raised uninfected nymphs were monitored in circular 0.2 m² arenas containing wooden dowels (mimicking stems of understory vegetation) for 10 (2011) and 19 (2012) weeks. The probability of observing nymphs questing on these stems (2011), and on stems, on top of leaf litter, and on arena walls (2012) was much greater for northern than for southern origin ticks in both years and at all field sites (19.5 times greater in 2011; 3.6–11.6 times greater in 2012). Our findings suggest that southern origin I. scapularis nymphs rarely emerge from the leaf litter, and consequently are unlikely to contact passing humans. We propose that this difference in questing behavior accounts for observed geographic differences in the efficacy of the standard sampling techniques used to collect questing nymphs. These findings also support our hypothesis that very low Lyme disease incidence in southern states is, in part, a consequence of the type of host-seeking behavior exhibited by southern populations of the key Lyme disease vector.",TRUE,noun
R104,Bioinformatics,R170097,Efficacy and tolerability of short-term duloxetine treatment in adults with generalized anxiety disorder: A meta-analysis,S675546,R170099,uses,R167945,Stata,"Objective To investigate the efficacy and tolerability of duloxetine during short-term treatment in adults with generalized anxiety disorder (GAD). Methods We conducted a comprehensive literature review of the PubMed, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and ClinicalTrials databases for randomized controlled trials (RCTs) comparing duloxetine or duloxetine plus other antipsychotics with placebo for the treatment of GAD in adults. Outcome measures were (1) efficacy, assessed by the Hospital Anxiety and Depression Scale (HADS) anxiety subscale score, the Hamilton Rating Scale for Anxiety (HAM-A) psychic and somatic anxiety factor scores, and response and remission rates based on total scores of HAM-A; (2) tolerability, assessed by discontinuation rate due to adverse events, the incidence of treatment emergent adverse events (TEAEs) and serious adverse events (SAEs). Review Manager 5.3 and Stata Version 12.0 software were used for all statistical analyses. Results The meta-analysis included 8 RCTs. Mean changes in the HADS anxiety subscale score [mean difference (MD) = 2.32, 95% confidence interval (CI) 1.77–2.88, P<0.00001] and HAM-A psychic anxiety factor score were significantly greater in patients with GAD that received duloxetine compared to those that received placebo (MD = 2.15, 95%CI 1.61–2.68, P<0.00001). However, there was no difference in mean change in the HAM-A somatic anxiety factor score (MD = 1.13, 95%CI 0.67–1.58, P<0.00001). Discontinuation rate due to AEs in the duloxetine group was significantly higher than the placebo group [odds ratio (OR) = 2.62, 95%CI 1.35–5.06, P = 0.004]. The incidence of any TEAE was significantly increased in patients that received duloxetine (OR = 1.76, 95%CI 1.36–2.28, P<0.0001), but there was no significant difference in the incidence of SAEs (OR = 1.13, 95%CI 0.52–2.47, P = 0.75). Conclusion Duloxetine resulted in a greater improvement in symptoms of psychic anxiety and similar changes in symptoms of somatic anxiety compared to placebo during short-term treatment in adults with GAD and its tolerability was acceptable.",TRUE,noun
R104,Bioinformatics,R170139,Comparisons between different elements of reported burden and common mental disorder in caregivers of ethnically diverse people with dementia in Trinidad,S675785,R170141,uses,R167153,Stata,"Objective Culture plays a significant role in determining family responsibilities and possibly influences the caregiver burden associated with providing care for a relative with dementia. This study was carried out to determine the elements of caregiver burden in Trinidadians regarding which interventions will provide the most benefit. Methods Seventy-five caregivers of patients diagnosed with dementia participated in this investigation. Demographic data were recorded for each caregiver and patient. Caregiver burden was assessed using the Zarit Burden Interview (ZBI), and the General Health Questionnaire (GHQ) was used as a measure of psychiatric morbidity. Statistical analyses were performed using Stata and SPSS software. Associations between individual ZBI items and GHQ-28 scores in caregivers were analyzed in logistic regression models; the above-median GHQ-28 scores were used as a binary dependent variable, and individual ZBI item scores were entered as 5-point ordinal independent variables. Results The caregiver sample was composed of 61 females and 14 males. Caregiver burden was significantly associated with the participant being male; there was heterogeneity by ethnic group, and a higher burden on female caregivers was detected at borderline levels of significance. Upon examining the associations between different ZBI items and the above-median GHQ-28 scores in caregivers, the strongest associations were found with domains reflecting the caregiver’s health having suffered, the caregiver not having sufficient time for him/herself, the caregiver’s social life suffering, and the caregiver admitting to feeling stressed due to caregiving and meeting other responsibilities. Conclusions In this sample, with a majority of female caregivers, the factors of the person with dementia being male and belonging to a minority ethnic group were associated with a greater degree of caregiver burden. The information obtained through the association of individual ZBI items and above-median GHQ-28 scores is a helpful guide for profiling Trinidadian caregiver burden.",TRUE,noun
R104,Bioinformatics,R170951,"The Use of a Chronic Disease and Risk Factor Surveillance System to Determine the Age, Period and Cohort Effects on the Prevalence of Obesity and Diabetes in South Australian Adults - 2003–2013",S680677,R170952,uses,R168246,Stata,"Background Age, period and cohort (APC) analyses, using representative, population-based descriptive data, provide additional understanding behind increased prevalence rates. Methods Data on obesity and diabetes from the South Australian (SA) monthly chronic disease and risk factor surveillance system from July 2002 to December 2013 (n = 59,025) were used. Age was the self-reported age of the respondent at the time of the interview. Period was the year of the interview and cohort was age subtracted from the survey year. Cohort years were 1905 to 1995. All variables were treated as continuous. The age-sex standardised prevalence for obesity and diabetes was calculated using the Australia 2011 census. The APC models were constructed with “apcfit” in Stata. Results The age-sex standardised prevalence of obesity and diabetes increased in 2002-2013 from 18.6% to 24.1% and from 6.2% to 7.9%. The peak age for obesity was approximately 70 years with a steady increasing rate from 20 to 70 years of age. The peak age for diabetes was approximately 80 years. There were strong cohort effects and no period effects for both obesity and diabetes. The magnitude of the cohort effect is much more pronounced for obesity than for diabetes. Conclusion The APC analyses showed a higher than expected peak age for both obesity and diabetes, strong cohort effects with an acceleration of risk after 1960s for obesity and after 1940s for diabetes, and no period effects. By simultaneously considering the effects of age, period and cohort we have provided additional evidence for effective public health interventions.",TRUE,noun
R104,Bioinformatics,R169913,Predictive value of traction force measurement in vacuum extraction: Development of a multivariate prognostic model,S674623,R169914,uses,R167830,Statistica,"Objective To enable early prediction of strong traction force vacuum extraction. Design Observational cohort. Setting Karolinska University Hospital delivery ward, tertiary unit. Population and sample size Term mid and low metal cup vacuum extraction deliveries June 2012—February 2015, n = 277. Methods Traction forces during vacuum extraction were collected prospectively using an intelligent handle. Levels of traction force were analysed pairwise by subjective category strong versus non-strong extraction, in order to define an objective predictive value for strong extraction. Statistical analysis A logistic regression model based on the shrinkage and selection method lasso was used to identify the predictive capacity of the different traction force variables. Predictors Total (time force integral, Newton minutes) and peak traction (Newton) force in the first to third pull; difference in traction force between the second and first pull, as well as the third and first pull respectively. Accumulated traction force at the second and third pull. Outcome Subjectively categorized extraction as strong versus non-strong. Results The prevalence of strong extraction was 26%. Prediction including the first and second pull: AUC 0,85 (CI 0,80–0,90); specificity 0,76; sensitivity 0,87; PPV 0,56; NPV 0,94. Prediction including the first to third pull: AUC 0,86 (CI 0,80–0,91); specificity 0,87; sensitivity 0,70; PPV 0,65; NPV 0,89. Conclusion Traction force measurement during vacuum extraction can help exclude strong category extraction from the second pull. From the third pull, two-thirds of strong extractions can be predicted.",TRUE,noun
R104,Bioinformatics,R169215,Ecological Genetics of Chinese Rhesus Macaque in Response to Mountain Building: All Things Are Not Equal,S671383,R169237,uses,R167407,Structure,"Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51–3.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78–1.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.",TRUE,noun
R104,Bioinformatics,R168755,Telescope: Characterization of the retrotranscriptome by accurate estimation of transposable element expression,S669271,R168756,creates,R167081,Telescope,"Characterization of Human Endogenous Retrovirus (HERV) expression within the transcriptomic landscape using RNA-seq is complicated by uncertainty in fragment assignment because of sequence similarity. We present Telescope, a computational software tool that provides accurate estimation of transposable element expression (retrotranscriptome) resolved to specific genomic locations. Telescope directly addresses uncertainty in fragment assignment by reassigning ambiguously mapped fragments to the most probable source transcript as determined within a Bayesian statistical model. We demonstrate the utility of our approach through single locus analysis of HERV expression in 13 ENCODE cell types. When examined at this resolution, we find that the magnitude and breadth of the retrotranscriptome can be vastly different among cell types. Furthermore, our approach is robust to differences in sequencing technology and demonstrates that the retrotranscriptome has potential to be used for cell type identification. We compared our tool with other approaches for quantifying transposable element (TE) expression, and found that Telescope has the greatest resolution, as it estimates expression at specific TE insertions rather than at the TE subfamily level. Telescope performs highly accurate quantification of the retrotranscriptomic landscape in RNA-seq experiments, revealing a differential complexity in the transposable element biology of complex systems not previously observed. Telescope is available at https://github.com/mlbendall/telescope.",TRUE,noun
R104,Bioinformatics,R168755,Telescope: Characterization of the retrotranscriptome by accurate estimation of transposable element expression,S669277,R168759,deposits,R167083,Telescope,"Characterization of Human Endogenous Retrovirus (HERV) expression within the transcriptomic landscape using RNA-seq is complicated by uncertainty in fragment assignment because of sequence similarity. We present Telescope, a computational software tool that provides accurate estimation of transposable element expression (retrotranscriptome) resolved to specific genomic locations. Telescope directly addresses uncertainty in fragment assignment by reassigning ambiguously mapped fragments to the most probable source transcript as determined within a Bayesian statistical model. We demonstrate the utility of our approach through single locus analysis of HERV expression in 13 ENCODE cell types. When examined at this resolution, we find that the magnitude and breadth of the retrotranscriptome can be vastly different among cell types. Furthermore, our approach is robust to differences in sequencing technology and demonstrates that the retrotranscriptome has potential to be used for cell type identification. We compared our tool with other approaches for quantifying transposable element (TE) expression, and found that Telescope has the greatest resolution, as it estimates expression at specific TE insertions rather than at the TE subfamily level. Telescope performs highly accurate quantification of the retrotranscriptomic landscape in RNA-seq experiments, revealing a differential complexity in the transposable element biology of complex systems not previously observed. Telescope is available at https://github.com/mlbendall/telescope.",TRUE,noun
R104,Bioinformatics,R168755,Telescope: Characterization of the retrotranscriptome by accurate estimation of transposable element expression,S669275,R168758,uses,R167081,Telescope,"Characterization of Human Endogenous Retrovirus (HERV) expression within the transcriptomic landscape using RNA-seq is complicated by uncertainty in fragment assignment because of sequence similarity. We present Telescope, a computational software tool that provides accurate estimation of transposable element expression (retrotranscriptome) resolved to specific genomic locations. Telescope directly addresses uncertainty in fragment assignment by reassigning ambiguously mapped fragments to the most probable source transcript as determined within a Bayesian statistical model. We demonstrate the utility of our approach through single locus analysis of HERV expression in 13 ENCODE cell types. When examined at this resolution, we find that the magnitude and breadth of the retrotranscriptome can be vastly different among cell types. Furthermore, our approach is robust to differences in sequencing technology and demonstrates that the retrotranscriptome has potential to be used for cell type identification. We compared our tool with other approaches for quantifying transposable element (TE) expression, and found that Telescope has the greatest resolution, as it estimates expression at specific TE insertions rather than at the TE subfamily level. Telescope performs highly accurate quantification of the retrotranscriptomic landscape in RNA-seq experiments, revealing a differential complexity in the transposable element biology of complex systems not previously observed. Telescope is available at https://github.com/mlbendall/telescope.",TRUE,noun
R104,Bioinformatics,R168616,QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks,S668712,R168620,uses,R166997,Tomcat,"Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and MySQL database and the source code is available under the GPLv3 license available on GitHub: https://github.com/UcarLab/QuIN/.",TRUE,noun
R104,Bioinformatics,R150549,Classifying semantic relations in bioscience texts,S603632,R150551,Concept types,R148123,Treatment,"A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities ""treatment"" and ""disease"" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.",TRUE,noun
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345240,R75376,Has evaluation,R75377,trimming,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun
R104,Bioinformatics,R169557,Quantifying Regional Differences in the Length of Twitter Messages,S672967,R169558,uses,R167609,Twitter,"The increasing usage of social media for conversations, together with the availability of its data to researchers, provides an opportunity to study human conversations on a large scale. Twitter, which allows its users to post messages of up to a limit of 140 characters, is one such social media. Previous studies of utterances in books, movies and Twitter have shown that most of these utterances, when transcribed, are much shorter than 140 characters. Furthermore, the median length of Twitter messages was found to vary across US states. Here, we investigate whether the length of Twitter messages varies across different regions in the UK. We find that the median message length, depending on grouping, can differ by up to 2 characters.",TRUE,noun
R104,Bioinformatics,R168472,SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection,S668173,R168476,uses,R166906,Unix,"Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebra fish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. 
SNPdetector runs on Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).",TRUE,noun
R104,Bioinformatics,R138859,Multi task sequence learning for depression scale prediction from video,S551754,R138861,Data,R138862,Voice,"Depression is a typical mood disorder, which affects people in mental and even physical problems. People who suffer depression always behave abnormally in visual behavior and the voice. In this paper, an audio visual based multimodal depression scale prediction system is proposed. Firstly, features are extracted from video and audio and are fused at the feature level to represent the audio visual behavior. Secondly, a long short-term memory recurrent neural network (LSTM-RNN) is utilized to encode the dynamic temporal information of the abnormal audio visual behavior. Thirdly, emotion information is utilized by multi-task learning to boost the performance further. The proposed approach is evaluated on the Audio-Visual Emotion Challenge (AVEC2014) dataset. Experimental results show that dimensional emotion recognition helps depression scale prediction.",TRUE,noun
R104,Bioinformatics,R168599,Wham: Identifying Structural Variants of Biological Consequence,S668629,R168600,creates,R166984,Wham,"Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools–Lumpy, Delly and SoftSearch–and demonstrate Wham’s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.",TRUE,noun
R104,Bioinformatics,R168599,Wham: Identifying Structural Variants of Biological Consequence,S668631,R168601,deposits,R166985,Wham,"Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools–Lumpy, Delly and SoftSearch–and demonstrate Wham’s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.",TRUE,noun
R104,Bioinformatics,R168599,Wham: Identifying Structural Variants of Biological Consequence,S668635,R168603,uses,R166987,Wham,"Existing methods for identifying structural variants (SVs) from short read datasets are inaccurate. This complicates disease-gene identification and efforts to understand the consequences of genetic variation. In response, we have created Wham (Whole-genome Alignment Metrics) to provide a single, integrated framework for both structural variant calling and association testing, thereby bypassing many of the difficulties that currently frustrate attempts to employ SVs in association testing. Here we describe Wham, benchmark it against three other widely used SV identification tools–Lumpy, Delly and SoftSearch–and demonstrate Wham’s ability to identify and associate SVs with phenotypes using data from humans, domestic pigeons, and vaccinia virus. Wham and all associated software are covered under the MIT License and can be freely downloaded from github (https://github.com/zeeev/wham), with documentation on a wiki (http://zeeev.github.io/wham/). For community support please post questions to https://www.biostars.org/.",TRUE,noun
R104,Bioinformatics,R168508,ACME: Automated Cell Morphology Extractor for Comprehensive Reconstruction of Cell Membranes,S668285,R168512,uses,R166927,Windows,"The quantification of cell shape, cell migration, and cell rearrangements is important for addressing classical questions in developmental biology such as patterning and tissue morphogenesis. Time-lapse microscopic imaging of transgenic embryos expressing fluorescent reporters is the method of choice for tracking morphogenetic changes and establishing cell lineages and fate maps in vivo. However, the manual steps involved in curating thousands of putative cell segmentations have been a major bottleneck in the application of these technologies especially for cell membranes. Segmentation of cell membranes while more difficult than nuclear segmentation is necessary for quantifying the relations between changes in cell morphology and morphogenesis. We present a novel and fully automated method to first reconstruct membrane signals and then segment out cells from 3D membrane images even in dense tissues. The approach has three stages: 1) detection of local membrane planes, 2) voting to fill structural gaps, and 3) region segmentation. We demonstrate the superior performance of the algorithms quantitatively on time-lapse confocal and two-photon images of zebrafish neuroectoderm and paraxial mesoderm by comparing its results with those derived from human inspection. We also compared with synthetic microscopic images generated by simulating the process of imaging with fluorescent reporters under varying conditions of noise. Both the over-segmentation and under-segmentation percentages of our method are around 5%. The volume overlap of individual cells, compared to expert manual segmentation, is consistently over 84%. 
By using our software (ACME) to study somite formation, we were able to segment touching cells with high accuracy and reliably quantify changes in morphogenetic parameters such as cell shape and size, and the arrangement of epithelial and mesenchymal cells. Our software has been developed and tested on Windows, Mac, and Linux platforms and is available publicly under an open source BSD license (https://github.com/krm15/ACME).",TRUE,noun
R104,Bioinformatics,R168556,Pep2Path: Automated Mass Spectrometry-Guided Genome Mining of Peptidic Natural Products,S668470,R168560,uses,R166958,Windows,"Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. 
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.",TRUE,noun
R104,Bioinformatics,R168697,PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems,S669037,R168702,uses,R166927,Windows,"Abstract Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10 5 -10 6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. 
PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. Author Summary This paper introduces PhysiCell: an open source, agent-based modeling framework for 3-D multicellular simulations. It includes a standard library of sub-models for cell fluid and solid volume changes, cycle progression, apoptosis, necrosis, mechanics, and motility. PhysiCell is directly coupled to a biotransport solver to simulate many diffusing substrates and cell-secreted signals. Each cell can dynamically update its phenotype based on its microenvironmental conditions. Users can customize or replace the included sub-models. PhysiCell runs on a variety of platforms (Linux, OSX, and Windows) with few software dependencies. Its computational cost scales linearly in the number of cells. It is feasible to simulate 500,000 cells on quad-core desktop workstations, and millions of cells on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on hanging drop tumor spheroids (HDS) and ductal carcinoma in situ (DCIS) of the breast. We demonstrate contact- and chemokine-based interactions among multiple cell types with examples in synthetic multicellular bioengineering, cancer heterogeneity, and cancer immunology. We developed PhysiCell to help the scientific community tackle multicellular systems biology problems involving many interacting cells in multi-substrate microenvironments. PhysiCell is also an independent, cross-platform codebase for replicating results from other simulators.",TRUE,noun
R104,Bioinformatics,R171318,"Women’s autonomy and men's involvement in child care and feeding as predictors of infant and young child anthropometric indices in coffee farming households of Jimma Zone, South West of Ethiopia",S683084,R171320,uses,R167547,windows,"Background Most child mortality and undernutrition in the developing world were attributed to suboptimal childcare and feeding, which needs detailed investigation beyond the proximal factors. This study was conducted with the aim of assessing associations of women’s autonomy and men’s involvement with child anthropometric indices in cash crop livelihood areas of South West Ethiopia. Methods Multi-stage stratified sampling was used to select 749 farming households living in three coffee producing sub-districts of Jimma zone, Ethiopia. Domains of women’s autonomy were measured by a tool adapted from the demographic health survey. A model for determination of paternal involvement in childcare was employed. Caring practices were assessed through the WHO Infant and young child feeding practice core indicators. Length and weight measurements were taken in duplicate using standard techniques. Data were analyzed using SPSS for windows version 21. A multivariable linear regression was used to predict weight for height Z-scores and length for age Z-scores after adjusting for various factors. Results The mean (sd) scores of weight for age (WAZ), height for age (HAZ), weight for height (WHZ) and BMI for age (BAZ) were -0.52(1.26), -0.73(1.43), -0.13(1.34) and -0.1(1.39) respectively. The results of multivariable linear regression analyses showed that WHZ scores of children of mothers who had autonomy of conducting big purchases were higher by 0.42 compared to children whose mothers had not. In addition, a child whose father was involved in childcare and feeding had a higher HAZ score by 0.1. 
Regarding age, as for every month increase in age of child, a 0.04 point decrease in HAZ score and a 0.01 point decrease in WHZ were noted. Similarly, a child living in food insecure households had lower HAZ score by 0.29 compared to child of food secured households. As family size increased by a person a WHZ score of a child is decreased by 0.08. WHZ and HAZ scores of male child was found lower by 0.25 and 0.38 respectively compared to a female child of same age. Conclusion Women’s autonomy and men’s involvement appeared in tandem with better child anthropometric outcomes. Nutrition interventions in such setting should integrate enhancing women’s autonomy over resource and men’s involvement in childcare and feeding, in addition to food security measures.",TRUE,noun
R104,Bioinformatics,R148576,Exploiting syntax when detecting protein names in text,S595672,R148578,model,R148587,Yapex,"This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.",TRUE,noun
R104,Bioinformatics,R171724,Future directions in meditation research: Recommendations for expanding the field of contemplative science,S685553,R171726,uses,R168450,Excel,"The science of meditation has grown tremendously in the last two decades. Most studies have focused on evaluating the clinical effectiveness of mindfulness-based interventions, neural and other physiological correlates of meditation, and individual cognitive and emotional aspects of meditation. Far less research has been conducted on more challenging domains to measure, such as group and relational, transpersonal and mystical, and difficult aspects of meditation; anomalous or extraordinary phenomena related to meditation; and post-conventional stages of development associated with meditation. However, these components of meditation may be crucial to people’s psychological and spiritual development, could represent important mediators and/or mechanisms by which meditation confers benefits, and could themselves be important outcomes of meditation practices. In addition, since large numbers of novices are being introduced to meditation, it is helpful to investigate experiences they may encounter that are not well understood. Over the last four years, a task force of meditation researchers and teachers met regularly to develop recommendations for expanding the current meditation research field to include these important yet often neglected topics. These meetings led to a cross-sectional online survey to investigate the prevalence of a wide range of experiences in 1120 meditators. Results show that the majority of respondents report having had many of these anomalous and extraordinary experiences. While some of the topics are potentially controversial, they can be subjected to rigorous scientific investigation. These arenas represent largely uncharted scientific terrain and provide excellent opportunities for both new and experienced researchers. 
We provide suggestions for future directions, with accompanying online materials to encourage such research.",TRUE,noun
R104,Bioinformatics,R169215,Ecological Genetics of Chinese Rhesus Macaque in Response to Mountain Building: All Things Are Not Equal,S671387,R169239,uses,R167409,migrate,"Background Pliocene uplifting of the Qinghai-Tibetan Plateau (QTP) and Quaternary glaciation may have impacted the Asian biota more than any other events. Little is documented with respect to how the geological and climatological events influenced speciation as well as spatial and genetic structuring, especially in vertebrate endotherms. Macaca mulatta is the most widely distributed non-human primate. It may be the most suitable model to test hypotheses regarding the genetic consequences of orogenesis on an endotherm. Methodology and Principal Findings Using a large dataset of maternally inherited mitochondrial DNA gene sequences and nuclear microsatellite DNA data, we discovered two maternal super-haplogroups exist, one in western China and the other in eastern China. M. mulatta formed around 2.31 Ma (1.51–3.15, 95%), and divergence of the two major matrilines was estimated at 1.15 Ma (0.78–1.55, 95%). The western super-haplogroup exhibits significant geographic structure. In contrast, the eastern super-haplogroup has far greater haplotypic variability with little structure based on analyses of six variable microsatellite loci using Structure and Geneland. Analysis using Migrate detected greater gene flow from WEST to EAST than vice versa. We did not detect signals of bottlenecking in most populations. Conclusions Analyses of the nuclear and mitochondrial datasets obtained large differences in genetic patterns for M. mulatta. The difference likely reflects inheritance mechanisms of the maternally inherited mtDNA genome versus nuclear biparentally inherited STRs and male-mediated gene flow. Dramatic environmental changes may be responsible for shaping the matrilineal history of macaques. The timing of events, the formation of M. 
mulatta, and the divergence of the super-haplogroups, corresponds to both the uplifting of the QTP and Quaternary climatic oscillations. Orogenesis likely drove divergence of western populations in China, and Pleistocene glaciations are likely responsible for genetic structuring in the eastern super-haplogroup via geographic isolation and secondary contact.",TRUE,noun
R16,Biophysics,R70278,Adaptive behaviour and learning in slime moulds: the role of oscillations,S333683,R70283,Has information processing paradigm,R70284,Oscillations,"The slime mould Physarum polycephalum, an aneural organism, uses information from previous experiences to adjust its behaviour, but the mechanisms by which this is accomplished remain unknown. This article examines the possible role of oscillations in learning and memory in slime moulds. Slime moulds share surprising similarities with the network of synaptic connections in animal brains. First, their topology derives from a network of interconnected, vein-like tubes in which signalling molecules are transported. Second, network motility, which generates slime mould behaviour, is driven by distinct oscillations that organize into spatio-temporal wave patterns. Likewise, neural activity in the brain is organized in a variety of oscillations characterized by different frequencies. Interestingly, the oscillating networks of slime moulds are not precursors of nervous systems but, rather, an alternative architecture. Here, we argue that comparable information-processing operations can be realized on different architectures sharing similar oscillatory properties. After describing learning abilities and oscillatory activities of P. polycephalum, we explore the relation between network oscillations and learning, and evaluate the organism's global architecture with respect to information-processing potential. We hypothesize that, as in the brain, modulation of spontaneous oscillations may sustain learning in slime mould. This article is part of the theme issue ‘Basal cognition: conceptual tools and the view from the single cell’.",TRUE,noun
R122,Chemistry,R46113,Preparation of Polycrystalline TiO2 Photocatalysts Impregnated with Various Transition Metal Ions: Characterization and Photocatalytic Activity for the Degradation of 4-Nitrophenol,S140460,R46114,chemical doping method,L86293,impregnation,"A set of polycrystalline TiO2 photocatalysts loaded with various ions of transition metals (Co, Cr, Cu, Fe, Mo, V, and W) were prepared by using the wet impregnation method. The samples were characterized by using some bulk and surface techniques, namely X-ray diffraction, BET specific surface area determination, scanning electron microscopy, point of zero charge determination, and femtosecond pump−probe diffuse reflectance spectroscopy (PP-DRS). The samples were employed as catalysts for 4-nitrophenol photodegradation in aqueous suspension, used as a probe reaction. The characterization results have confirmed the difficulty to find a straightforward correlation between photoactivity and single specific properties of the powders. Diffuse reflectance measurements showed a slight shift in the band gap transition to longer wavelengths and an extension of the absorption in the visible region for almost all the doped samples. SEM observation and EDX measurements indicated a similar morphology for all the parti...",TRUE,noun
R122,Chemistry,R46091,Synthesis and Characterization of Nitrogen-Doped TiO2 Nanophotocatalyst with High Visible Light Activity,S140279,R46092,chemical doping method,L86156,microemulsion,"Nitrogen-doped TiO2 nanocatalysts with a homogeneous anatase structure were successfully synthesized through a microemulsion−hydrothermal method by using some organic compounds such as triethylamine, urea, thiourea, and hydrazine hydrate. Analysis by Raman and X-ray photoemission spectroscopy indicated that nitrogen was doped effectively and most nitrogen dopants might be present in the chemical environment of Ti−O−N and O−Ti−N. A shift of the absorption edge to a lower energy and a stronger absorption in the visible light region were observed. The results of photodegradation of the organic pollutant rhodamine B in the visible light irradiation (λ > 420 nm) suggested that the TiO2 photocatalysts after nitrogen doping were greatly improved compared with the undoped TiO2 photocatalysts and Degussa P-25; especially the nitrogen-doped TiO2 using triethylamine as the nitrogen source showed the highest photocatalytic activity, which also showed a higher efficiency for photodecomposition of 2,4-dichlorophenol. T...",TRUE,noun
R122,Chemistry,R46105,Improved photocatalytic activity of Sn 4+ doped TiO 2 nanoparticulate films prepared by plasma-enhanced chemical vapor deposition,S140395,R46106,doping elements,L86245,Sn4+,"Sn4+ ion doped TiO2 (TiO2–Sn4+) nanoparticulate films with a doping ratio of about 7∶100 [(Sn)∶(Ti)] were prepared by the plasma-enhanced chemical vapor deposition (PCVD) method. The doping mode (lattice Ti substituted by Sn4+ ions) and the doping energy level of Sn4+ were determined by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), surface photovoltage spectroscopy (SPS) and electric field induced surface photovoltage spectroscopy (EFISPS). It is found that the introduction of a doping energy level of Sn4+ ions is profitable to the separation of photogenerated carriers under both UV and visible light excitation. Characterization of the films with XRD and SPS indicates that after doping by Sn, more surface defects are present on the surface. Consequently, the photocatalytic activity for photodegradation of phenol in the presence of the TiO2–Sn4+ film is higher than that of the pure TiO2 film under both UV and visible light irradiation.",TRUE,noun
R137665,Coating and Surface Technology,R178362,Clay-Based Nanocomposite Coating for Flexible Optoelectronics Applying Commercial Polymers,S699616,R178364,Polymer material,L470897,Polyurethane,"Transparency, flexibility, and especially ultralow oxygen (OTR) and water vapor (WVTR) transmission rates are the key issues to be addressed for packaging of flexible organic photovoltaics and organic light-emitting diodes. Concomitant optimization of all essential features is still a big challenge. Here we present a thin (1.5 μm), highly transparent, and at the same time flexible nanocomposite coating with an exceptionally low OTR and WVTR (1.0 × 10(-2) cm(3) m(-2) day(-1) bar(-1) and <0.05 g m(-2) day(-1) at 50% RH, respectively). A commercially available polyurethane (Desmodur N 3600 and Desmophen 670 BA, Bayer MaterialScience AG) was filled with a delaminated synthetic layered silicate exhibiting huge aspect ratios of about 25,000. Functional films were prepared by simple doctor-blading a suspension of the matrix and the organophilized clay. This preparation procedure is technically benign, is easy to scale up, and may readily be applied for encapsulation of sensitive flexible electronics.",TRUE,noun
R137665,Coating and Surface Technology,R178358,Large Scale Self-Assembly of Smectic Nanocomposite Films by Doctor Blading versus Spray Coating: Impact of Crystal Quality on Barrier Properties,S699593,R178361,Solvent,L470878,water,"Flexible transparent barrier films are required in various fields of application ranging from flexible, transparent food packaging to display encapsulation. Environmentally friendly, waterborne polymer–clay nanocomposites would be preferred but fail to meet in particular requirements for ultra high water vapor barriers. Here we show that self-assembly of nanocomposite films into one-dimensional crystalline (smectic) polymer–clay domains is a so-far overlooked key-factor capable of suppressing water vapor diffusivity despite appreciable swelling at elevated temperatures and relative humidity (R.H.). Moreover, barrier performance was shown to improve with quality of the crystalline order. In this respect, spray coating is superior to doctor blading because it yields significantly better ordered structures. For spray-coated waterborne nanocomposite films (21.4 μm) ultra high barrier specifications are met at 23 °C and 50% R.H. with oxygen transmission rates (OTR) < 0.0005 cm3 m–2 day–1 bar–1 and water vapor ...",TRUE,noun
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502207,R110144,Data,R110145,attention,"Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,noun
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690010,R172944,Method,R172948,interview,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690011,R172944,Method,R172949,interviews,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5369,R4893,Material,R4899,Arabic,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphologically rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,noun
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5370,R4893,Material,R4900,Esperanto,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphologically rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,noun
R277,Computational Engineering,R41026,Predicting Infections Using Computational Intelligence – A Systematic Review,S130191,R41042,Has result,R41046,Sepsis,"Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quickly as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.",TRUE,noun
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S656788,R164480,Entity types,R164491,Action,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,noun
R322,Computational Linguistics,R164455,BioNLP Shared Task 2011 - Bacteria Biotope,S659529,R165270,Entity types,R163604,Bacteria,"This paper presents the Bacteria Biotope task as part of the BioNLP Shared Tasks 2011. The Bacteria Biotope task aims at extracting the location of bacteria from scientific Web pages. Bacteria location is a crucial knowledge in biology for phenotype studies. The paper details the corpus specification, the evaluation metrics, summarizes and discusses the participant results.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595272,R148452,Concept types,R148453,CellLine,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595296,R148452,Other resources,R114053,ChEBI,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593949,R148133,Semantic roles,R148145,Condition,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595275,R148452,Concept types,R148456,Disease,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595276,R148452,Concept types,R148457,DrugCompound,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595295,R148452,Other resources,R148470,EntrezGene,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595277,R148452,Concept types,R148458,ExperimentalMethod,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595278,R148452,Concept types,R148459,Fragment,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595279,R148452,Concept types,R148460,Fusion,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148032,MedTag: A Collection of Biomedical Annotations,S593775,R148034,Concept types,R148042,Gene,"We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.",TRUE,noun
R322,Computational Linguistics,R148039,GENETAG: a tagged corpus for gene/protein named entity recognition,S593670,R148041,Concept types,R148042,Gene,"Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE® sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the ""gold standard"". However, character-based indices would have been more robust than word-based indices. GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version of GENETAG-05 will be released later this year.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595281,R148452,Concept types,R148462,Gene,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S656756,R164481,Entity types,R148462,Gene,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,noun
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S661281,R165896,Relation types,R164495,Interaction,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,noun
R322,Computational Linguistics,R150967,Annotation of Chemical Named Entities,S605290,R150969,Other resources,R46713,LingPipe,"We describe the annotation of chemical named entities in scientific text. A set of annotation guidelines defines 5 types of named entities, and provides instructions for the resolution of special cases. A corpus of fulltext chemistry papers was annotated, with an inter-annotator agreement F score of 93%. An investigation of named entity recognition using LingPipe suggests that F scores of 63% are possible without customisation, and scores of 74% are possible with the addition of custom tokenisation and the use of dictionaries.",TRUE,noun
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593944,R148133,Semantic roles,R148141,Location,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593942,R148133,Semantic roles,R148139,Manner,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun
R322,Computational Linguistics,R148549,Medmentions: a large biomedical corpus annotated with UMLS concepts,S595610,R148551,Dataset name,R148572,MedMentions,"This paper presents the formal release of MedMentions, a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000 abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over 3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines. In addition to the full corpus, a sub-corpus of MedMentions is also presented, comprising annotations for a subset of UMLS 2017 targeted towards document retrieval. To encourage research in Biomedical Named Entity Recognition and Linking, data splits for training and testing are included in the release, and a baseline model and its metrics for entity linking are also described.",TRUE,noun
R322,Computational Linguistics,R148032,MedTag: A Collection of Biomedical Annotations,S593650,R148034,Dataset name,R148035,MedTag,"We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595282,R148452,Concept types,R148463,Modification,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595284,R148452,Concept types,R148465,Mutant,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148039,GENETAG: a tagged corpus for gene/protein named entity recognition,S593671,R148041,Concept types,R147749,Protein,"Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE® sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the ""gold standard"". However, character-based indices would have been more robust than word-based indices. 
GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version, GENETAG-05, will be released later this year.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595285,R148452,Concept types,R148466,Protein,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593953,R148133,Semantic roles,R148149,Purpose,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593950,R148133,Semantic roles,R148146,Rate,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun
R322,Computational Linguistics,R110753,Generating Abstractive Summaries from Meeting Transcripts,S504684,R110755,Human Evaluation Aspects,L364526,Readability,"Summaries of meetings are very important as they convey the essential content of discussions in a concise form. Both participants and non-participants are interested in the summaries of meetings to plan for their future work. Generally, it is time consuming to read and understand the whole documents. Therefore, summaries play an important role as the readers are interested in only the important context of discussions. In this work, we address the task of meeting document summarization. Automatic summarization systems on meeting conversations developed so far have been primarily extractive, resulting in unacceptable summaries that are hard to read. The extracted utterances contain disfluencies that affect the quality of the extractive summaries. To make summaries much more readable, we propose an approach to generating abstractive summaries by fusing important content from several utterances. We first separate meeting transcripts into various topic segments, and then identify the important utterances in each segment using a supervised learning approach. The important utterances are then combined together to generate a one-sentence summary. In the text generation step, the dependency parses of the utterances in each segment are combined together to create a directed graph. The most informative and well-formed sub-graph obtained by integer linear programming (ILP) is selected to generate a one-sentence summary for each topic segment. The ILP formulation reduces disfluencies by leveraging grammatical relations that are more prominent in non-conversational style of text, and therefore generates summaries that are comparable to human-written abstractive summaries. Experimental results show that our method can generate more informative summaries than the baselines. 
In addition, readability assessments by human judges as well as log-likelihood estimates obtained from the dependency parser show that our generated summaries are significantly readable and well-formed.",TRUE,noun
R322,Computational Linguistics,R155259,Leveraging Abstract Meaning Representation for Knowledge Base Question Answering,S621399,R155261,Techniques/Methods,L427844,reasoner,"Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595294,R148452,Other resources,R148469,RefSeq,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R163227,The CLEF corpus: semantic annotation of clinical text,S650969,R163229,Concept types,R163250,Result,"The Clinical E-Science Framework (CLEF) project is building a framework for the capture, integration and presentation of clinical information: for clinical research, evidence-based health care and genotype-meets-phenotype informatics. A significant portion of the information required by such a framework originates as text, even in EHR-savvy organizations. CLEF uses Information Extraction (IE) to make this unstructured information available. An important part of IE is the identification of semantic entities and relationships. Typical approaches require human annotated documents to provide both evaluation standards and material for system development. CLEF has a corpus of clinical narratives, histopathology reports and imaging reports from 20 thousand patients. We describe the selection of a subset of this corpus for manual annotation of clinical entities and relationships. We describe an annotation methodology and report encouraging initial results of inter-annotator agreement. Comparisons are made between different text sub-genres, and between annotators with different skills.",TRUE,noun
R322,Computational Linguistics,R164218,The GENIA corpus: an annotated research abstract corpus in molecular biology domain,S655627,R164220,Concept types,R164221,Source,"With the information overload in the genome-related field, there is an increasing need for natural language processing technology to extract information from the literature, and various attempts at information extraction using NLP have been made. We are developing the necessary resources, including a domain ontology and an annotated corpus of research abstracts from the MEDLINE database (the GENIA corpus). We are building the ontology and the corpus simultaneously, using each other. In this paper we report on our new corpus, its ontological basis, annotation scheme, and statistics of annotated objects. We also describe the tools used for corpus annotation and management.",TRUE,noun
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593945,R148133,Semantic roles,R148142,Source,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595286,R148452,Concept types,R148467,Tissue,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S656790,R164480,Entity types,R164493,Transcription,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both derived from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,noun
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S656801,R164485,data source,R140296,PubMed,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both derived from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,noun
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S656755,R164481,Relation types,R164483,Rename,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both derived from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,noun
R231,Computer and Systems Architecture,R175456,A deep learning framework for character motion synthesis and editing,S696154,R175458,Has evaluation,L468088,Autoencoder,"We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.",TRUE,noun
R231,Computer and Systems Architecture,R175456,A deep learning framework for character motion synthesis and editing,S695223,R175458,Activity,L467360,Punching,"We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.",TRUE,noun
R231,Computer and Systems Architecture,R175447,Motion synthesis and editing in low-dimensional spaces,S695197,R175449,Publisher,L467340,Wiley,"Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has added increased realism in character animation. In order to make the synthesis and analysis of motion data tractable, we present a low‐dimensional motion space in which high‐dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: The user can sketch and edit a curve on low‐dimensional motion space, directly manipulate the character's pose in three‐dimensional object space, or specify key poses to create in‐between motions. Copyright © 2006 John Wiley & Sons, Ltd.",TRUE,noun
R230,Computer Engineering,R74453,OER development and promotion. Outcomes of an international research project on the OpenCourseWare model,S497487,R109096,Document type,L360233,Article,"In this paper, we describe the successful results of an international research project focused on the use of Web technology in the educational context. The article explains how this international project, funded by public organizations and developed over the last two academic years, focuses on the area of open educational resources (OER) and particularly the educational content of the OpenCourseWare (OCW) model. This initiative has been developed by a research group composed of researchers from three countries. The project was enabled by the Universidad Politecnica de Madrid OCW Office's leadership of the Consortium of Latin American Universities and the distance education know-how of the Universidad Tecnica Particular de Loja (UTPL, Ecuador). We give a full account of the project, methodology, main outcomes and validation. The project results have further consolidated the group, and increased the maturity of group members and networking with other groups in the area. The group is now participating in other research projects that continue the lines developed here.",TRUE,noun
R230,Computer Engineering,R74463,Application of data anonymization in Learning Analytics,S497564,R109101,Source,L360300,Scopus,"Thanks to the proliferation of academic services on the Web and the opening of educational content, today, students can access a large number of free learning resources, and interact with value-added services. In this context, Learning Analytics can be carried out on a large scale thanks to the proliferation of open practices that promote the sharing of datasets. However, the opening or sharing of data managed through platforms and educational services, without considering the protection of users' sensitive data, could cause some privacy issues. Data anonymization is a strategy that should be adopted during the lifecycle of data processing to reduce security risks. In this research, we try to characterize how much and how anonymization techniques have been used in learning analytics proposals. From an initial exploration made in the Scopus database, we found that less than 6% of the papers focused on LA have also covered the privacy issue. Finally, through a specific case, we applied data anonymization and learning analytics to demonstrate that both techniques can be integrated, in a reliable and effective way, to support decision making in educational institutions.",TRUE,noun
R132,Computer Sciences,R131755,Knowledge Graph Embedding with Atrous Convolution and Residual Learning,S523669,R131756,has model,R124629,AcrE,"Knowledge graph embedding is an important task and it will benefit lots of downstream applications. Currently, deep neural networks based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, to address the original information forgotten issue and vanishing/exploding gradient issue, it uses the residual learning method. Third, it has simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most of evaluation metrics. The source codes of our model could be found at https://github.com/neukg/AcrE.",TRUE,noun
R132,Computer Sciences,R135186,AxCell: Automatic Extraction of Results from Machine Learning Papers,S534624,R135187,has model,R128071,AxCell,"Tracking progress in machine learning has become increasingly difficult with the recent explosion in the number of papers. In this paper, we present AxCell, an automatic machine learning pipeline for extracting results from papers. AxCell uses several novel components, including a table segmentation subtask, to learn relevant structural knowledge that aids extraction. When compared with existing methods, our approach significantly improves the state of the art for results extraction. We also release a structured, annotated dataset for training models for results extraction, and a dataset for evaluating the performance of models on this task. Lastly, we show the viability of our approach enables it to be used for semi-automated results extraction in production, suggesting our improvements make this task practically viable for the first time. Code is available on GitHub.",TRUE,noun
R132,Computer Sciences,R134502,BilBOWA: Fast Bilingual Distributed Representations without Word Alignments,S532269,R134503,has model,R125980,BilBOWA,"We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.",TRUE,noun
R132,Computer Sciences,R135097,Sequential Random Network for Fine-grained Image Classification,S534289,R135105,has model,R126718,BiLSTM-TDN,"Deep Convolutional Neural Network (DCNN) and Transformer have achieved remarkable successes in image recognition. However, their performance in fine-grained image recognition is still difficult to meet the requirements of actual needs. This paper proposes a Sequence Random Network (SRN) to enhance the performance of DCNN. The output of DCNN is one-dimensional features. This one-dimensional feature abstractly represents image information, but it does not express well the detailed information of image. To address this issue, we use the proposed SRN, which is composed of BiLSTM and several Tanh-Dropout blocks (called BiLSTM-TDN), to further process DCNN one-dimensional features for highlighting the detail information of image. After the feature transform by BiLSTM-TDN, the recognition performance has been greatly improved. We conducted the experiments on six fine-grained image datasets. Except for FGVC-Aircraft, the accuracy of the proposed methods on the other datasets exceeded 99%. Experimental results show that BiLSTM-TDN is far superior to the existing state-of-the-art methods. In addition to DCNN, BiLSTM-TDN can also be extended to other models, such as Transformer.",TRUE,noun
R132,Computer Sciences,R129405,BiTT: Bidirectional Tree Tagging for Joint Extraction of Overlapping Entities and Relations,S514741,R129406,has model,R116610,BiTT,"Joint extraction refers to extracting triples, composed of entities and relations, simultaneously from the text with a single model. However, most existing methods fail to extract all triples accurately and efficiently from sentences with overlapping issue, i.e., the same entity is included in multiple triples. In this paper, we propose a novel scheme called Bidirectional Tree Tagging (BiTT) to label overlapping triples in text. In BiTT, the triples with the same relation category in a sentence are especially represented as two binary trees, each of which is converted into a word-level tag sequence to label each word. Based on BiTT scheme, we develop an end-to-end extraction framework to predict the BiTT tags and further extract triples efficiently. We adopt the Bi-LSTM and the BERT as the encoder in our framework respectively, and obtain promising results in public English as well as Chinese datasets.",TRUE,noun
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156063,R51015,Material,R51023,community,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun
R132,Computer Sciences,R129585,"Entity, Relation, and Event Extraction with Contextualized Span Representations",S515328,R129591,has model,R116715,DyGIE++,"We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.",TRUE,noun
R132,Computer Sciences,R130276,FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension,S517666,R130277,has model,R119581,FusionNet,"This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of ""history of word"" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the ""history of word"" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.",TRUE,noun
R132,Computer Sciences,R130777,HyperNetworks,S520180,R130778,has model,R120902,Hypernetworks,"This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are trained jointly with the main network in an end-to-end fashion. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks.",TRUE,noun
R132,Computer Sciences,R131122,Perceiver: General Perception with Iterative Attention,S521760,R131123,has model,R122333,Perceiver,"Biological systems understand the world by simultaneously processing high-dimensional inputs from modalities as diverse as vision, audition, touch, proprioception, etc. The perception models used in deep learning on the other hand are designed for individual modalities, often relying on domain-specific assumptions such as the local grid structures exploited by virtually all existing vision models. These priors introduce helpful inductive biases, but also lock models to individual modalities. In this paper we introduce the Perceiver – a model that builds upon Transformers and hence makes few architectural assumptions about the relationship between its inputs, but that also scales to hundreds of thousands of inputs, like ConvNets. The model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. We show that this architecture performs competitively or beyond strong, specialized models on classification tasks across various modalities: images, point clouds, audio, video and video+audio. The Perceiver obtains performance comparable to ResNet-50 on ImageNet without convolutions and by directly attending to 50,000 pixels. It also surpasses state-of-the-art results for all modalities in AudioSet.",TRUE,noun
R132,Computer Sciences,R36089,Crowdsourced semantic annotation of scientific publications and tabular data in PDF,S123475,R36090,Name,L74315,SemAnn,"Significant amounts of knowledge in science and technology have so far not been published as Linked Open Data but are contained in the text and tables of legacy PDF publications. Making such information available as RDF would, for example, provide direct access to claims and facilitate surveys of related work. A lot of valuable tabular information that till now only existed in PDF documents would also finally become machine understandable. Instead of studying scientific literature or engineering patents for months, it would be possible to collect such input by simple SPARQL queries. The SemAnn approach enables collaborative annotation of text and tables in PDF documents, a format that is still the common denominator of publishing, thus maximising the potential user base. The resulting annotations in RDF format are available for querying through a SPARQL endpoint. To incentivise users with an immediate benefit for making the effort of annotation, SemAnn recommends related papers, taking into account the hierarchical context of annotations in a novel way. We evaluated the usability of SemAnn and the usefulness of its recommendations by analysing annotations resulting from tasks assigned to test users and by interviewing them. While the evaluation shows that even few annotations lead to a good recall, we also observed unexpected, serendipitous recommendations, which confirms the merit of our low-threshold annotation support for the crowd.",TRUE,noun
R132,Computer Sciences,R129488,Span-based Joint Entity and Relation Extraction with Transformer Pre-training,S514987,R129489,has model,R116620,SpERT,"We introduce SpERT, an attention model for span-based joint entity and relation extraction. Our key contribution is a light-weight reasoning on BERT embeddings, which features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation. The model is trained using strong within-sentence negative samples, which are efficiently extracted in a single BERT pass. These aspects facilitate a search over all spans in the sentence. In ablation studies, we demonstrate the benefits of pre-training, strong negative sampling and localized context. Our model outperforms prior work by up to 2.6% F1 score on several datasets for joint entity and relation extraction.",TRUE,noun
R132,Computer Sciences,R36093,TableSeer: automatic table metadata extraction and searching in digital libraries,S123524,R36094,Name,L74350,TableSeer,"Tables are ubiquitous in digital libraries. In scientific documents, tables are widely used to present experimental results or statistical data in a condensed fashion. However, current search engines do not support table search. The difficulty of automatic extracting tables from un-tagged documents, the lack of a universal table metadata specification, and the limitation of the existing ranking schemes make table search problem challenging. In this paper, we describe TableSeer, a search engine for tables. TableSeer crawls digital libraries, detects tables from documents, extracts tables metadata, indexes and ranks tables, and provides a user-friendly search interface. We propose an extensive set of medium-independent metadata for tables that scientists and other users can adopt for representing table information. In addition, we devise a novel page box-cutting method to improve the performance of the table detection. Given a query, TableSeer ranks the matched tables using an innovative ranking algorithm - TableRank. TableRank rates each ⟨query, table⟩ pair with a tailored vector space model and a specific term weighting scheme. Overall, TableSeer eliminates the burden of manually extract table data from digital libraries and enables users to automatically examine tables. We demonstrate the value of TableSeer with empirical studies on scientific documents.",TRUE,noun
R132,Computer Sciences,R129468,Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders,S514926,R129469,has model,R116615,Table-Sequence,"Named entity recognition and relation extraction are two important fundamental problems. Joint learning algorithms have been proposed to solve both tasks simultaneously, and many of them cast the joint task as a table-filling problem. However, they typically focused on learning a single encoder (usually learning representation in the form of a table) to capture information required for both tasks within the same space. We argue that it can be beneficial to design two distinct encoders to capture such two different types of information in the learning process. In this work, we propose the novel {\em table-sequence encoders} where two different encoders -- a table encoder and a sequence encoder are designed to help each other in the representation learning process. Our experiments confirm the advantages of having {\em two} encoders over {\em one} encoder. On several standard datasets, our model shows significant improvements over existing approaches.",TRUE,noun
R132,Computer Sciences,R131002,R-Transformer: Recurrent Neural Network Enhanced Transformer,S521274,R131012,has model,R116197,Transformer,"Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, it severely suffers from two issues: impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts. In this paper, we propose the R-Transformer which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoids their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. We have made the code publicly available at \url{this https URL}.",TRUE,noun
R132,Computer Sciences,R131142,Unsupervised Learning of Semantic Audio Representations,S521833,R131143,has model,R122337,Triplet,"Even in the absence of any explicit semantic annotation, vast collections of audio recordings provide valuable information for learning the categorical structure of sounds. We consider several class-agnostic semantic constraints that apply to unlabeled nonspeech audio: (i) noise and translations in time do not change the underlying sound category, (ii) a mixture of two sound events inherits the categories of the constituents, and (iii) the categories of events in close temporal proximity are likely to be the same or related. Without labels to ground them, these constraints are incompatible with classification loss functions. However, they may still be leveraged to identify geometric inequalities needed for triplet loss-based training of convolutional neural networks. The result is low-dimensional embeddings of the input spectrograms that recover 41% and 84% of the performance of their fully-supervised counterparts when applied to downstream query-by-example sound retrieval and sound event classification tasks, respectively. Moreover, in limited-supervision settings, our unsupervised embeddings double the state-of-the-art classification performance.",TRUE,noun
R132,Computer Sciences,R129948,Piano Skills Assessment,S516539,R129949,has model,R118533,Video,"Can a computer determine a piano player’s skill level? Is it preferable to base this assessment on visual analysis of the player’s performance or should we trust our ears over our eyes? Since current convolutional neural networks (CNNs) have difficulty processing long videos, how can shorter clips be sampled to best reflect the player’s skill level? In this work, we collect and release a first-of-its-kind dataset for multimodal skill assessment focusing on assessing piano player’s skill level, answer the asked questions, initiate work in automated evaluation of piano playing skills and provide baselines for future work. Dataset can be accessed from: https://github.com/ParitoshParmar/Piano-Skills-Assessment.",TRUE,noun
R132,Computer Sciences,R34958,K-isomorphism: privacy preserving network publication against structural attacks,S120532,R34587,Anonymistion algorithm/method,R34542,k-isomorphism,"Serious concerns on privacy protection in social networks have been raised in recent years; however, research in this area is still in its infancy. The problem is challenging due to the diversity and complexity of graph data, on which an adversary can use many types of background knowledge to conduct an attack. One popular type of attacks as studied by pioneer work [2] is the use of embedding subgraphs. We follow this line of work and identify two realistic targets of attacks, namely, NodeInfo and LinkInfo. Our investigations show that k-isomorphism, or anonymization by forming k pairwise isomorphic subgraphs, is both sufficient and necessary for the protection. The problem is shown to be NP-hard. We devise a number of techniques to enhance the anonymization efficiency while retaining the data utility. A compound vertex ID mechanism is also introduced for privacy preservation over multiple data releases. The satisfactory performance on a number of real datasets, including HEP-Th, EUemail and LiveJournal, illustrates that the high symmetry of social networks is very helpful in mitigating the difficulty of the problem.",TRUE,noun
R112118,Computer Vision and Pattern Recognition,R178292,A Haar-Cascade classifier based Smart Parking System,S699354,R178294,Software platform,R167015,OpenCV,"In this paper, we present the implementation of a Haar–Cascade based classifier for car detection. This constitutes the detection sub-module of the framework for a Smart Parking System (SParkSys). In high density cities, finding available parking can be time consuming and results in traffic congestions as drivers cruise to find parking. The computer vision based smart parking solution presented in this paper has the advantage of being the least intrusive of most car sensing technologies. It is scalable for use with large parking facilities. Functional code from OpenCV was used in conjunction with custom Python code to implement the algorithm. Our tests show mixed results, with excellent true positive detections along with some with false negatives. Remarkable is that the classification algorithm learnt features that are common across a wide range of objects of interest.",TRUE,noun
R112118,Computer Vision and Pattern Recognition,R178292,A Haar-Cascade classifier based Smart Parking System,S699355,R178294,Software platform,R167996,Python,"In this paper, we present the implementation of a Haar–Cascade based classifier for car detection. This constitutes the detection sub-module of the framework for a Smart Parking System (SParkSys). In high density cities, finding available parking can be time consuming and results in traffic congestions as drivers cruise to find parking. The computer vision based smart parking solution presented in this paper has the advantage of being the least intrusive of most car sensing technologies. It is scalable for use with large parking facilities. Functional code from OpenCV was used in conjunction with custom Python code to implement the algorithm. Our tests show mixed results, with excellent true positive detections along with some with false negatives. Remarkable is that the classification algorithm learnt features that are common across a wide range of objects of interest.",TRUE,noun
R417,Cultural History,R139736,Public History and Contested Heritage: Archival Memories of the Bombing of Italy,S557905,R139743,Country of study,R139748,Italy,"This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.",TRUE,noun
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558070,R139822,has stakeholder,R139832,communities,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558958,R139995,has smart city instance,R140006,Cyberjaya,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,noun
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558046,R139813,Material,R139816,end-users,"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558961,R139995,has smart city instance,R140009,Masdar,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,noun
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558292,R139855,Has method,R139856,review,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558962,R139995,has smart city instance,R140010,Skolkovo,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,noun
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558954,R139995,has smart city instance,R140002,Songdo,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,noun
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558045,R139813,Material,R139815,tool,"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558074,R139813,has stakeholder,R139835,Visitor,"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558062,R139822,has communication channel,R139824,websites,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538726,R136069,keywords,R136072,Education,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S538799,R136100,keywords,R136106,goals,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S539279,R136100,Personalisation features,R136253,Goals,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538723,R136069,keywords,R135501,ontology,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S538797,R136100,keywords,R135501,ontology,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538756,R136069,Personalisation features,R136086,recommendation,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S538794,R136100,keywords,R136102,scale,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538728,R136069,keywords,R136074,Skill,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538758,R136069,Has evaluation,R136087,task-based,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6109,R6051,Performance metric,R6035,Accuracy,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6111,R6051,Evidence,R6040,Affiliation,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun
R135,Databases/Information Systems,R6116,A Real-time Heuristic-based Unsupervised Method for Name Disambiguation in Digital Libraries,S6371,R6117,Evidence,R6114,Authors,"This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users’ interactions in order to include users’ feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers’ names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.",TRUE,noun
R135,Databases/Information Systems,R6135,Citation-based bootstrapping for large-scale author disambiguation,S6486,R6136,Evidence,R6114,Authors,"We present a new, two-stage, self-supervised algorithm for author disambiguation in large bibliographic databases. In the first “bootstrap” stage, a collection of high-precision features is used to bootstrap a training set with positive and negative examples of coreferring authors. A supervised feature-based classifier is then trained on the bootstrap clusters and used to cluster the authors in a larger unlabeled dataset. Our self-supervised approach shares the advantages of unsupervised approaches (no need for expensive hand labels) as well as supervised approaches (a rich set of features that can be discriminatively trained). The algorithm disambiguates 54,000,000 author instances in Thomson Reuters' Web of Knowledge with B3 F1 of .807. We analyze parameters and features, particularly those from citation networks, which have not been deeply investigated in author disambiguation. The most important citation feature is self-citation, which can be approximated without expensive extraction of the full network. For the supervised stage, the minor improvement due to other citation features (increasing F1 from .748 to .767) suggests they may not be worth the trouble of extracting from databases that don't already have them. A lean feature set without expensive abstract and title features performs 130 times faster with about equal F1. © 2012 Wiley Periodicals, Inc.",TRUE,noun
R135,Databases/Information Systems,R6139,Self-training author name disambiguation for information scarce scenarios,S6516,R6140,Evidence,R6114,Authors,"We present a novel 3‐step self‐training method for author name disambiguation—SAND (self‐training associative name disambiguator)—which requires no manual labeling, no parameterization (in real‐world scenarios) and is particularly suitable for the common situation in which only the most basic information about a citation record is available (i.e., author names, and work and venue titles). During the first step, real‐world heuristics on coauthors are able to produce highly pure (although fragmented) clusters. The most representative of these clusters are then selected to serve as training data for the third supervised author assignment step. The third step exploits a state‐of‐the‐art transductive disambiguation method capable of detecting unseen authors not included in any training example and incorporating reliable predictions to the training data. Experiments conducted with standard public collections, using the minimum set of attributes present in a citation, demonstrate that our proposed method outperforms all representative unsupervised author grouping disambiguation methods and is very competitive with fully supervised author assignment methods. Thus, different from other bootstrapping methods that explore privileged, hard to obtain information such as self‐citations and personal information, our proposed method produces topnotch performance with no (manual) training data or parameterization and in the presence of scarce information.",TRUE,noun
R135,Databases/Information Systems,R6178,Network based Framework for Author Name Disambiguation Applications,S6682,R6179,Evidence,R6114,Authors,"With the rapid development of digital libraries, name disambiguation becomes more and more important technique to distinguish authors with same names from physical persons. Many algorithms have been developed to accomplish the task. However, they are usually based on some restricted preconditions and rarely concern how to be incorporated into a practical application. In this paper, name disambiguation is regarded as the technique of learning module integrated with a knowledge base. A network is defined for the modeling of publication information, which facilitates the representation of different types of relations among the attributes. The knowledge base component serves as the user interface for domain knowledge input. Furthermore, this paper exploits a random walk with restart algorithm and affinity propagation clustering algorithm to finally output name disambiguation results.",TRUE,noun
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6116,R6051,Evidence,R6049,Country,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S352127,R77125,Has implementation,R77127,MonetDB,"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,noun
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S535886,R135479,keywords,R135501,ontology,"Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula.",TRUE,noun
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6107,R6051,Performance metric,R6009,Precision,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun
R135,Databases/Information Systems,R6116,A Real-time Heuristic-based Unsupervised Method for Name Disambiguation in Digital Libraries,S6368,R6117,Performance metric,R6009,Precision,"This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users’ interactions in order to include users’ feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers’ names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.",TRUE,noun
R135,Databases/Information Systems,R6156,On Graph-Based Name Disambiguation,S6608,R6157,Performance metric,R6009,Precision,"Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation . In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall .",TRUE,noun
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S536089,R135479,Reuse of existing vocabularies,R135545,thesauri," Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ",TRUE,noun
R135,Databases/Information Systems,R6100,A fast method based on multiple clustering for name disambiguation in bibliographic citations,S6302,R6101,Evidence,R6013,Venues,"Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse‐to‐fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state‐of‐the‐art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.",TRUE,noun
R135,Databases/Information Systems,R6087,A Unified Probabilistic Framework for Name Disambiguation in Digital Library,S6245,R6088,Evidence,R6086,Year,"Despite years of research, the name ambiguity problem remains largely unresolved. Outstanding issues include how to capture all information for name disambiguation in a unified approach, and how to determine the number of people K in the disambiguation process. In this paper, we formalize the problem in a unified probabilistic framework, which incorporates both attributes and relationships. Specifically, we define a disambiguation objective function for the problem and propose a two-step parameter estimation algorithm. We also investigate a dynamic approach for estimating the number of people K. Experiments show that our proposed framework significantly outperforms four baseline methods of using clustering algorithms and two other previous methods. Experiments also indicate that the number K automatically found by our method is close to the actual number.",TRUE,noun
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6108,R6051,Performance metric,R6010,Recall,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun
R135,Databases/Information Systems,R6116,A Real-time Heuristic-based Unsupervised Method for Name Disambiguation in Digital Libraries,S6369,R6117,Performance metric,R6010,Recall,"This paper addresses the problem of name disambiguation in the context of digital libraries that administer bibliographic citations. The problem occurs when multiple authors share a common name or when multiple name variations for an author appear in citation records. Name disambiguation is not a trivial task, and most digital libraries do not provide an efficient way to accurately identify the citation records for an author. Furthermore, lack of complete meta-data information in digital libraries hinders the development of a generic algorithm that can be applicable to any dataset. We propose a heuristic-based, unsupervised and adaptive method that also examines users’ interactions in order to include users’ feedback in the disambiguation process. Moreover, the method exploits important features associated with author and citation records, such as co-authors, affiliation, publication title, venue, etc., creating a multilayered hierarchical clustering algorithm which transforms itself according to the available information, and forms clusters of unambiguous records. Our experiments on a set of researchers’ names considered to be highly ambiguous produced high precision and recall results, and decisively affirmed the viability of our algorithm.",TRUE,noun
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574481,R143488,Outcomes,R143482,Albedo,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574480,R143488,Outcomes,R143481,Evapotranspiration,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun
R142,Earth Sciences,R144024,Raman spectroscopy of the borosilicate mineral ferroaxinite,S576450,R144026,Minerals in consideration,R144019,Ferroaxinite,"Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the ν4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the ν2 bending modes of the (SiO4)2-.",TRUE,noun
R142,Earth Sciences,R144034,Raman spectroscopy of the joaquinite minerals,S576500,R144036,Minerals in consideration,R144028,Joaquinite,"Selected joaquinite minerals have been studied by Raman spectroscopy. The minerals are categorised into two groups depending upon whether bands occur in the 3250 to 3450 cm−1 region and in the 3450 to 3600 cm−1 region, or in the latter region only. The first set of bands is attributed to water stretching vibrations and the second set to OH stretching bands. In the literature, X-ray diffraction could not identify the presence of OH units in the structure of joaquinite. Raman spectroscopy proves that the joaquinite mineral group contains OH units in their structure, and in some cases both water and OH units. A series of bands at 1123, 1062, 1031, 971, 912 and 892 cm−1 are assigned to SiO stretching vibrations. Bands above 1000 cm−1 are attributable to the νas modes of the (SiO4)4− and (Si2O7)6− units. Bands that are observed at 738, around 700, 682 and around 668, 621 and 602 cm−1 are attributed to OSiO bending modes. The patterns do not appear to match the published infrared spectral patterns of either (SiO4)4− or (Si2O7)6− units. The reason is attributed to the actual formulation of the joaquinite mineral, in which significant amounts of Ti or Nb and Fe are found. Copyright © 2007 John Wiley & Sons, Ltd.",TRUE,noun
R142,Earth Sciences,R147402,"Pegmatite spectral behavior considering ASTER and Landsat 8 OLI data in Naipa and Muiane mines (Alto Ligonha, Mozambique)",S591119,R147404,Supplementary information,R147399,lepidolite,"The Naipa and Muiane mines are located on the Nampula complex, a stratigraphic tectonic subdivision of the Mozambique Belt, in the Alto Ligonha region. The pegmatites are of the Li-Cs-Ta type, intrude a chlorite phyllite and gneisses with amphibole and biotite. The mines are still active. The main objective of this work was to analyze the pegmatite’s spectral behavior considering ASTER and Landsat 8 OLI data. An ASTER image from 27/05/2005, and an image Landsat OLI image from 02/02/2018 were considered. The data were radiometric calibrated and after atmospheric corrected considered the Dark Object Subtraction algorithm available in the Semi-Automatic Classification Plugin accessible in QGIS software. In the field, samples were collected from lepidolite waste pile in Naipa and Muaine mines. A spectroadiometer was used in order to analyze the spectral behavior of several pegmatite’s samples collected in the field in Alto Ligonha (Naipa and Muiane mines). In addition, QGIS software was also used for the spectral mapping of the hypothetical hydrothermal alterations associated with occurrences of basic metals, beryl gemstones, tourmalines, columbite-tantalites, and lithium minerals. A supervised classification algorithm was employed - Spectral Angle Mapper for the data processing, and the overall accuracy achieved was 80%. The integration of ASTER and Landsat 8 OLI data have proved very useful for pegmatite’s mapping. From the results obtained, we can conclude that: (i) the combination of ASTER and Landsat 8 OLI data allows us to obtain more information about mineral composition than just one sensor, i.e., these two sensors are complementary; (ii) the alteration spots identified in the mines area are composed of clay minerals. In the future, more data and others image processing algorithms can be applied in order to identify the different Lithium minerals, as spodumene, petalite, amblygonite and lepidolite.",TRUE,noun
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574467,R143488,Outcomes,R143468,Precipitation,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun
R142,Earth Sciences,R140694,Comparison of airborne hyperspectral data and eo-1 hyperion for mineral mapping,S561939,R140696,Data used,L394427,Hyperion,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-μm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun
R142,Earth Sciences,R160566,"The Performance of Maximum Likelihood, Spectral Angle Mapper, Neural Network and Decision Tree Classifiers in Hyperspectral Image Analysis",S640336,R160568,Features classified,L438345,Tree,"Several classification algorithms for pattern recognition had been tested in the mapping of tropical forest cover using airborne hyperspectral data. Results from the use of Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best followed by ANN, DT and SAM with accuracies of 86%, 84%, 51% and 49% respectively.",TRUE,noun
R142,Earth Sciences,R160571,Performance of Spectral Angle Mapper and Parallelepiped Classifiers in Agriculture Hyperspectral Image,S640358,R160573,Features classified,L438360,Wheat,"Hyperspectral Imaging (HSI) is used to provide a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agriculture areas that specialize in wheat growing and get a classified image. In particular, Parallelepiped and Spectral Angel Mapper (SAM) algorithms are used. They are implemented by a software tool used to analyse and process geospatial images that is an Environment of Visualizing Images (ENVI). They are applied on Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms on the image of the study area for SAM classification was 66.67%, and 33.33% for Parallelepiped classification. Therefore, SAM algorithm has provided a better a study area image classification.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145502,Barcoding of biting midges in the genus Culicoides: a tool for species determination,S624338,R155704,Studied taxonomic group (Biology),R155708,Ceratopogonidae,"Biting midges of the genus Culicoides (Diptera: Ceratopogonidae) are insect vectors of economically important veterinary diseases such as African horse sickness virus and bluetongue virus. However, the identification of Culicoides based on morphological features is difficult. The sequencing of mitochondrial cytochrome oxidase subunit I (COI), referred to as DNA barcoding, has been proposed as a tool for rapid identification to species. Hence, a study was undertaken to establish DNA barcodes for all morphologically determined Culicoides species in Swedish collections. In total, 237 specimens of Culicoides representing 37 morphologically distinct species were used. The barcoding generated 37 supported clusters, 31 of which were in agreement with the morphological determination. However, two pairs of closely related species could not be separated using the DNA barcode approach. Moreover, Culicoides obsoletus Meigen and Culicoides newsteadi Austen showed relatively deep intraspecific divergence (more than 10 times the average), which led to the creation of two cryptic species within each of C. obsoletus and C. newsteadi. The use of COI barcodes as a tool for the species identification of biting midges can differentiate 95% of species studied. Identification of some closely related species should employ a less conserved region, such as a ribosomal internal transcribed spacer.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146639,DNA barcodes for species delimitation in Chironomidae (Diptera): a case study on the genus Labrundinia,S624166,R155674,Studied taxonomic group (Biology),R155678,Chironomidae,"Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from São Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers. Résumé Notre étude évalue l’utilité des codes à barres d’ADN pour délimiter 79 spécimens de 13 espèces de moucherons de la sous-famille des Tanypodinae (Diptera: Chironomidae) provenant de l’état de São Paulo, Brésil. Notre étude confirme l’utilisation des codes à barres d’ADN comme un excellent outil pour l’identification des espèces et la solution de problèmes taxonomiques dans genre Labrundinia. Une analyse moléculaire des séquences des gènes COI fournit des arbres d’identification des taxons, délimitant 13 groupes cohérents d’espèces, dont trois groupes similaires ont été reliés subséquemment à une variation morphologique des stades larvaires et nymphal. De plus, un autre groupe décrit antérieurement à partir de caractères morphologiques a été relié à des marqueurs moléculaires. Il existe un écart net entre les codes à barres et, chez certaines espèces, d’importantes divergences entre les espèces considérées deux par deux (jusqu’à 19,3%), ce qui a permis l’identification de toutes les espèces examinées. Nos résultats montrent aussi que les codes à barres peuvent servir à associer les différents stades de vie des chironomides, car il est facile d’amplifier et de séquencer le gène COI provenant des différents stades avec les amorces universelles des codes à barres.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S624270,R155693,Studied taxonomic group (Biology),R155686,Culicidae,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145554,Identifying the Main Mosquito Species in China Based on DNA Barcoding,S624202,R155680,Studied taxonomic group (Biology),R155686,Culicidae,"Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142517,"A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding‐based biomonitoring",S624747,R155788,Order (Taxonomy - biology),R149572,Diptera,"This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species‐level assignment, so called “dark taxa.” Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the “taxonomic impediment”; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species‐rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142535,DNA Barcodes for the Northern European Tachinid Flies (Diptera: Tachinidae),S624696,R155773,Order (Taxonomy - biology),R149572,Diptera,"This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145497,"Half of the European fruit fly species barcoded (Diptera, Tephritidae); a feasibility test for molecular identification",S624369,R155710,Order (Taxonomy - biology),R149572,Diptera,"Abstract A feasibility test of molecular identification of European fruit flies (Diptera: Tephritidae) based on COI barcode sequences has been executed. A dataset containing 555 sequences of 135 ingroup species from three subfamilies and 42 genera and one single outgroup species has been analysed. 73.3% of all included species could be identified based on their COI barcode gene, based on similarity and distances. The low success rate is caused by singletons as well as some problematic groups: several species groups within the genus Terellia and especially the genus Urophora. With slightly more than 100 sequences – almost 20% of the total – this genus alone constitutes the larger part of the failure for molecular identification for this dataset. Deleting the singletons and Urophora results in a success-rate of 87.1% of all queries and 93.23% of the not discarded queries as correctly identified. Urophora is of special interest due to its economic importance as beneficial species for weed control, therefore it is desirable to have alternative markers for molecular identification. We demonstrate that the success of DNA barcoding for identification purposes strongly depends on the contents of the database used to BLAST against. Especially the necessity of including multiple specimens per species of geographically distinct populations and different ecologies for the understanding of the intra- versus interspecific variation is demonstrated. 
Furthermore thresholds and the distinction between true and false positives and negatives should not only be used to increase the reliability of the success of molecular identification but also to point out problematic groups, which should then be flagged in the reference database suggesting alternative methods for identification.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145502,Barcoding of biting midges in the genus Culicoides: a tool for species determination,S624336,R155704,Order (Taxonomy - biology),R149572,Diptera,"Biting midges of the genus Culicoides (Diptera: Ceratopogonidae) are insect vectors of economically important veterinary diseases such as African horse sickness virus and bluetongue virus. However, the identification of Culicoides based on morphological features is difficult. The sequencing of mitochondrial cytochrome oxidase subunit I (COI), referred to as DNA barcoding, has been proposed as a tool for rapid identification to species. Hence, a study was undertaken to establish DNA barcodes for all morphologically determined Culicoides species in Swedish collections. In total, 237 specimens of Culicoides representing 37 morphologically distinct species were used. The barcoding generated 37 supported clusters, 31 of which were in agreement with the morphological determination. However, two pairs of closely related species could not be separated using the DNA barcode approach. Moreover, Culicoides obsoletus Meigen and Culicoides newsteadi Austen showed relatively deep intraspecific divergence (more than 10 times the average), which led to the creation of two cryptic species within each of C. obsoletus and C. newsteadi. The use of COI barcodes as a tool for the species identification of biting midges can differentiate 95% of species studied. Identification of some closely related species should employ a less conserved region, such as a ribosomal internal transcribed spacer.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S624268,R155693,Order (Taxonomy - biology),R149572,Diptera,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145554,Identifying the Main Mosquito Species in China Based on DNA Barcoding,S624200,R155680,Order (Taxonomy - biology),R149572,Diptera,"Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146639,DNA barcodes for species delimitation in Chironomidae (Diptera): a case study on the genus Labrundinia,S624164,R155674,Order (Taxonomy - biology),R149572,Diptera,"Abstract In this study, we analysed the applicability of DNA barcodes for delimitation of 79 specimens of 13 species of nonbiting midges in the subfamily Tanypodinae (Diptera: Chironomidae) from São Paulo State, Brazil. Our results support DNA barcoding as an excellent tool for species identification and for solving taxonomic conflicts in genus Labrundinia. Molecular analysis of cytochrome c oxidase subunit I (COI) gene sequences yielded taxon identification trees, supporting 13 cohesive species clusters, of which three similar groups were subsequently linked to morphological variation at the larval and pupal stage. Additionally, another cluster previously described by means of morphology was linked to molecular markers. We found a distinct barcode gap, and in some species substantial interspecific pairwise divergences (up to 19.3%) were observed, which permitted identification of all analysed species. The results also indicated that barcodes can be used to associate life stages of chironomids since COI was easily amplified and sequenced from different life stages with universal barcode primers. Résumé Notre étude évalue l'utilité des codes à barres d'ADN pour délimiter 79 spécimens de 13 espèces de moucherons de la sous-famille des Tanypodinae (Diptera: Chironomidae) provenant de l’état de São Paulo, Brésil. Notre étude confirme l'utilisation des codes à barres d'ADN comme un excellent outil pour l'identification des espèces et la solution de problèmes taxonomiques dans genre Labrundinia. 
Une analyse moléculaire des séquences des gènes COI fournit des arbres d'identification des taxons, délimitant 13 groupes cohérents d'espèces, dont trois groupes similaires ont été reliés subséquemment à une variation morphologique des stades larvaires et nymphal. De plus, un autre groupe décrit antérieurement à partir de caractères morphologiques a été relié à des marqueurs moléculaires. Il existe un écart net entre les codes à barres et, chez certaines espèces, d'importantes divergences entre les espèces considérées deux par deux (jusqu’à 19,3%), ce qui a permis l'identification de toutes les espèces examinées. Nos résultats montrent aussi que les codes à barres peuvent servir à associer les différents stades de vie des chironomides, car il est facile d'amplifier et de séquencer le gène COI provenant des différents stades avec les amorces universelles des codes à barres.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146643,Revision of Nearctic Dasysyrphus Enderlein (Diptera: Syrphidae),S624040,R155663,Order (Taxonomy - biology),R149572,Diptera,"Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to their lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I—COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146646,Comprehensive evaluation of DNA barcoding for the molecular species identification of forensically important Australian Sarcophagidae (Diptera),S623944,R155647,Order (Taxonomy - biology),R149572,Diptera,"Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81–11.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146938,Evaluation of DNA barcoding and identification of new haplomorphs in Canadian deerflies and horseflies,S599533,R149512,Order (Taxonomy - biology),R149572,Diptera,"This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two‐parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within species. Both the neighbour‐joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (∼10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142517,"A DNA barcode library for 5,200 German flies and midges (Insecta: Diptera) and its implications for metabarcoding‐based biomonitoring",S624749,R155788,Studied taxonomic group (Biology),R149572,Diptera,"This study summarizes results of a DNA barcoding campaign on German Diptera, involving analysis of 45,040 specimens. The resultant DNA barcode library includes records for 2,453 named species comprising a total of 5,200 barcode index numbers (BINs), including 2,700 COI haplotype clusters without species‐level assignment, so called “dark taxa.” Overall, 88 out of 117 families (75%) recorded from Germany were covered, representing more than 50% of the 9,544 known species of German Diptera. Until now, most of these families, especially the most diverse, have been taxonomically inaccessible. By contrast, within a few years this study provided an intermediate taxonomic system for half of the German Dipteran fauna, which will provide a useful foundation for subsequent detailed, integrative taxonomic studies. Using DNA extracts derived from bulk collections made by Malaise traps, we further demonstrate that species delineation using BINs and operational taxonomic units (OTUs) constitutes an effective method for biodiversity studies using DNA metabarcoding. As the reference libraries continue to grow, and gaps in the species catalogue are filled, BIN lists assembled by metabarcoding will provide greater taxonomic resolution. 
The present study has three main goals: (a) to provide a DNA barcode library for 5,200 BINs of Diptera; (b) to demonstrate, based on the example of bulk extractions from a Malaise trap experiment, that DNA barcode clusters, labelled with globally unique identifiers (such as OTUs and/or BINs), provide a pragmatic, accurate solution to the “taxonomic impediment”; and (c) to demonstrate that interim names based on BINs and OTUs obtained through metabarcoding provide an effective method for studies on species‐rich groups that are usually neglected in biodiversity research projects because of their unresolved taxonomy.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556436,R139510,Studied taxonomic group (Biology),R139512,Erebidae,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S629166,R156958,Studied taxonomic group (Biology),R156962,Eumaeini,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. 
From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies an highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157039,DNA barcode library for European Gelechiidae (Lepidoptera) suggests greatly underestimated species diversity,S629580,R157043,Studied taxonomic group (Biology),R157048,Gelechiidae,"For the first time, a nearly complete barcode library for European Gelechiidae is provided. DNA barcode sequences (COI gene - cytochrome c oxidase 1) from 751 out of 865 nominal species, belonging to 105 genera, were successfully recovered. A total of 741 species represented by specimens with sequences ≥ 500bp and an additional ten species represented by specimens with shorter sequences were used to produce 53 NJ trees. Intraspecific barcode divergence averaged only 0.54% whereas distance to the Nearest-Neighbour species averaged 5.58%. Of these, 710 species possessed unique DNA barcodes, but 31 species could not be reliably discriminated because of barcode sharing or partial barcode overlap. Species discrimination based on the Barcode Index System (BIN) was successful for 668 out of 723 species which clustered from minimum one to maximum 22 unique BINs. Fifty-five species shared a BIN with up to four species and identification from DNA barcode data is uncertain. Finally, 65 clusters with a unique BIN remained unidentified to species level. These putative taxa, as well as 114 nominal species with more than one BIN, suggest the presence of considerable cryptic diversity, cases which should be examined in future revisionary studies.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140187,"DNA Barcoding the Geometrid Fauna of Bavaria (Lepidoptera): Successes, Surprises, and Questions",S628764,R156814,Studied taxonomic group (Biology),R156818,Geometridae,"Background The State of Bavaria is involved in a research program that will lead to the construction of a DNA barcode library for all animal species within its territorial boundaries. The present study provides a comprehensive DNA barcode library for the Geometridae, one of the most diverse of insect families. Methodology/Principal Findings This study reports DNA barcodes for 400 Bavarian geometrid species, 98 per cent of the known fauna, and approximately one per cent of all Bavarian animal species. Although 98.5% of these species possess diagnostic barcode sequences in Bavaria, records from neighbouring countries suggest that species-level resolution may be compromised in up to 3.5% of cases. All taxa which apparently share barcodes are discussed in detail. One case of modest divergence (1.4%) revealed a species overlooked by the current taxonomic system: Eupithecia goossensiata Mabille, 1869 stat.n. is raised from synonymy with Eupithecia absinthiata (Clerck, 1759) to species rank. Deep intraspecific sequence divergences (>2%) were detected in 20 traditionally recognized species. Conclusions/Significance The study emphasizes the effectiveness of DNA barcoding as a tool for monitoring biodiversity. Open access is provided to a data set that includes records for 1,395 geometrid specimens (331 species) from Bavaria, with 69 additional species from neighbouring regions. Taxa with deep intraspecific sequence divergences are undergoing more detailed analysis to ascertain if they represent cases of cryptic diversity.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556413,R139510,Class (Taxonomy - biology),R108970,Insecta,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629319,R156994,Order (Taxonomy - biology),R156752,Lepidoptera,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556434,R139510,Order (Taxonomy - biology),R108971,Lepidoptera,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which is constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140197,DNA barcodes distinguish species of tropical Lepidoptera,S628648,R156766,Order (Taxonomy - biology),R156752,Lepidoptera,"Although central to much biological research, the identification of species is often difficult. The use of DNA barcodes, short DNA sequences from a standardized region of the genome, has recently been proposed as a tool to facilitate species identification and discovery. However, the effectiveness of DNA barcoding for identifying specimens in species-rich tropical biotas is unknown. Here we show that cytochrome c oxidase I DNA barcodes effectively discriminate among species in three Lepidoptera families from Area de Conservación Guanacaste in northwestern Costa Rica. We found that 97.9% of the 521 species recognized by prior taxonomic work possess distinctive cytochrome c oxidase I barcodes and that the few instances of interspecific sequence overlap involve very similar species. We also found two or more barcode clusters within each of 13 supposedly single species. Covariation between these clusters and morphological and/or ecological traits indicates overlooked species complexes. If these results are general, DNA barcoding will significantly aid species identification and discovery in tropical settings.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140252,Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera,S628603,R156759,Order (Taxonomy - biology),R156752,Lepidoptera,"The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service “Monophylizer” to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is ∼23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric—conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157051,"A Transcontinental Challenge — A Test of DNA Barcode Performance for 1,541 Species of Canadian Noctuoidea (Lepidoptera)",S629638,R157052,Order (Taxonomy - biology),R156752,Lepidoptera,"This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or “owlet” moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140252,Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera,S628605,R156759,Studied taxonomic group (Biology),R156752,Lepidoptera,"The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service “Monophylizer” to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is ∼23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric—conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142471,DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits,S624783,R155793,Studied taxonomic group (Biology),R155797,Muscidae,"Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate “species” diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629321,R156994,Studied taxonomic group (Biology),R156998,Noctuoidea,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157051,"A Transcontinental Challenge — A Test of DNA Barcode Performance for 1,541 Species of Canadian Noctuoidea (Lepidoptera)",S629623,R157052,Studied taxonomic group (Biology),R156998,Noctuoidea,"This study provides a first, comprehensive, diagnostic use of DNA barcodes for the Canadian fauna of noctuoids or “owlet” moths (Lepidoptera: Noctuoidea) based on vouchered records for 1,541 species (99.1% species coverage), and more than 30,000 sequences. When viewed from a Canada-wide perspective, DNA barcodes unambiguously discriminate 90% of the noctuoid species recognized through prior taxonomic study, and resolution reaches 95.6% when considered at a provincial scale. Barcode sharing is concentrated in certain lineages with 54% of the cases involving 1.8% of the genera. Deep intraspecific divergence exists in 7.7% of the species, but further studies are required to clarify whether these cases reflect an overlooked species complex or phylogeographic variation in a single species. Non-native species possess higher Nearest-Neighbour (NN) distances than native taxa, whereas generalist feeders have lower NN distances than those with more specialized feeding habits. We found high concordance between taxonomic names and sequence clusters delineated by the Barcode Index Number (BIN) system with 1,082 species (70%) assigned to a unique BIN. The cases of discordance involve both BIN mergers and BIN splits with 38 species falling into both categories, most likely reflecting bidirectional introgression. One fifth of the species are involved in a BIN merger reflecting the presence of 158 species sharing their barcode sequence with at least one other taxon, and 189 species with low, but diagnostic COI divergence. A very few cases (13) involved species whose members fell into both categories. Most of the remaining 140 species show a split into two or three BINs per species, while Virbia ferruginosa was divided into 16. The overall results confirm that DNA barcodes are effective for the identification of Canadian noctuoids. This study also affirms that BINs are a strong proxy for species, providing a pathway for a rapid, accurate estimation of animal diversity.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R108960,Use of species delimitation approaches to tackle the cryptic diversity of an assemblage of high Andean butterflies (Lepidoptera: Papilionoidea),S629504,R157029,Studied taxonomic group (Biology),R156865,Papilionoidea,"Cryptic biological diversity has generated ambiguity in taxonomic and evolutionary studies. Single-locus methods and other approaches for species delimitation are useful for addressing this challenge, enabling the practical processing of large numbers of samples for identification and inventory purposes. This study analyzed one assemblage of high Andean butterflies using DNA barcoding and compared the identifications based on the current morphological taxonomy with three methods of species delimitation (automatic barcode gap discovery, generalized mixed Yule coalescent model, and Poisson tree processes). Sixteen potential cryptic species were recognized using these three methods, representing a net richness increase of 11.3% in the assemblage. A well-studied taxon of the genus Vanessa, which has a wide geographical distribution, appeared with the potential cryptic species that had a higher genetic differentiation at the local level than at the continental level. The analyses were useful for identifying the potential cryptic species in Pedaliodes and Forsterinaria complexes, which also show differentiation along altitudinal and latitudinal gradients. This genetic assessment of an entire assemblage of high Andean butterflies (Papilionoidea), provides baseline information for future research in a region characterized by high rates of endemism and population isolation.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157056,A DNA Barcode Library for North American Pyraustinae (Lepidoptera: Pyraloidea: Crambidae),S629665,R157057,Studied taxonomic group (Biology),R157059,Pyraustinae,"Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146646,Comprehensive evaluation of DNA barcoding for the molecular species identification of forensically important Australian Sarcophagidae (Diptera),S623952,R155647,Studied taxonomic group (Biology),R155658,Sarcophagidae,"Abstract. Carrion-breeding Sarcophagidae (Diptera) can be used to estimate the post-mortem interval in forensic cases. Difficulties with accurate morphological identifications at any life stage and a lack of documented thermobiological profiles have limited their current usefulness. The molecular-based approach of DNA barcoding, which utilises a 648-bp fragment of the mitochondrial cytochrome oxidase subunit I gene, was evaluated in a pilot study for discrimination between 16 Australian sarcophagids. The current study comprehensively evaluated barcoding for a larger taxon set of 588 Australian sarcophagids. In total, 39 of the 84 known Australian species were represented by 580 specimens, which includes 92% of potentially forensically important species. A further eight specimens could not be identified, but were included nonetheless as six unidentifiable taxa. A neighbour-joining tree was generated and nucleotide sequence divergences were calculated. All species except Sarcophaga (Fergusonimyia) bancroftorum, known for high morphological variability, were resolved as monophyletic (99.2% of cases), with bootstrap support of 100. Excluding S. bancroftorum, the mean intraspecific and interspecific variation ranged from 1.12% and 2.81–11.23%, respectively, allowing for species discrimination. DNA barcoding was therefore validated as a suitable method for molecular identification of Australian Sarcophagidae, which will aid in the implementation of this fauna in forensic entomology.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145506,Identification of Nearctic black flies using DNA barcodes (Diptera: Simuliidae),S624303,R155698,Studied taxonomic group (Biology),R155635,Simuliidae,"DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615‐bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83–15.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0–3.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58–6.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146643,Revision of Nearctic Dasysyrphus Enderlein (Diptera: Syrphidae),S624043,R155663,Studied taxonomic group (Biology),R155668,Syrphidae,"Dasysyrphus Enderlein (Diptera: Syrphidae) has posed taxonomic challenges to researchers in the past, primarily due to their lack of interspecific diagnostic characters. In the present study, DNA data (mitochondrial cytochrome c oxidase sub-unit I—COI) were combined with morphology to help delimit species. This led to two species being resurrected from synonymy (D. laticaudus and D. pacificus) and the discovery of one new species (D. occidualis sp. nov.). An additional new species was described based on morphology alone (D. richardi sp. nov.), as the specimens were too old to obtain COI. Part of the taxonomic challenge presented by this group arises from missing type specimens. Neotypes are designated here for D. pauxillus and D. pinastri to bring stability to these names. An illustrated key to 13 Nearctic species is presented, along with descriptions, maps and supplementary data. A phylogeny based on COI is also presented and discussed.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146938,Evaluation of DNA barcoding and identification of new haplomorphs in Canadian deerflies and horseflies,S599535,R149512,Studied taxonomic group (Biology),R149574,Tabanidae,"This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two‐parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within species. Both the neighbour‐joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (∼10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142535,DNA Barcodes for the Northern European Tachinid Flies (Diptera: Tachinidae),S624698,R155773,Studied taxonomic group (Biology),R155777,Tachinidae,"This data release provides COI barcodes for 366 species of parasitic flies (Diptera: Tachinidae), enabling the DNA based identification of the majority of northern European species and a large proportion of Palearctic genera, regardless of the developmental stage. The data will provide a tool for taxonomists and ecologists studying this ecologically important but challenging parasitoid family. A comparison of minimum distances between the nearest neighbors revealed the mean divergence of 5.52% that is approximately the same as observed earlier with comparable sampling in Lepidoptera, but clearly less than in Coleoptera. Full barcode-sharing was observed between 13 species pairs or triplets, equaling to 7.36% of all species. Delimitation based on Barcode Index Number (BIN) system was compared with traditional classification of species and interesting cases of possible species oversplits and cryptic diversity are discussed. Overall, DNA barcodes are effective in separating tachinid species and provide novel insight into the taxonomy of several genera.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145497,"Half of the European fruit fly species barcoded (Diptera, Tephritidae); a feasibility test for molecular identification",S624371,R155710,Studied taxonomic group (Biology),R155714,Tephritidae,"Abstract A feasibility test of molecular identification of European fruit flies (Diptera: Tephritidae) based on COI barcode sequences has been executed. A dataset containing 555 sequences of 135 ingroup species from three subfamilies and 42 genera and one single outgroup species has been analysed. 73.3% of all included species could be identified based on their COI barcode gene, based on similarity and distances. The low success rate is caused by singletons as well as some problematic groups: several species groups within the genus Terellia and especially the genus Urophora. With slightly more than 100 sequences – almost 20% of the total – this genus alone constitutes the larger part of the failure for molecular identification for this dataset. Deleting the singletons and Urophora results in a success-rate of 87.1% of all queries and 93.23% of the not discarded queries as correctly identified. Urophora is of special interest due to its economic importance as beneficial species for weed control, therefore it is desirable to have alternative markers for molecular identification. We demonstrate that the success of DNA barcoding for identification purposes strongly depends on the contents of the database used to BLAST against. Especially the necessity of including multiple specimens per species of geographically distinct populations and different ecologies for the understanding of the intra- versus interspecific variation is demonstrated. Furthermore thresholds and the distinction between true and false positives and negatives should not only be used to increase the reliability of the success of molecular identification but also to point out problematic groups, which should then be flagged in the reference database suggesting alternative methods for identification.",TRUE,noun
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139546,"A DNA barcode reference library for Swiss butterflies and forester moths as a tool for species identification, systematics and conservation",S628885,R156861,Studied taxonomic group (Biology),R156866,Zygaenidae,"Butterfly monitoring and Red List programs in Switzerland rely on a combination of observations and collection records to document changes in species distributions through time. While most butterflies can be identified using morphology, some taxa remain challenging, making it difficult to accurately map their distributions and develop appropriate conservation measures. In this paper, we explore the use of the DNA barcode (a fragment of the mitochondrial gene COI) as a tool for the identification of Swiss butterflies and forester moths (Rhopalocera and Zygaenidae). We present a national DNA barcode reference library including 868 sequences representing 217 out of 224 resident species, or 96.9% of Swiss fauna. DNA barcodes were diagnostic for nearly 90% of Swiss species. The remaining 10% represent cases of para- and polyphyly likely involving introgression or incomplete lineage sorting among closely related taxa. We demonstrate that integrative taxonomic methods incorporating a combination of morphological and genetic techniques result in a rate of species identification of over 96% in females and over 98% in males, higher than either morphology or DNA barcodes alone. We explore the use of the DNA barcode for exploring boundaries among taxa, understanding the geographical distribution of cryptic diversity and evaluating the status of purportedly endemic taxa. Finally, we discuss how DNA barcodes may be used to improve field practices and ultimately enhance conservation strategies.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54066,Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species,S165683,R54067,Specific traits,L100549,Germination,"Luo J & Cardina J (2012). Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species. Weed Research 52, 112–121. Summary The ability to germinate across different environments has been considered an important trait of invasive plant species that allows for establishment success in new habitats. Using two alien congener species of Asteraceae –Taraxacum officinale (invasive) and Taraxacum laevigatum laevigatum (non-invasive) – we tested the hypothesis that invasive species germinate better than non-invasives under various conditions. The germination patterns of Taraxacum brevicorniculatum, a contaminant found in seeds of the crop Taraxacum kok-saghyz, were also investigated to evaluate its invasive potential. In four experiments, we germinated seeds along gradients of alternating temperature, constant temperature (with or without light), water potential and following accelerated ageing. Neither higher nor lower germination per se explained invasion success for the Taraxacum species tested here. At alternating temperature, the invasive T. officinale had higher germination than or similar to the non-invasive T. laevigatum. Contrary to predictions, T. laevigatum exhibited higher germination than T. officinale in environments of darkness, low water potential or after the seeds were exposed to an ageing process. These results suggested a complicated role of germination in the success of T. officinale. Taraxacum brevicorniculatum showed the highest germination among the three species in all environments. The invasive potential of this species is thus unclear and will probably depend on its performance at other life stages along environmental gradients.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54642,Re-colonisation rate differs between co-existing indigenous and invasive intertidal mussels following major disturbance,S172495,R54643,Type of disturbance,L106095,Storm,"The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. We tested the hypotheses that in the mid- zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the per- cent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M. galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid- zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. perna in more sheltered areas (especially in bays) that are periodically disturbed by storms.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55019,"The role of propagule pressure, genetic diversity and microsite availability for Senecio vernalis invasion",S176393,R55020,Measure of invasion success,L109091,Abundance,"Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure. To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55099,Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna,S177308,R55100,Measure of invasion success,L109846,Abundance,"1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird‐dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy‐fruited species than indigenous species, we mapped the distribution of alien fleshy‐fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy‐fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy‐fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy‐fruited plants were found both beneath host trees and in the open, alien fleshy‐fruited plants were found only beneath trees. 4 Abundance of fleshy‐fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy‐fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy‐fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy‐fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi‐arid African savanna by alien fleshy‐fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem‐level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy‐fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54574,Anthropogenic Disturbance Can Determine the Magnitude of Opportunistic Species Responses on Marine Urban Infrastructures,S171688,R54575,Investigated species,L105424,Algae,"Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic, and invasive species could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54607,Determinants of Caulerpa racemosa distribution in the north-western Mediterranean,S172075,R54608,Investigated species,L105745,Algae,"Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57133,"Functional group diversity, resource preemption and the genesis of invasion resistance in a community of marine algae",S194145,R57134,Investigated species,L121556,Algae,"Although many studies have investigated how community characteristics such as diversity and disturbance relate to invasibility, the mechanisms underlying biotic resistance to introduced species are not well understood. I manipulated the functional group composition of native algal communities and invaded them with the introduced, Japanese seaweed Sargassum muticum to understand how individual functional groups contributed to overall invasion resistance. The results suggested that space preemption by crustose and turfy algae inhibited S. muticum recruitment and that light preemption by canopy and understory algae reduced S. muticum survivorship. However, other mechanisms I did not investigate could have contributed to these two results. In this marine community the sequential preemption of key resources by different functional groups in different stages of the invasion generated resistance to invasion by S. muticum. Rather than acting collectively on a single resource the functional groups in this system were important for preempting either space or light, but not both resources. My experiment has important implications for diversity-invasibility studies, which typically look for an effect of diversity on individual resources. Overall invasion resistance will be due to the additive effects of individual functional groups (or species) summed over an invader's life cycle. Therefore, the cumulative effect of multiple functional groups (or species) acting on multiple resources is an alternative mechanism that could generate negative relationships between diversity and invasibility in a variety of biological systems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53360,Introduction pathway and climate trump ecology and life history as predictors of establishment success in alien frogs and toads,S163378,R53361,Investigated species,L98807,Amphibians,"A major goal for ecology and evolution is to understand how abiotic and biotic factors shape patterns of biological diversity. Here, we show that variation in establishment success of nonnative frogs and toads is primarily explained by variation in introduction pathways and climatic similarity between the native range and introduction locality, with minor contributions from phylogeny, species ecology, and life history. This finding contrasts with recent evidence that particular species characteristics promote evolutionary range expansion and reduce the probability of extinction in native populations of amphibians, emphasizing how different mechanisms may shape species distributions on different temporal and spatial scales. We suggest that contemporary changes in the distribution of amphibians will be primarily determined by human-mediated extinctions and movement of species within climatic envelopes, and less by species-typical traits.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54092,Invasive Microstegium populations consistently outperform native range populations across diverse environments,S165992,R54093, Phenotypic plasticity form,L100805,Biomass,"Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin x environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin x environment interactions, between native (China) and introduced (U.S.) populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54998,COMPETITION BETWEEN NATIVE PERENNIAL AND EXOTIC ANNUAL GRASSES: IMPLICATIONS FOR AN HISTORICAL INVASION,S176164,R54999,Measure of invasion success,L108904,Biomass,"Though established populations of invasive species can exert substantial competitive effects on native populations, exotic propagules may require disturbances that decrease competitive interference by resident species in order to become established. We compared the relative competitiveness of native perennial and exotic annual grasses in a California coastal prairie grassland to test whether the introduction of exotic propagules to coastal grasslands in the 19th century was likely to have been sufficient to shift community composition from native perennial to exotic annual grasses. Under experimental field conditions, we compared the aboveground productivity of native species alone to native species competing with exotics, and exotic species alone to exotic species competing with natives. Over the course of the four-year experiment, native grasses became increasingly dominant in the mixed-assemblage plots containing natives and exotics. Although the competitive interactions in the first growing season favored the exotics, over time the native grasses significantly reduced the productivity of exotic grasses. The number of exotic seedlings emerging and the biomass of dicot seedlings removed during weeding were also significantly lower in plots containing natives as compared to plots that did not contain natives. We found evidence that the ability of established native perennial species to limit space available for exotic annual seeds to germinate and to limit the light available to exotic seedlings reduced exotic productivity and shifted competitive interactions in favor of the natives. If interactions between native perennial and exotic annual grasses follow a similar pattern in other coastal grassland habitats, then the introduction of exotic grass propagules alone without changes in land use or climate, or both, was likely insufficient to convert the region's grasslands.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55129,Propagule pressure and resource availability determine plant community invasibility in a temperate forest understorey,S177651,R55131,Measure of invasion success,L110120,Biomass,"Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m² sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54214,"Phenotypic plasticity, precipitation, and invasiveness in the fire-promoting grass Pennisetum setaceum (poaceae)",S167415,R54215,Specific traits,L101985,Biomass,"Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53340,Patterns of bird invasion are consistent with environmental filtering,S163242,R53343,Investigated species,L98693,Birds,"Predicting invasion potential has global significance for managing ecosystems as well as important theoretical implications for understanding community assembly. Phylogenetic relationships of introduced species to the extant community may be predictive of establishment success because of the opposing forces of competition/shared enemies (which should limit invasions by close relatives) versus environmental filtering (which should allow invasions by close relatives). We examine here the association between establishment success of introduced birds and their phylogenetic relatedness to the extant avifauna within three highly invaded regions (Florida, New Zealand, and Hawaii). Published information on both successful and failed introductions, as well as native species, was compiled for all three regions. We created a phylogeny for each avifauna including all native and introduced bird species. From the estimated branch lengths on these phylogenies, we calculated multiple measurements of relatedness between each introduced species and the extant avifauna. We used generalized linear models to test for an association between relatedness and establishment success. We found that close relatedness to the extant avifauna was significantly associated with increased establishment success for exotic birds both at the regional (Florida, Hawaii, New Zealand) and sub-regional (islands within Hawaii) levels. Our results suggest that habitat filtering may be more important than interspecific competition in avian communities assembled under high rates of anthropogenic species introductions. This work also supports the utility of community phylogenetic methods in the study of vertebrate invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54984,Global patterns of introduction effort and establishment success in birds,S194348,R57152,Investigated species,L121723,Birds,"Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55011,The role of competition and introduction effort in the success of passeriform birds introduced to New Zealand,S192095,R56975,Investigated species,L120037,Birds,"The finding that passeriform birds introduced to the islands of Hawaii and Saint Helena were more likely to successfully invade when fewer other introduced species were present has been interpreted as strong support for the hypothesis that interspecific competition influences invasion success. I tested whether invasions were more likely to succeed when fewer species were present using the records of passeriform birds introduced to four acclimatization districts in New Zealand. I also tested whether introduction effort, measured as the number of introductions and the total number of birds released, could predict invasion outcomes, a result previously established for all birds introduced to New Zealand. I found patterns consistent with both competition and introduction effort as explanations for invasion success. However, data supporting the two explanations were confounded such that the greater success of invaders arriving when fewer other species were present could have been due to a causal relationship between invasion success and introduction effort. Hence, without data on introduction effort, previous studies may have overestimated the degree to which the number of potential competitors could independently explain invasion outcomes and may therefore have overstated the importance of competition in structuring introduced avian assemblages. Furthermore, I suggest that a second pattern in avian invasion success previously attributed to competition, the morphological overdispersion of successful invaders, could also arise as an artifact of variation in introduction effort.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55013,High predictability in introduction outcomes and the geographical range size of introduced Australian birds: a role for climate,S192108,R56976,Investigated species,L120048,Birds,"Summary 1 We investigated factors hypothesized to influence introduction success and subsequent geographical range size in 52 species of bird that have been introduced to mainland Australia. 2 The 19 successful species had been introduced more times, at more sites and in greater overall numbers. Relative to failed species, successfully introduced species also had a greater area of climatically suitable habitat available in Australia, a larger overseas range size and were more likely to have been introduced successfully outside Australia. After controlling for phylogeny these relationships held, except that with overseas range size and, in addition, larger-bodied species had a higher probability of introduction success. There was also a marked taxonomic bias: gamebirds had a much lower probability of success than other species. A model including five of these variables explained perfectly the patterns in introduction success across-species. 3 Of the successful species, those with larger geographical ranges in Australia had a greater area of climatically suitable habitat, traits associated with a faster population growth rate (small body size, short incubation period and more broods per season) and a larger overseas range size. The relationships between range size in Australia, the extent of climatically suitable habitat and overseas range size held after controlling for phylogeny. 4 We discuss the probable causes underlying these relationships and why, in retrospect, the outcome of bird introductions to Australia are highly predictable.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55039,The Influence of Numbers Released on the Outcome of Attempts to Introduce Exotic Bird Species to New Zealand,S192271,R56989,Investigated species,L120185,Birds,"1. Information on the approximate number of individuals released is available for 47 of the 133 exotic bird species introduced to New Zealand in the late 19th and early 20th centuries. Of these, 21 species had populations surviving in the wild in 1969-79. The long interval between introduction and assessment of outcome provides a rare opportunity to examine the factors correlated with successful establishment without the uncertainty of long-term population persistence associated with studies of short duration. 2. The probability of successful establishment was strongly influenced by the number of individuals released during the main period of introductions. Eighty-three per cent of species that had more than 100 individuals released within a 10-year period became established, compared with 21% of species that had less than 100 birds released. The relationship between the probability of establishment and number of birds released was similar to that found in a previous study of introductions of exotic birds to Australia. 3. It was possible to look for a within-family influence on the success of introduction of the number of birds released in nine bird families. A positive influence was found within seven families and no effect in two families. This preponderance of families with a positive effect was statistically significant. 4. A significant effect of body weight on the probability of successful establishment was found, and negative effects of clutch size and latitude of origin. However, the statistical significance of these effects varied according to whether comparison was or was not restricted to within-family variation. After applying the Bonferroni adjustment to significance levels, to allow for the large number of variables and factors being considered, only the effect of the number of birds released was statistically significant. 5. No significant effects on the probability of successful establishment were apparent for the mean date of release, the minimum number of years in which birds were released, the hemisphere of origin (northern or southern) and the size and diversity of latitudinal distribution of the natural geographical range.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55125,Behavioural flexibility predicts invasion success in birds introduced to New Zealand,S193153,R57064,Investigated species,L120917,Birds,"A fundamental question in ecology is whether there are evolutionary characteristics of species that make some better than others at invading new communities. In birds, nesting habits, sexually selected traits, migration, clutch size and body mass have been suggested as important variables, but behavioural flexibility is another obvious trait that has received little attention. Behavioural flexibility allows animals to respond more rapidly to environmental changes and can therefore be advantageous when invading novel habitats. Behavioural flexibility is linked to relative brain size and, for foraging, has been operationalised as the number of innovations per taxon reported in the short note sections of ornithology journals. Here, we use data on avian species introduced to New Zealand and test the link between forebrain size, feeding innovation frequency and invasion success. Relative brain size was, as expected, a significant predictor of introduction success, after removing the effect of introduction effort. Species with relatively larger brains tended to be better invaders than species with smaller ones. Introduction effort, migratory strategy and mode of juvenile development were also significant in the models. Pair-wise comparisons of closely related species indicate that successful invaders also showed a higher frequency of foraging innovations in their region of origin. This study provides the first evidence in vertebrates of a general set of traits, behavioural flexibility, that can potentially favour invasion success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55136,Correlates of Introduction Success in Exotic New Zealand Birds,S193266,R57074,Investigated species,L121010,Birds,"Whether or not a bird species will establish a new population after invasion of uncolonized habitat depends, from theory, on its life-history attributes and initial population size. Data about initial population sizes are often unobtainable for natural and deliberate avian invasions. In New Zealand, however, contemporary documentation of introduction efforts allowed us to systematically compare unsuccessful and successful invaders without bias. We obtained data for 79 species involved in 496 introduction events and used the present-day status of each species as the dependent variable in fitting multiple logistic regression models. We found that introduction efforts for species that migrated within their endemic ranges were significantly less likely to be successful than those for nonmigratory species with similar introduction efforts. Initial population size, measured as number of releases and as the minimum number of propagules liberated in New Zealand, significantly increased the probability of translocation success. A null model showed that species released more times had a higher probability per release of successful establishment. Among 36 species for which data were available, successful invaders had significantly higher natality/mortality ratios. Successful invaders were also liberated at significantly more sites. Invasion of New Zealand by exotic birds was therefore primarily related to management, an outcome that has implications for conservation biology.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56080,The island biogeography of exotic bird species,S182769,R56081,Investigated species,L112279,Birds,"Aim: A recent upsurge of interest in the island biogeography of exotic species has followed from the argument that they may provide valuable information on the natural processes structuring island biotas. Here, we use data on the occurrence of exotic bird species across oceanic islands worldwide to demonstrate an alternative and previously untested hypothesis that these distributional patterns are a simple consequence of where humans have released such species, and hence of the number of species released. Location: Islands around the world. Methods: Statistical analysis of published information on the numbers of exotic bird species introduced to, and established on, islands around the world. Results: Established exotic birds showed very similar species-area relationships to native species, but different species-isolation relationships. However, in both cases the relationship for established exotics simply mimicked that for the number of exotic bird species introduced. Exotic bird introductions scaled positively with human population size and island isolation, and islands that had seen more native species extinctions had had more exotic species released. Main conclusion: The island biogeography of exotic birds is primarily a consequence of human, rather than natural, processes. © 2007 The Authors Journal compilation © 2007 Blackwell Publishing Ltd.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56084,A comparative analysis of the relative success of introduced land birds on islands,S182799,R56085,Investigated species,L112305,Birds,"It has been suggested that more species have been successfully introduced to oceanic islands than to mainland regions. This suggestion has attracted considerable ecological interest and several theoretical mechanisms have been proposed. However, few data are available to test the hypotheses directly, and the pattern may simply result from many more species being transported to islands rather than mainland regions. Here I test this idea using data for global land birds and present evidence that introductions to islands have a higher probability of success than those to mainland regions. This difference between island and mainland landforms is not consistent among either taxonomic families or biogeographic regions. Instead, introduction attempts within the same biogeographic region have been significantly more successful than those that have occurred between two different biogeographic regions. Subsequently, the proportion of introduction attempts that have occurred within a single biogeographic region is thus a significant predictor of the observed variability in introduction success. I also show that the correlates of successful island introductions are probably different to those of successful mainland introductions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56098,Establishment success across convergent Mediterranean ecosystems: an analysis of bird introductions,S192386,R56999,Investigated species,L120280,Birds,"Abstract: Concern over the impact of invaders on biodiversity and on the functioning of ecosystems has generated a rising tide of comparative analyses aiming to unveil the factors that shape the success of introduced species across different regions. One limitation of these studies is that they often compare geographically rather than ecologically defined regions. We propose an approach that can help address this limitation: comparison of invasions across convergent ecosystems that share similar climates. We compared avian invasions in five convergent mediterranean climate systems around the globe. Based on a database of 180 introductions representing 121 avian species, we found that the proportion of bird species successfully established was high in all mediterranean systems (more than 40% for all five regions). Species differed in their likelihood to become established, although success was not higher for those originating from mediterranean systems than for those from nonmediterranean regions. Controlling for this taxonomic effect with generalized linear mixed models, species introduced into mediterranean islands did not show higher establishment success than those introduced to the mainland. Susceptibility to avian invaders, however, differed substantially among the different mediterranean regions. The probability that a species will become established was highest in the Mediterranean Basin and lowest in mediterranean Australia and the South African Cape. Our results suggest that many of the birds recently introduced into mediterranean systems, and especially into the Mediterranean Basin, have a high potential to establish self‐sustaining populations. This finding has important implications for conservation in these biologically diverse hotspots.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56102,Are islands more susceptible to be invaded than continents? Birds say no,S193140,R57063,Investigated species,L120906,Birds,"Island communities are generally viewed as being more susceptible to invasion than those of mainland areas, yet empirical evidence is almost lacking. A species-by-species examination of introduced birds in two independent island-mainland comparisons is not consistent with this hypothesis. In the New Zealand-mainland Australia comparison, 16 species were successful in both regions, 19 always failed and only eight had mixed outcomes. Mixed results were observed less often than expected by chance, and in only 5 cases was the relationship in the predicted direction. This result is not biased by differences in introduction effort because, within species, the number of individuals released in New Zealand did not differ significantly from those released in mainland Australia. A similar result emerged in the Hawaiian islands-mainland USA comparison: among the 35 species considered, 15 were successful in both regions, seven always failed and 13 had mixed outcomes. In this occasion, the results fit well to those expected by chance, and in only seven cases was the relationship in the direction predicted. I therefore conclude that, if true, the view that islands are less resistant than continents to invasions is far from universal.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56970,Sexual plumage differences and the outcome of game bird (Aves: Galliformes) introductions on oceanic islands,S192048,R56971,Investigated species,L119998,Birds,"Galliformes, after Passeriformes, is the group of birds that has been most introduced to oceanic islands. Among Passeriformes, whether the species’ plumage is sexually monochromatic or dichromatic, along with other factors such as introduction effort and interspecific competition, has been identified as a factor that limits introduction success. In this study, we tested the hypothesis that sexually dichromatic plumage reduces the probability of success for 51 species from 26 genera of game birds that were introduced onto 12 oceanic islands. Analyses revealed no significant differences in probability of introduction success between monochromatic and dichromatic species at either the generic or specific levels. We also found no significant difference between these two groups in size of native geographic range, wing length or humanintroduction effort. Our results do not support the hypothesis that sexually dichromatic plumage (probably a response to sexual selection) predicts introduction outcomes of game birds as has been reported for passerine birds. These findings suggest that passerine and non-passerine birds differ fundamentally in terms of factors that could influence introduction outcome, and should therefore be evaluated separately as opposed to lumping these two groups as ‘land birds’.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57061,Patterns of extinction in the introduced Hawaiian avifauna: a reexamination of the role of competition,S193126,R57062,Investigated species,L120894,Birds,"Among introduced passeriform and columbiform birds of the six major Hawaiian islands, some species (including most of those introduced early) may have an intrinsically high probability of successful invasion, whereas others (including many of those introduced from 1900 through 1936) may be intrinsically less likely to succeed. This hypothesis accords well with the observation that, of the 41 species introduced on more than one of the Hawaiian islands, all but four either succeeded everywhere they were introduced or failed everywhere they were introduced, no matter what other species or how many other species were present. Other hypotheses, including competitive ones, are possible. However, most other patterns that have been claimed to support the hypothesis that competitive interactions have been key to which species survived are ambiguous. We propose that the following patterns are true: (1) Extinction rate as a function of number of species present (S) is not better fit by addition of an S2 term. (2) Bill-length differences between pairs of species that invaded together may tend to be less for pairs in which at least one species became extinct, but the result is easily changed by use of one reasonable set of conventions rather than another. In any event, the relationship of bill-length differences to resource overlap has not been established for these species. (3) Surviving forest passeriforms on Oahu may be overdispersed in morphological space, although the species pool used to construct the space may not have been the correct one. (4) Densities of surviving species on species-poor islands have not been shown to exceed those on species-rich islands.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57157,Human-related processes drive the richness of exotic birds in Europe,S194414,R57158,Investigated species,L121777,Birds,"Both human-related and natural factors can affect the establishment and distribution of exotic species. Understanding the relative role of the different factors has important scientific and applied implications. Here, we examined the relative effect of human-related and natural factors in determining the richness of exotic bird species established across Europe. Using hierarchical partitioning, which controls for covariation among factors, we show that the most important factor is the human-related community-level propagule pressure (the number of exotic species introduced), which is often not included in invasion studies due to the lack of information for this early stage in the invasion process. Another, though less important, factor was the human footprint (an index that includes human population size, land use and infrastructure). Biotic and abiotic factors of the environment were of minor importance in shaping the number of established birds when tested at a European extent using 50×50 km2 grid squares. We provide, to our knowledge, the first map of the distribution of exotic bird richness in Europe. The richest hotspot of established exotic birds is located in southeastern England, followed by areas in Belgium and The Netherlands. Community-level propagule pressure remains the major factor shaping the distribution of exotic birds also when tested for the UK separately. Thus, studies examining the patterns of establishment should aim at collecting the crucial and hard-to-find information on community-level propagule pressure or develop reliable surrogates for estimating this factor. Allowing future introductions of exotic birds into Europe should be reconsidered carefully, as the number of introduced species is basically the main factor that determines the number established.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57159,"Ecological biogeography of southern oceanic islands: species-area relationships, human impacts, and conservation",S194449,R57161,Investigated species,L121806,Birds,"Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)—vascular plants; distance (nearest continent) and vascular plant species richness (75%)—insects; area and chlorophyll concentration (65%)—seabirds; and indigenous insect species richness and age (73%)—land birds. Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57787,Low prevalence of haemosporidian parasites in the introduced house sparrow (Passer domesticus) in Brazil,S200643,R57788,Investigated species,L126435,Birds,"Species that are introduced to novel environments can lose their native pathogens and parasites during the process of introduction. The escape from the negative effects associated with these natural enemies is commonly employed as an explanation for the success and expansion of invasive species, which is termed the enemy release hypothesis (ERH). In this study, nested PCR techniques and microscopy were used to determine the prevalence and intensity (respectively) of Plasmodium spp. and Haemoproteus spp. in introduced house sparrows and native urban birds of central Brazil. Generalized linear mixed models were fitted by Laplace approximation considering a binomial error distribution and logit link function. Location and species were considered as random effects and species categorization (native or non-indigenous) as fixed effects. We found that native birds from Brazil presented significantly higher parasite prevalence in accordance with the ERH. We also compared our data with the literature, and found that house sparrows native to Europe exhibited significantly higher parasite prevalence than introduced house sparrows from Brazil, which also supports the ERH. Therefore, it is possible that house sparrows from Brazil might have experienced a parasitic release during the process of introduction, which might also be related to a demographic release (e.g. release from the negative effects of parasites on host population dynamics).",TRUE,noun
R24,Ecology and Evolutionary Biology,R57816,"Diversity, loss, and gain of malaria parasites in a globally invasive bird",S201020,R57817,Investigated species,L126754,Birds,"Invasive species can displace natives, and thus identifying the traits that make aliens successful is crucial for predicting and preventing biodiversity loss. Pathogens may play an important role in the invasive process, facilitating colonization of their hosts in new continents and islands. According to the Novel Weapon Hypothesis, colonizers may out-compete local native species by bringing with them novel pathogens to which native species are not adapted. In contrast, the Enemy Release Hypothesis suggests that flourishing colonizers are successful because they have left their pathogens behind. To assess the role of avian malaria and related haemosporidian parasites in the global spread of a common invasive bird, we examined the prevalence and genetic diversity of haemosporidian parasites (order Haemosporida, genera Plasmodium and Haemoproteus) infecting house sparrows (Passer domesticus). We sampled house sparrows (N = 1820) from 58 locations on 6 continents. All the samples were tested using PCR-based methods; blood films from the PCR-positive birds were examined microscopically to identify parasite species. The results show that haemosporidian parasites in the house sparrows' native range are replaced by species from local host-generalist parasite fauna in the alien environments of North and South America. Furthermore, sparrows in colonized regions displayed a lower diversity and prevalence of parasite infections. Because the house sparrow lost its native parasites when colonizing the American continents, the release from these natural enemies may have facilitated its invasion in the last two centuries. Our findings therefore reject the Novel Weapon Hypothesis and are concordant with the Enemy Release Hypothesis.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56563,Positive interactions between nonindigenous species facilitate transport by human vectors,S187293,R56564,Investigated species,L116261,Bryozoan,"Numerous studies have shown how interactions between nonindigenous species (NIS) can accelerate the rate at which they establish and spread in invaded habitats, leading to an ""invasional meltdown."" We investigated facilitation at an earlier stage in the invasion process: during entrainment of propagules in a transport pathway. The introduced bryozoan Watersipora subtorquata is tolerant of several antifouling biocides and a common component of hull-fouling assemblages, a major transport pathway for aquatic NIS. We predicted that colonies of W. subtorquata act as nontoxic refugia for other, less tolerant species to settle on. We compared rates of recruitment of W. subtorquata and other fouling organisms to surfaces coated with three antifouling paints and a nontoxic primer in coastal marinas in Queensland, Australia. Diversity and abundance of fouling taxa were compared between bryozoan colonies and adjacent toxic or nontoxic paint surfaces. After 16 weeks immersion, W. subtorquata covered up to 64% of the tile surfaces coated in antifouling paint. Twenty-two taxa occurred exclusively on W. subtorquata and were not found on toxic surfaces. Other fouling taxa present on toxic surfaces were up to 248 times more abundant on W. subtorquata. Because biocides leach from the paint surface, we expected a positive relationship between the size of W. subtorquata colonies and the abundance and diversity of epibionts. To test this, we compared recruitment of fouling organisms to mimic W. subtorquata colonies of three different sizes that had the same total surface area. Secondary recruitment to mimic colonies was greater when the surrounding paint surface contained biocides. Contrary to our predictions, epibionts were most abundant on small mimic colonies with a large total perimeter. This pattern was observed in encrusting and erect bryozoans, tubiculous amphipods, and serpulid and sabellid polychaetes, but only in the presence of toxic paint. Our results show that W. subtorquata acts as a foundation species for fouling assemblages on ship hulls and facilitates the transport of other species at greater abundance and frequency than would otherwise be possible. Invasion success may be increased by positive interactions between NIS that enhance the delivery of propagules by human transport vectors.",TRUE,noun
R24,Ecology and Evolutionary Biology,R144046,Land Use and Avian Species Diversity Along an Urban Gradient,S576583,R144048,Focal entity,R144053,Communities,"I examined the distribution and abundance of bird species across an urban gradient, and concomitant changes in community structure, by censusing summer resident bird populations at six sites in Santa Clara County, California (all former oak woodlands). These sites represented a gradient of urban land use that ranged from relatively undisturbed to highly developed, and included a biological preserve, recreational area, golf course, residential neighborhood, office park, and business district. The composition of the bird community shifted from predominantly native species in the undisturbed area to invasive and exotic species in the business district. Species richness, Shannon diversity, and bird biomass peaked at moderately disturbed sites. One or more species reached maximal densities in each of the sites, and some species were restricted to a given site. The predevelopment bird species (assumed to be those found at the most undisturbed site) dropped out gradually as the sites became more urban. These patterns were significantly related to shifts in habitat structure that occurred along the gradient, as determined by canonical correspondence analysis (CCA) using the environmental variables of percent land covered by pavement, buildings, lawn, grasslands, and trees or shrubs. I compared each formal site to four additional sites with similar levels of development within a two-county area to verify that the bird communities at the formal study sites were representative of their land use category.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56686,Treatment-based Markov chain models clarify mechanisms of invasion in an invaded grassland community,S188714,R56687,Ecological Level of evidence,L117435,Community,"What are the relative roles of mechanisms underlying plant responses in grassland communities invaded by both plants and mammals? What type of community can we expect in the future given current or novel conditions? We address these questions by comparing Markov chain community models among treatments from a field experiment on invasive species on Robinson Crusoe Island, Chile. Because of seed dispersal, grazing and disturbance, we predicted that the exotic European rabbit (Oryctolagus cuniculus) facilitates epizoochorous exotic plants (plants with seeds that stick to the skin of an animal) at the expense of native plants. To test our hypothesis, we crossed rabbit exclosure treatments with disturbance treatments, and sampled the plant community in permanent plots over 3 years. We then estimated Markov chain model transition probabilities and found significant differences among treatments. As hypothesized, this modelling revealed that exotic plants survive better in disturbed areas, while natives prefer no rabbits or disturbance. Surprisingly, rabbits negatively affect epizoochorous plants. Markov chain dynamics indicate that an overall replacement of native plants by exotic plants is underway. Using a treatment-based approach to multi-species Markov chain models allowed us to examine the changes in the importance of mechanisms in response to experimental impacts on communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56813,"Structural, compositional and trait differences between native- and non-native-dominated grassland patches",S190126,R56814,Ecological Level of evidence,L118593,Community,"Summary Non-native species with growth forms that are different from the native flora may alter the physical structure of the area they invade, thereby changing the resources available to resident species. This in turn can select for species with traits suited for the new growing environment. We used adjacent uninvaded and invaded grassland patches to evaluate whether the shift in dominance from a native perennial bunchgrass, Nassella pulchra, to the early season, non-native annual grass, Bromus diandrus, affects the physical structure, available light, plant community composition and community-weighted trait means. Our field surveys revealed that the exotic grass B. diandrus alters both the vertical and horizontal structure creating more dense continuous vegetative growth and dead plant biomass than patches dominated by N. pulchra. These differences in physical structure are responsible for a threefold reduction in available light and likely contribute to the lower diversity, especially of native forbs in B. diandrus-dominated patches. Further, flowering time began earlier and seed size and plant height were higher in B. diandrus patches relative to N. pulchra patches. Our results suggest that species that are better suited (earlier phenology, larger seed size and taller) for low light availability are those that coexist with B. diandrus, and this is consistent with our hypothesis that change in physical structure with B. diandrus invasion is an important driver of community and trait composition. The traits of species able to coexist with invaders are rarely considered when assessing community change following invasion; however, this may be a powerful approach for predicting community change in environments with high anthropogenic pressures, such as disturbance and nutrient enrichment. It also provides a means for selecting species to introduce when trying to enhance native diversity in an otherwise invaded community.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56917,"Feeding behaviour, predatory functional response and trophic interactions of the invasive Chinese mitten crab (Eriocheir sinensis) and signal crayfish (Pacifastacus leniusculus)",S191280,R56918,Ecological Level of evidence,L119539,Community,"1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish (Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod > gastropod > periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4. Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in 15N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta. Eriocheir sinensis δ13C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus δ13C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. leniusculus, with potential indirect effects on productivity and energy flow through the community.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53282,Enemy damage of exotic plant species is similar to that of natives and increases with productivity,S201807,R57876,Indicator for enemy release,L127423,Damage,"In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource‐rich habitats as predicted by the resource–enemy release hypothesis (R‐ERH). In 72 populations of 12 exotic invasive species, we scored all visible above‐ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. In a second part of the study, we also tested the ERH and the R‐ERH by comparing damage of plants in 28 pairs of co‐occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource‐rich habitats. Co‐occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co‐occurred in nutrient‐poor or nutrient‐rich habitats. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R‐ERH cannot always explain the invasiveness of introduced species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57600,"Impact of fire on leaf nutrients, arthropod fauna and herbivory of native and exotic eucalypts in Kings Park, Perth, Western Australia",S198250,R57601,Indicator for enemy release,L124415,Damage,"The vegetation of Kings Park, near the centre of Perth, Western Australia, once had an overstorey of Eucalyptus marginata (jarrah) or Eucalyptus gomphocephala (tuart), and many trees still remain in the bushland parts of the Park. Avenues and roadsides have been planted with eastern Australian species, including Eucalyptus cladocalyx (sugar gum) and Eucalyptus botryoides (southern mahogany), both of which have become invasive. The present study examined the effect of a recent burn on the level of herbivory on these native and exotic eucalypts. Leaf damage, shoot extension and number of new leaves were measured on tagged shoots of saplings of each tree species in unburnt and burnt areas over an 8-month period. Leaf macronutrient levels were quantified and the number of arthropods on saplings was measured at the end of the recording period by chemical knockdown. Leaf macronutrients were mostly higher in all four species in the burnt area, and this was associated with generally higher numbers of canopy arthropods and greater levels of leaf damage. It is suggested that the pulse of soil nutrients after the fire resulted in more nutrient-rich foliage, which in turn was more palatable to arthropods. The resulting high levels of herbivory possibly led to reduced shoot extension of E. gomphocephala, E. botryoides and, to a lesser extent, E. cladocalyx. This acts as a negative feedback mechanism that lessens the tendency for lush, post-fire regrowth to outcompete other species of plants. There was no consistent difference in the levels of the various types of leaf damage or of arthropods on the native and the exotic eucalypts, suggesting that freedom from herbivory is not contributing to the invasiveness of the two exotic species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57620,"Herbivory, disease, recruitment limitation, and success of alien and native tree species",S198488,R57621,Indicator for enemy release,L124613,Damage,"The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57632,Enemy release? An experiment with congeneric plant pairs and diverse above- and belowground enemies,S198642,R57633,Indicator for enemy release,L124743,Damage,"Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phylogenetically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked experiments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable. The net effect of these interactions may create ""invasion opportunity windows"": times when introduced species make advances in native communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57635,Invasive exotic plants suffer less herbivory than non-invasive exotic plants,S198680,R57636,Indicator for enemy release,L124775,Damage,"We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57674,The interaction between soil nutrients and leaf loss during early 14 establishment in plant invasion,S199192,R57675,Indicator for enemy release,L125209,Damage,"Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57682,"Experimental field comparison of native and non-native maple seedlings: natural enemies, ecophysiology, growth and survival",S199299,R57683,Indicator for enemy release,L125300,Damage,"1 Acer platanoides (Norway maple) is an important non‐native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first‐season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second‐season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non‐native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non‐native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life‐history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non‐native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non‐native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57685,"When there is no escape: The effects of natural enemies on native, invasive, and noninvasive plants",S199361,R57688,Indicator for enemy release,L125353,Damage,"An important question in the study of biological invasions is the degree to which successful invasion can be explained by release from control by natural enemies. Natural enemies dominate explanations of two alternate phenomena: that most introduced plants fail to establish viable populations (biotic resistance hypothesis) and that some introduced plants become noxious invaders (natural enemies hypothesis). We used a suite of 18 phylogenetically related native and nonnative clovers (Trifolium and Medicago) and the foliar pathogens and invertebrate herbivores that attack them to answer two questions. Do native species suffer greater attack by natural enemies relative to introduced species at the same site? Are some introduced species excluded from native plant communities because they are susceptible to local natural enemies? We address these questions using three lines of evidence: (1) the frequency of attack and composition of fungal pathogens and herbivores for each clover species in four years of common garden experiments, as well as susceptibility to inoculation with a common pathogen; (2) the degree of leaf damage suffered by each species in common garden experiments; and (3) fitness effects estimated using correlative approaches and pathogen removal experiments. Introduced species showed no evidence of escape from pathogens, being equivalent to native species as a group in terms of infection levels, susceptibility, disease prevalence, disease severity (with more severe damage on introduced species in one year), the influence of disease on mortality, and the effect of fungicide treatment on mortality and biomass. In contrast, invertebrate herbivores caused more damage on native species in two years, although the influence of herbivore attack on mortality did not differ between native and introduced species. Within introduced species, the predictions of the biotic resistance hypothesis were not supported: the most invasive species showed greater infection, greater prevalence and severity of disease, greater prevalence of herbivory, and greater effects of fungicide on biomass and were indistinguishable from noninvasive introduced species in all other respects. Therefore, although herbivores preferred native over introduced species, escape from pest pressure cannot be used to explain why some introduced clovers are common invaders in coastal prairie while others are not.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57693,"Tolerance to herbivory, and not resistance, may explain differential success of invasive, naturalized, and native North American temperate vines",S199473,R57696,Indicator for enemy release,L125449,Damage,"Numerous hypotheses suggest that natural enemies can influence the dynamics of biological invasions. Here, we use a group of 12 related native, invasive, and naturalized vines to test the relative importance of resistance and tolerance to herbivory in promoting biological invasions. In a field experiment in Long Island, New York, we excluded mammal and insect herbivores and examined plant growth and foliar damage over two growing seasons. This novel approach allowed us to compare the relative damage from mammal and insect herbivores and whether damage rates were related to invasion. In a greenhouse experiment, we simulated herbivory through clipping and measured growth response. After two seasons of excluding herbivores, there was no difference in relative growth rates among invasive, naturalized, and native woody vines, and all vines were susceptible to damage from mammal and insect herbivores. Thus, differential attack by herbivores and plant resistance to herbivory did not explain invasion success of these species. In the field, where damage rates were high, none of the vines were able to fully compensate for damage from mammals. However, in the greenhouse, we found that invasive vines were more tolerant of simulated herbivory than native and naturalized relatives. Our results indicate that invasive vines are not escaping herbivory in the novel range, rather they are persisting despite high rates of herbivore damage in the field. While most studies of invasive plants and natural enemies have focused on resistance, this work suggests that tolerance may also play a large role in facilitating invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57725,Test of the enemy release hypothesis: The native magpie moth prefers a native fireweed (Senecio pinnatifolius) to its introduced congener (S. madagascariensis),S199852,R57726,Indicator for enemy release,L125767,Damage,"The enemy release hypothesis predicts that native herbivores will either prefer or cause more damage to native than introduced plant species. We tested this using preference and performance experiments in the laboratory and surveys of leaf damage caused by the magpie moth Nyctemera amica on a co-occurring native and introduced species of fireweed (Senecio) in eastern Australia. In the laboratory, ovipositing females and feeding larvae preferred the native S. pinnatifolius over the introduced S. madagascariensis. Larvae performed equally well on foliage of S. pinnatifolius and S. madagascariensis: pupal weights did not differ between insects reared on the two species, but growth rates were significantly faster on S. pinnatifolius. In the field, foliage damage was significantly greater on native S. pinnatifolius than introduced S. madagascariensis. These results support the enemy release hypothesis, and suggest that the failure of native consumers to switch to introduced species contributes to their invasive success. Both plant species experienced reduced, rather than increased, levels of herbivory when growing in mixed populations, as opposed to pure stands in the field; thus, there was no evidence that apparent competition occurred.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57826,Population regulation by enemies of the grass Brachypodium sylvaticum: demography in native and invaded ranges,S201159,R57827,Indicator for enemy release,L126872,Damage,"The enemy-release hypothesis (ERH) states that species become more successful in their introduced range than in their native range because they leave behind natural enemies in their native range and are thus ""released"" from enemy pressures in their introduced range. The ERH is popularly cited to explain the invasive properties of many species and is the underpinning of biological control. We tested the prediction that plant populations are more strongly regulated by natural enemies (herbivores and pathogens) in their native range than in their introduced range with enemy-removal experiments using pesticides. These experiments were replicated at multiple sites in both the native and invaded ranges of the grass Brachypodium sylvaticum. In support of the ERH, enemies consistently regulated populations in the native range. There were more tillers and more seeds produced in treated vs. untreated plots in the native range, and few seedlings survived in the native range. Contrary to the ERH, total measured leaf damage was similar in both ranges, though the enemies that caused it differed. There was more damage by generalist mollusks and pathogens in the native range, and more damage by generalist insect herbivores in the invaded range. Demographic analysis showed that population growth rates were lower in the native range than in the invaded range, and that sexually produced seedlings constituted a smaller fraction of the total in the native range. Our removal experiment showed that enemies regulate plant populations in their native range and suggest that generalist enemies, not just specialists, are important for population regulation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57853,Invading from the garden? A comparison of leaf herbivory for exotic and native plants in natural and ornamental settings,S201531,R57855,Indicator for enemy release,L127189,Damage,"Abstract The enemies release hypothesis proposes that exotic species can become invasive by escaping from predators and parasites in their novel environment. Agrawal et al. (Enemy release? An experiment with congeneric plant pairs and diverse above‐ and below‐ground enemies. Ecology, 86, 2979–2989) proposed that areas or times in which damage to introduced species is low provide opportunities for the invasion of native habitat. We tested whether ornamental settings may provide areas with low levels of herbivory for trees and shrubs, potentially facilitating invasion success. First, we compared levels of leaf herbivory among native and exotic species in ornamental and natural settings in Cincinnati, Ohio, United States. In the second study, we compared levels of herbivory for invasive and noninvasive exotic species between natural and ornamental settings. We found lower levels of leaf damage for exotic species than for native species; however, we found no differences in the amount of leaf damage suffered in ornamental or natural settings. Our results do not provide any evidence that ornamental settings afford additional release from herbivory for exotic plant species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57860,Herbivory by an introduced Asian weevil negatively affects population growth of an invasive Brazilian shrub in Florida,S201612,R57861,Indicator for enemy release,L127257,Damage,"The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57889,"Biogeographic comparisons of herbivore attack, growth and impact of Japanese knotweed between Japan and France",S201996,R57890,Indicator for enemy release,L127583,Damage,"To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non‐native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non‐native ranges. We performed a cross‐range full descriptive, field study in Japan (native range) and France (non‐native range). We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co‐occurrence of Fallopia japonica with other plant species of herbaceous communities. Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far less phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better – plant vigour and dominance in the herbaceous community – in its non‐native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non‐native ranges. Synthesis. As the process by which invasive plants successfully invade ecosystems in their non‐native range is probably multifactorial in most cases, examining several components – plant growth, herbivory load, impact on recipient systems – of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57907,Little evidence for release from herbivores as a driver of plant invasiveness from a multi-species herbivore-removal experiment,S202272,R57911,Indicator for enemy release,L127818,Damage,"Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57926,Grassland fires may favor native over introduced plants by reducing pathogen loads,S202489,R57927,Indicator for enemy release,L128002,Damage,"Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordaceous, Cynosuros echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicale). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57943,Comparison of invertebrate herbivores on native and non-native Senecio species: Implications for the enemy release hypothesis,S202737,R57946,Indicator for enemy release,L128213,Damage,"The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57950,Phytophagous Insects on Native and Non-Native Host Plants: Combining the Community Approach and the Biogeographical Approach,S202849,R57954,Indicator for enemy release,L128309,Damage,"During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57959,No release for the wicked: enemy release is dynamic and not associated with invasiveness,S202936,R57961,Indicator for enemy release,L128382,Damage,"The enemy release hypothesis predicts that invasive species will receive less damage from enemies, compared to co-occurring native and noninvasive exotic species in their introduced range. However, release operating early in invasion could be lost over time and with increased range size as introduced species acquire new enemies. We used three years of data, from 61 plant species planted into common gardens, to determine whether (1) invasive, noninvasive exotic, and native species experience differential damage from insect herbivores and mammalian browsers, and (2) enemy release is lost with increased residence time and geographic spread in the introduced range. We find no evidence suggesting enemy release is a general mechanism contributing to invasiveness in this region. Invasive species received the most insect herbivory, and damage increased with longer residence times and larger range sizes at three spatial scales. Our results show that invasive and exotic species fail to escape enemies, particularly over longer temporal and larger spatial scales.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57964,Natural selection on plant resistance to herbivores in the native and introduced range,S202995,R57965,Indicator for enemy release,L128432,Damage,"Plants introduced into a new range are expected to harbour fewer specialized herbivores and to receive less damage than conspecifics in native ranges. Datura stramonium was introduced in Spain about five centuries ago. Here, we compare damage by herbivores, plant size, and leaf trichomes between plants from non-native and native ranges and perform selection analyses. Non-native plants experienced much less damage, were larger and less pubescent than plants of native populations. While plant size was related to fitness in both ranges, selection to increase resistance was only detected in the native region. We suggest this is a consequence of a release from enemies in this new environment.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57986,Herbivory and the success of Ligustrum lucidum: evidence from a comparison between native and novel ranges,S203276,R57987,Indicator for enemy release,L128669,Damage,"
Invasive plant species may benefit from a reduction in herbivory in their introduced range. The reduced herbivory may cause a reallocation of resources from defence to fitness. Here, we evaluated leaf herbivory of an invasive tree species (Ligustrum lucidum Aiton) in its native and novel ranges, and determined the potential changes in leaf traits that may be associated with the patterns of herbivory. We measured forest structure, damage by herbivores and leaf traits in novel and native ranges, and on the basis of the literature, we identified the common natural herbivores of L. lucidum. We also performed an experiment offering leaves from both ranges to a generalist herbivore (Spodoptera frugiperda). L. lucidum was more abundant and experienced significantly less foliar damage in the novel than in the native range, in spite of the occurrence of several natural herbivores. The reduced lignin content and lower lignin : N ratio in novel leaves, together with the higher herbivore preference for leaves of this origin in the laboratory experiment, indicated lower herbivore resistance in novel than in native populations. The reduced damage by herbivores is not the only factor explaining invasion success, but it may be an important cause that enhances the invasiveness of L. lucidum.
",TRUE,noun
R24,Ecology and Evolutionary Biology,R57994,Can enemy release explain the invasion success of the diploid Leucanthemum vulgare in North America?,S203385,R57995,Indicator for enemy release,L128762,Damage,"Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum. We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum. Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) was comparable to that on L. ircutianum (26 and 71 %) but higher than that on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation was higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57996,A Comparison of Herbivore Damage on Three Invasive Plants and Their Native Congeners: Implications for the Enemy Release Hypothesis,S203408,R57997,Indicator for enemy release,L128781,Damage,"ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis). We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. perfoliata populations if its growth and reproduction is affected by the increased herbivore damage.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56789,Exotic mammals disperse exotic fungi that promote invasion by exotic trees,S189849,R56790,Outcome of interaction,L118365,Dispersal,"Biological invasions are often complex phenomena because many factors influence their outcome. One key aspect is how non-natives interact with the local biota. Interaction with local species may be especially important for exotic species that require an obligatory mutualist, such as Pinaceae species that need ectomycorrhizal (EM) fungi. EM fungi and seeds of Pinaceae disperse independently, so they may use different vectors. We studied the role of exotic mammals as dispersal agents of EM fungi on Isla Victoria, Argentina, where many Pinaceae species have been introduced. Only a few of these tree species have become invasive, and they are found in high densities only near plantations, partly because these Pinaceae trees lack proper EM fungi when their seeds land far from plantations. Native mammals (a dwarf deer and rodents) are rare around plantations and do not appear to play a role in these invasions. With greenhouse experiments using animal feces as inoculum, plus observational and molecular studies, we found that wild boar and deer, both non-native, are dispersing EM fungi. Approximately 30% of the Pinaceae seedlings growing with feces of wild boar and 15% of the seedlings growing with deer feces were colonized by non-native EM fungi. Seedlings growing in control pots were not colonized by EM fungi. We found a low diversity of fungi colonizing the seedlings, with the hypogeous Rhizopogon as the most abundant genus. Wild boar, a recent introduction to the island, appear to be the main animal dispersing the fungi and may be playing a key role in facilitating the invasion of pine trees and even triggering their spread. These results show that interactions among non-natives help explain pine invasions in our study area.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56839,Novel interactions between non-native mammals and fungi facilitate establishment of invasive pines,S190405,R56840,Outcome of interaction,L118821,Dispersal,"The role of novel ecological interactions between mammals, fungi and plants in invaded ecosystems remains unresolved, but may play a key role in the widespread successful invasion of pines and their ectomycorrhizal fungal associates, even where mammal faunas originate from different continents to trees and fungi as in New Zealand. We examine the role of novel mammal associations in dispersal of ectomycorrhizal fungal inoculum of North American pines (Pinus contorta, Pseudotsuga menziesii), and native beech trees (Lophozonia menziesii) using faecal analyses, video monitoring and a bioassay experiment. Both European red deer (Cervus elaphus) and Australian brushtail possum (Trichosurus vulpecula) pellets contained spores and DNA from a range of native and non‐native ectomycorrhizal fungi. Faecal pellets from both animals resulted in ectomycorrhizal infection of pine seedlings with fungal genera Rhizopogon and Suillus, but not with native fungi or the invasive fungus Amanita muscaria, despite video and DNA evidence of consumption of these fungi. Native L. menziesii seedlings never developed any ectomycorrhizal infection from faecal pellet inoculation. Synthesis. Our results show that introduced mammals from Australia and Europe facilitate the co‐invasion of invasive North American trees and Northern Hemisphere fungi in New Zealand, while we find no evidence that introduced mammals benefit native trees or fungi. This novel tripartite ‘invasional meltdown’, comprising taxa from three kingdoms and three continents, highlights unforeseen consequences of global biotic homogenization.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56929,Asiatic Callosciurus squirrels as seed dispersers of exotic plants in the Pampas,S191404,R56930,Outcome of interaction,L119640,Dispersal,"Abstract Seed dispersal by exotic mammals exemplifies mutualistic interactions that can modify the habitat by facilitating the establishment of certain species. We examined the potential for endozoochoric dispersal of exotic plants by Callosciurus erythraeus introduced in the Pampas Region of Argentina. We identified and characterized entire and damaged seeds found in squirrel faeces and evaluated the germination capacity and viability of entire seeds in laboratory assays. We collected 120 samples of squirrel faeces that contained 883 pellets in seasonal surveys conducted between July 2011 and June 2012 at 3 study sites within the main invasion focus of C. erythraeus in Argentina. We found 226 entire seeds in 21% of the samples belonging to 4 species of exotic trees and shrubs. Germination in laboratory assays was recorded for Morus alba and Casuarina sp.; however, germination percentage and rate was higher for seeds obtained from the fruits than for seeds obtained from the faeces. The largest size of entire seeds found in the faeces was 4.2 × 4.0 mm, whereas the damaged seeds had at least 1 dimension ≥ 4.7 mm. Our results indicated that C. erythraeus can disperse viable seeds of at least 2 species of exotic trees. C. erythraeus predated seeds of other naturalized species in the region. The morphometric description suggested a restriction on the maximum size for the passage of entire seeds through the digestive tract of squirrels, which provides useful information to predict its role as a potential disperser or predator of other species in other invaded communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54574,Anthropogenic Disturbance Can Determine the Magnitude of Opportunistic Species Responses on Marine Urban Infrastructures,S171689,R54575,hypothesis,L105425,Disturbance,"Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. 
Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic, and invasive species could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54576,Plant invasions along mountain roads: the altitudinal amplitude of alien Asteraceae forbs in their native and introduced ranges,S171711,R54577,hypothesis,L105443,Disturbance,"Studying plant invasions along environmental gradients is a promising approach to dissect the relative importance of multiple interacting factors that affect the spread of a species in a new range. Along altitudinal gradients, factors such as propagule pressure, climatic conditions and biotic interactions change simultaneously across rather small geographic scales. Here we investigate the distribution of eight Asteraceae forbs along mountain roads in both their native and introduced ranges in the Valais (southern Swiss Alps) and the Wallowa Mountains (northeastern Oregon, USA). We hypothesised that a lack of adaptation and more limiting propagule pressure at higher altitudes in the new range restricts the altitudinal distribution of aliens relative to the native range. However, all but one of the species reached the same or even a higher altitude in the new range. Thus neither the need to adapt to changing climatic conditions nor lower propagule pressure at higher altitudes appears to have prevented the altitudinal spread of introduced populations. We found clear differences between regions in the relative occurrence of alien species in ruderal sites compared to roadsides, and in the degree of invasion away from the roadside, presumably reflecting differences in disturbance patterns between regions. Whilst the upper altitudinal limits of these plant invasions are apparently climatically constrained, factors such as anthropogenic disturbance and competition with native vegetation appear to have greater influence than changing climatic conditions on the distribution of these alien species along altitudinal gradients.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54583,Disturbance as a factor in the distribution of sugar maple and the invasion of Norway maple into a modified woodland,S171793,R54584,hypothesis,L105511,Disturbance,"Disturbances have the potential to increase the success of biological invasions. Norway maple (Acer platanoides), a common street tree native to Europe, is a foreign invasive with greater tolerance and more efficient resource utilization than the native sugar maple (Acer saccharum). This study examined the role disturbances from a road and path played in the invasion of Norway maple and in the distribution of sugar maple. Disturbed areas on the path and nearby undisturbed areas were surveyed for both species along transects running perpendicular to a road. Norway maples were present in greater number closer to the road and on the path, while the number of sugar maples was not significantly associated with either the road or the path. These results suggest that human-caused disturbances have a role in facilitating the establishment of an invasive species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54607,Determinants of Caulerpa racemosa distribution in the north-western Mediterranean,S172076,R54608,hypothesis,L105746,Disturbance,"Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54609,An experimental study of plant community invasibility,S172099,R54610,hypothesis,L105765,Disturbance,"A long—term field experiment in limestone grassland near Buxton (North Derbyshire, United Kingdom) was designed to identify plant attributes and vegetation characteristics conducive to successful invasion. Plots containing crossed, continuous gradients of fertilizer addition and disturbance intensity were subjected to a single—seed inoculum comprising a wide range of plant functional types and 54 species not originally present at the site. Several disturbance treatments were applied; these included the creation of gaps of contrasting size and the mowing of the vegetation to different heights and at different times of the year. This paper analyzes the factors controlling the initial phase of the resulting invasions within the plots subject to gap creation. The susceptibility of the indigenous community to invasion was strongly related to the availability of bare ground created, but greatest success occurred where disturbance coincided with eutrophication. Disturbance damage to the indigenous dominants (particularly Festuca ovina) was an important determinant of seedling establishment by the sown invaders. Large seed size was identified as an important characteristic allowing certain species to establish relatively evenly across the productivity—disturbance matrix; smaller—seeded species were more dependent on disturbance for establishment. Successful and unsuccessful invaders were also distinguished to some extent by differences in germination requirements and present geographical distribution.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54620,A comparison of the urban flora of different phytoclimatic regions in Italy,S172234,R54621,hypothesis,L105878,Disturbance,"This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions. To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment. In addition, consideration is given to the minor role played by the ‘urban heat island’ in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54622,Responses of exotic plant species to fires in Pinus ponderosa forests in northern Arizona,S172265,R54624,hypothesis,L105903,Disturbance,"Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follow such disturbance events. With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54627,Variable effects of feral pig disturbances on native and exotic plants in a California grassland,S172325,R54629,hypothesis,L105953,Disturbance,"Biological invasions are a global phenomenon that can accelerate disturbance regimes and facilitate colonization by other nonnative species. In a coastal grassland in northern California, we conducted a four-year exclosure experiment to assess the effects of soil disturbances by feral pigs (Sus scrofa) on plant community composition and soil nitrogen availability. Our results indicate that pig disturbances had substantial effects on the community, although many responses varied with plant functional group, geographic origin (native vs. exotic), and grassland type. (''Short patches'' were dominated by annual grasses and forbs, whereas ''tall patches'' were dominated by perennial bunchgrasses.) Soil disturbances by pigs increased the richness of exotic plant species by 29% and native taxa by 24%. Although native perennial grasses were unaffected, disturbances reduced the biomass of exotic perennial grasses by 52% in tall patches and had no effect in short patches. Pig disturbances led to a 69% decrease in biomass of exotic annual grasses in tall patches but caused a 62% increase in short patches. Native, nongrass monocots exhibited the opposite biomass pattern as those seen for exotic annual grasses, with disturbance causing an 80% increase in tall patches and a 56% decrease in short patches. Native forbs were unaffected by disturbance, whereas the biomass of exotic forbs increased by 79% with disturbance in tall patches and showed no response in short patches. In contrast to these vegetation results, we found no evidence that pig disturbances affected nitrogen mineralization rates or soil moisture availability. 
Thus, we hypothesize that the observed vegetation changes were due to space clearing by pigs that provided greater opportunities for colonization and reduced intensity of competition, rather than changes in soil characteristics. In summary, although responses were variable, disturbances by feral pigs generally promoted the continued invasion of this coastal grassland by exotic plant taxa.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54638,Exotic invasive species in urban wetlands: environmental correlates and implications for wetland management,S172454,R54639,hypothesis,L106062,Disturbance,"Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. 
The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications . The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment, inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54642,Re-colonisation rate differs between co-existing indigenous and invasive intertidal mussels following major disturbance,S172502,R54643,hypothesis,L106102,Disturbance,"The potential of introduced species to become invasive is often linked to their ability to colonise disturbed habitats rapidly. We studied the effects of major disturbance by severe storms on the indigenous mussel Perna perna and the invasive mussel Mytilus galloprovincialis in sympatric intertidal populations on the south coast of South Africa. At the study sites, these species dominate different shore levels and co-exist in the mid mussel zone. We tested the hypotheses that in the mid-zone P. perna would suffer less dislodgment than M. galloprovincialis, because of its greater tenacity, while M. galloprovincialis would respond with a higher re-colonisation rate. We estimated the percent cover of the 2 mussels in the mid-zone from photographs, once before severe storms and 3 times afterwards. M. galloprovincialis showed faster re-colonisation and 3 times more cover than P. perna 1 and 1.5 yr after the storms (when populations had recovered). Storm-driven dislodgment in the mid-zone was highest for the species that initially dominated at each site, conforming to the concept of compensatory mortality. This resulted in similar cover of the 2 species immediately after the storms. Thus, the storm wave forces exceeded the tenacity even of P. perna, while the higher recruitment rate of M. galloprovincialis can explain its greater colonisation ability. We predict that, because of its weaker attachment strength, M. galloprovincialis will be largely excluded from open coast sites where wave action is generally stronger, but that its greater capacity for exploitation competition through re-colonisation will allow it to outcompete P. perna in more sheltered areas (especially in bays) that are periodically disturbed by storms.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54656,Roads as conduits for exotic plant invasions in a semiarid landscape,S172678,R54658,hypothesis,L106248,Disturbance,"Abstract: Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah (U.S.A.). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities (50 m beyond the edge of the road cut) along 42 roads stratified by level of road improvement (paved, improved surface, graded, and four‐wheel‐drive track). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great (27%) as in verges along four‐wheel‐drive tracks (9%). The cover of five common exotic forb species tended to be lower in verges along four‐wheel‐drive tracks than in verges along more improved roads. The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four‐wheel‐drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. 
Plant communities that are both physically invasible (e.g., characterized by deep or fertile soils) and disturbed appear most vulnerable. Decision‐makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54659,Testing life history correlates of invasiveness using congeneric plant species,S172707,R54660,hypothesis,L106273,Disturbance,"We used three congeneric annual thistles, which vary in their ability to invade California (USA) annual grasslands, to test whether invasiveness is related to differences in life history traits. We hypothesized that populations of these summer-flowering Centaurea species must pass through a demographic gauntlet of survival and reproduction in order to persist and that the most invasive species (C. solstitialis) might possess unique life history characteristics. Using the idea of a demographic gauntlet as a conceptual framework, we compared each congener in terms of (1) seed germination and seedling establishment, (2) survival of rosettes subjected to competition from annual grasses, (3) subsequent growth and flowering in adult plants, and (4) variation in breeding system. Grazing and soil disturbance is thought to affect Centaurea establishment, growth, and reproduction, so we also explored differences among congeners in their response to clipping and to different sizes of soil disturbance. We found minimal differences among congeners in either seed germination responses or seedling establishment and survival. In contrast, differential growth responses of congeners to different sizes of canopy gaps led to large differences in adult size and fecundity. Canopy-gap size and clipping affected the fecundity of each species, but the most invasive species (C. solstitialis) was unique in its strong positive response to combinations of clipping and canopy gaps. In addition, the phenology of C. solstitialis allows this species to extend its growing season into the summer—a time when competition from winter annual vegetation for soil water is minimal. Surprisingly, C. solstitialis was highly self-incompatible while the less invasive species were highly self-compatible. Our results suggest that the invasiveness of C. solstitialis arises, in part, from its combined ability to persist in competition with annual grasses and its plastic growth and reproductive responses to open, disturbed habitat patches. Corresponding Editor: D. P. C. Peters.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54661,Invasibility and abiotic gradients: the positive correlation between native and exotic plant diversity,S172730,R54662,hypothesis,L106292,Disturbance,"We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m 2 ) and small (10 m 2 ) plot sizes, a trend that persisted when relationships to environ- mental gradients were controlled statistically. Both native and exotic species richness in- creased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and com- petitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54675,Land use intensification differentially benefits alien over native predators in agricultural landscape mosaics,S172895,R54676,hypothesis,L106429,Disturbance,"Both anthropogenic habitat disturbance and the breadth of habitat use by alien species have been found to facilitate invasion into novel environments, and these factors have been hypothesized to be important within coccinellid communities specifically. In this study, we address two questions: (1) Do alien species benefit more than native species from human‐disturbed habitats? (2) Are alien species more generalized in their habitat use than natives within the invaded range or can their abundance patterns be explained by specialization on the most common habitats?",TRUE,noun
R24,Ecology and Evolutionary Biology,R54680,"Grassland invasibility and diversity: responses to nutrients, seed input, and disturbance",S172956,R54681,hypothesis,L106480,Disturbance,"The diversity and composition of a community are determined by a com- bination of local and regional processes. We conducted a field experiment to examine the impact of resource manipulations and seed addition on the invasibility and diversity of a low-productivity grassland. We manipulated resource levels both by a disturbance treatment that reduced adult plant cover in the spring of the first year and by addition of fertilizer every year. Seeds of 46 native species, both resident and nonresident to the community, were added in spring of the first year to determine the effects of recruitment limitation from local (seed limitation) and regional (dispersal limitation) sources on local species richness. Our results show that the unmanipulated community was not readily invasible. Seed addition increased the species richness of unmanipulated plots, but this was primarily due to increased occurrence of resident species. Nonresident species were only able to invade following a cover-reduction disturbance. Cover reduction resulted in an increase in nitrogen availability in the first year, but had no measurable effect on light availability in any year. In contrast, fertilization created a persistent increase in nitrogen availability that increased plant cover or biomass and reduced light penetration to ground level. Initially, fertilization had an overall positive effect on species richness, but by the third year, the effect was either negative or neutral. Unlike cover reduction, fertilization had no observable effect on seedling recruitment or occurrence (number of plots) of invading resident or nonresident species. The results of our experiment demonstrate that, although resource fluctuations can increase the invasibility of this grass- land, the community response depends on the nature of the resource change.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54684,Influence of fire and soil nutrients on native and non-native annuals at remnant vegetation edges in the Western Australian wheatbelt,S173002,R54685,hypothesis,L106518,Disturbance,". The effect of fire on annual plants was examined in two vegetation types at remnant vegetation edges in the Western Australian wheatbelt. Density and cover of non-native species were consistently greatest at the reserve edges, decreasing rapidly with increasing distance from reserve edge. Numbers of native species showed little effect of distance from reserve edge. Fire had no apparent effect on abundance of non-natives in Allocasuarina shrubland but abundance of native plants increased. Density of both non-native and native plants in Acacia acuminata-Eucalyptus loxophleba woodland decreased after fire. Fewer non-native species were found in the shrubland than in the woodland in both unburnt and burnt areas, this difference being smallest between burnt areas. Levels of soil phosphorus and nitrate were higher in burnt areas of both communities and ammonium also increased in the shrubland. Levels of soil phosphorus and nitrate were higher at the reserve edge in the unburnt shrubland, but not in the woodland. There was a strong correlation between soil phosphorus levels and abundance of non-native species in the unburnt shrubland, but not after fire or in the woodland. Removal of non-native plants in the burnt shrubland had a strong positive effect on total abundance of native plants, apparently due to increases in growth of smaller, suppressed native plants in response to decreased competition. Two native species showed increased seed production in plots where non-native plants had been removed. There was a general indication that, in the short term, fire does not necessarily increase invasion of these communities by non-native species and could, therefore be a useful management tool in remnant vegetation, providing other disturbances are minimised.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54686,Disturbance Facilitates Invasion: The Effects Are Stronger Abroad than at Home,S173033,R54688,hypothesis,L106543,Disturbance,"Disturbance is one of the most important factors promoting exotic invasion. However, if disturbance per se is sufficient to explain exotic success, then “invasion” abroad should not differ from “colonization” at home. Comparisons of the effects of disturbance on organisms in their native and introduced ranges are crucial to elucidate whether this is the case; however, such comparisons have not been conducted. We investigated the effects of disturbance on the success of Eurasian native Centaurea solstitialis in two invaded regions, California and Argentina, and one native region, Turkey, by conducting field experiments consisting of simulating different disturbances and adding locally collected C. solstitialis seeds. We also tested differences among C. solstitialis genotypes in these three regions and the effects of local soil microbes on C. solstitialis performance in greenhouse experiments. Disturbance increased C. solstitialis abundance and performance far more in nonnative ranges than in the native range, but C. solstitialis biomass and fecundity were similar among populations from all regions grown under common conditions. Eurasian soil microbes suppressed growth of C. solstitialis plants, while Californian and Argentinean soil biota did not. We suggest that escape from soil pathogens may contribute to the disproportionately powerful effect of disturbance in introduced regions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54689,Effect of disturbance and nutrient addition on native and introduced annuals in plant communities in the Western Australian wheatbelt,S173071,R54691,hypothesis,L106575,Disturbance,"To investigate factors affecting the ability of introduced species to invade natural communities in the Western Australian wheatbelt, five communities were examined within a nature reserve near Kellerberrin. Transect studies indicated that introduced annuals were more abundant in woodland than in shrub communities, despite an input of introduced seed into all communities. The response of native and introduced annuals to soil disturbance and fertilizer addition was examined. Small areas were disturbed and/or provided with fertilizer prior to addition of seed of introduced annuals. In most communities, the introduced species used (Avena fatua and Ursinia anthemoides) established well only where the soil had been disturbed, but their growth was increased greatly when fertilizer was also added. Establishment and growth of other introduced species also increased where nutrient addition and soil disturbance were combined. Growth of several native annuals increased greatly with fertilizer addition, but showed little response to disturbance. Fertilizer addition also significantly increased the number of native species present in most communities. This indicates that growth of both native and introduced species is limited by nutrient availability in these communities, but also that introduced species respond more to a combination of nutrient addition and soil disturbance.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54694,Removal of nonnative vines and post-hurricane recruitment in tropical hardwood forests of Florida,S173132,R54696,hypothesis,L106626,Disturbance,"Abstract In hardwood subtropical forests of southern Florida, nonnative vines have been hypothesized to be detrimental, as many species form dense “vine blankets” that shroud the forest. To investigate the effects of nonnative vines in post-hurricane regeneration, we set up four large (two pairs of 30 × 60 m) study areas in each of three study sites. One of each pair was unmanaged and the other was managed by removal of nonnative plants, predominantly vines. Within these areas, we sampled vegetation in 5 × 5 m plots for stems 2 cm DBH (diameter at breast height) or greater and in 2 × 0.5 m plots for stems of all sizes. For five years, at annual censuses, we tagged and measured stems of vines, trees, shrubs and herbs in these plots. For each 5 × 5 m plot, we estimated percent coverage by individual vine species, using native and nonnative vines as classes. We investigated the hypotheses that: (1) plot coverage, occurrence and recruitment of nonnative vines were greater than that of native vines in unmanaged plots; (2) the management program was effective at reducing cover by nonnative vines; and (3) reduction of cover by nonnative vines improved recruitment of seedlings and saplings of native trees, shrubs, and herbs. In unmanaged plots, nonnative vines recruited more seedlings and had a significantly higher plot-cover index, but not a higher frequency of occurrence. Management significantly reduced cover by nonnative vines and had a significant overall positive effect on recruitment of seedlings and saplings of native trees, shrubs and herbs. Management also affected the seedling community (which included vines, trees, shrubs, and herbs) in some unanticipated ways, favoring early successional species for a longer period of time. The vine species with the greatest potential to “strangle” gaps were those that rapidly formed dense cover, had shade tolerant seedling recruitment, and were animal-dispersed. This suite of traits was more common in the nonnative vines than in the native vines. Our results suggest that some vines may alter the spatiotemporal pattern of recruitment sites in a forest ecosystem following a natural disturbance by creating many very shady spots very quickly.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54699,Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest,S173193,R54701,hypothesis,L106677,Disturbance,"Abstract Huisinga, K. D., D. C. Laughlin, P. Z. Fulé, J. D. Springer, and C. M. McGlone (Ecological Restoration Institute and School of Forestry, Northern Arizona University, Box 15017, Flagstaff, AZ 86011). Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest. J. Torrey Bot. Soc. 132: 590–601. 2005.—Intense prescribed fire has been suggested as a possible method for forest restoration in mixed conifer forests. In 1993, a prescribed fire in a dense, never-harvested forest on the North Rim of Grand Canyon National Park escaped prescription and burned with greater intensity and severity than expected. We sampled this burned area and an adjacent unburned area to assess fire effects on understory species composition, diversity, and plant cover. The unburned area was sampled in 1998 and the burned area in 1999; 25% of the plots were resampled in 2001 to ensure that differences between sites were consistent and persistent, and not due to inter-annual climatic differences. Species composition differed significantly between unburned and burned sites; eight species were identified as indicators of the unburned site and thirteen as indicators of the burned site. Plant cover was nearly twice as great in the burned site than in the unburned site in the first years of measurement and was 4.6 times greater in the burned site in 2001. Average and total species richness was greater in the burned site, explained mostly by higher numbers of native annual and biennial forbs. Overstory canopy cover and duff depth were significantly lower in the burned site, and there were significant inverse relationships between these variables and plant species richness and plant cover. Greater than 95% of the species in the post-fire community were native and exotic plant cover never exceeded 1%, in contrast with other northern Arizona forests that were dominated by exotic species following high-severity fires. This difference is attributed to the minimal anthropogenic disturbance history (no logging, minimal grazing) of forests in the national park, and suggests that park managers may have more options than non-park managers to use intense fire as a tool for forest conservation and restoration.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54709,Epifaunal disturbance by periodic low levels of dissolved oxygen: native vs. invasive species response,S173308,R54710,hypothesis,L106774,Disturbance,"Hypoxia is increasing in marine and estuarine systems worldwide, primarily due to anthropogenic causes. Periodic hypoxia represents a pulse disturbance, with the potential to restruc- ture estuarine biotic communities. We chose the shallow, epifaunal community in the lower Chesa- peake Bay, Virginia, USA, to test the hypothesis that low dissolved oxygen (DO) (<4 mg l -1 ) affects community dynamics by reducing the cover of spatial dominants, creating space both for less domi- nant native species and for invasive species. Settling panels were deployed at shallow depths in spring 2000 and 2001 at Gloucester Point, Virginia, and were manipulated every 2 wk from late June to mid-August. Manipulation involved exposing epifaunal communities to varying levels of DO for up to 24 h followed by redeployment in the York River. Exposure to low DO affected both species com- position (presence or absence) and the abundance of the organisms present. Community dominance shifted away from barnacles as level of hypoxia increased. Barnacles were important spatial domi- nants which reduced species diversity when locally abundant. The cover of Hydroides dianthus, a native serpulid polychaete, doubled when exposed to periodic hypoxia. Increased H. dianthus cover may indicate whether a local region has experienced periodic, local DO depletion and thus provide an indicator of poor water-quality conditions. In 2001, the combined cover of the invasive and crypto- genic species in this community, Botryllus schlosseri (tunicate), Molgula manhattensis (tunicate), Ficopomatus enigmaticus (polychaete) and Diadumene lineata (anemone), was highest on the plates exposed to moderately low DO (2 mg l -1 < DO < 4 mg l -1 ). All 4 of these species are now found world- wide and exhibit life histories well adapted for establishment in foreign habitats. Low DO events may enhance success of invasive species, which further stress marine and estuarine ecosystems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54715,"Human activity facilitates altitudinal expansion of exotic plants along a road in montane grassland, South Africa",S173374,R54716,hypothesis,L106828,Disturbance,"ABSTRACT Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m × 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. Nomenclature: Germishuizen & Meyer (2003).",TRUE,noun
R24,Ecology and Evolutionary Biology,R54720,"The influence of anthropogenic disturbance and environmental suitability on the distribution of the nonindigenous amphipod, Echinogammarus ischnus, at Laurentian Great Lakes coastal margins",S173437,R54721,hypothesis,L106881,Disturbance,"ABSTRACT Invasion ecology offers a unique opportunity to examine drivers of ecological processes that regulate communities. Biotic resistance to nonindigenous species establishment is thought to be greater in communities that have not been disturbed by human activities. Alternatively, invasion may occur wherever environmental conditions are appropriate for the colonist, regardless of the composition of the existing community and the level of disturbance. We tested these hypotheses by investigating distribution of the nonindigenous amphipod, Echinogammarus ischnus Stebbing, 1899, in co-occurrence with a widespread amphipod, Gammarus fasciatus Say, 1818, at 97 sites across the Laurentian Great Lakes coastal margins influenced by varying types and levels of anthropogenic stress. E. Ischnus was distributed independently of disturbance gradients related to six anthropogenic disturbance variables that summarized overall nutrient input, nitrogen, and phosphorus load carried from the adjacent coastal watershed, agricultural land area, human population density, overall pollution loading, and the site-specific dominant stressor, consistent with the expectations of regulation by general environmental characteristics. Our results support the view that the biotic facilitation by dreissenid mussels and distribution of suitable habitats better explain E. ischnus' distribution at Laurentian Great Lakes coastal margins than anthropogenic disturbance.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54722,Alien plant dynamics following fire in Mediterranean-climate California shrublands,S173470,R54724,hypothesis,L106908,Disturbance,"Over 75 species of alien plants were recorded during the first five years after fire in southern California shrublands, most of which were European annuals. Both cover and richness of aliens varied between years and plant association. Alien cover was lowest in the first postfire year in all plant associations and remained low during succession in chaparral but increased in sage scrub. Alien cover and richness were significantly correlated with year (time since disturbance) and with precipitation in both coastal and interior sage scrub associations. Hypothesized factors determining alien dominance were tested with structural equation modeling. Models that included nitrogen deposition and distance from the coast were not significant, but with those variables removed we obtained a significant model that gave an R 2 5 0.60 for the response variable of fifth year alien dominance. Factors directly affecting alien dominance were (1) woody canopy closure and (2) alien seed banks. Significant indirect effects were (3) fire intensity, (4) fire history, (5) prefire stand structure, (6) aridity, and (7) community type. According to this model the most critical factor in- fluencing aliens is the rapid return of the shrub and subshrub canopy. Thus, in these communities a single functional type (woody plants) appears to the most critical element controlling alien invasion and persistence. Fire history is an important indirect factor be- cause it affects both prefire stand structure and postfire alien seed banks. Despite being fire-prone ecosystems, these shrublands are not adapted to fire per se, but rather to a particular fire regime. Alterations in the fire regime produce a very different selective environment, and high fire frequency changes the selective regime to favor aliens. This study does not support the widely held belief that prescription burning is a viable man- agement practice for controlling alien species on semiarid landscapes.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54725,Fire and grazing impacts on plant diversity and alien plant invasions in the southern Sierra Nevada,S173524,R54728,hypothesis,L106954,Disturbance,"Patterns of native and alien plant diversity in response to disturbance were examined along an elevational gradient in blue oak savanna, chaparral, and coniferous forests. Total species richness, alien species richness, and alien cover declined with elevation, at scales from 1 to 1000 m2. We found no support for the hypothesis that community diversity inhibits alien invasion. At the 1-m2 point scale, where we would expect competitive interactions between the largely herbaceous flora to be most intense, alien species richness as well as alien cover increased with increasing native species richness in all communities. This suggests that aliens are limited not by the number of native competitors, but by resources that affect establishment of both natives and aliens. Blue oak savannas were heavily dominated by alien species and consistently had more alien than native species at the 1-m2 scale. All of these aliens are annuals, and it is widely thought that they have displaced native bunchgrasses. If true, this...",TRUE,noun
R24,Ecology and Evolutionary Biology,R54736,"Species introductions, diversity and disturbances in marine macrophyte assemblages of the northwestern Mediterranean Sea",S173635,R54737,hypothesis,L107047,Disturbance,"In the process of species introduction, the traits that enable a species to establish and spread in a new habitat, and the habitat characteristics that determine the susceptibility to intro- duced species play a major role. Among the habitat characteristics that render a habitat resistant or susceptible to introductions, species diversity and disturbance are believed to be the most important. It is generally assumed that high species richness renders a habitat resistant to introductions, while disturbances enhance their susceptibility. In the present study, these 2 hypotheses were tested on NW Mediterranean shallow subtidal macrophyte assemblages. Data collection was carried out in early summer 2002 on sub-horizontal rocky substrate at 9 sites along the French Mediterranean coast, 4 undisturbed and 5 highly disturbed. Disturbances include cargo, naval and passenger har- bours, and industrial and urban pollution. Relationships between species richness (point diversity), disturbances and the number of introduced macrophytes were analysed. The following conclusions were drawn: (1) there is no relationship between species introductions, diversity and disturbance for the macrophyte assemblages; (2) multifactorial analyses only revealed the biogeographical relation- ships between the native flora of the sites.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54744,Biogenic disturbance determines invasion success in a subtidal soft-sediment system,S173723,R54745,hypothesis,L107119,Disturbance,"Theoretically, disturbance and diversity can influence the success of invasive colonists if (1) resource limitation is a prime determinant of invasion success and (2) disturbance and diversity affect the availability of required resources. However, resource limitation is not of overriding importance in all systems, as exemplified by marine soft sediments, one of Earth's most widespread habitat types. Here, we tested the disturbance-invasion hypothesis in a marine soft-sediment system by altering rates of biogenic disturbance and tracking the natural colonization of plots by invasive species. Levels of sediment disturbance were controlled by manipulating densities of burrowing spatangoid urchins, the dominant biogenic sediment mixers in the system. Colonization success by two invasive species (a gobiid fish and a semelid bivalve) was greatest in plots with sediment disturbance rates < 500 cm(3) x m(-2) x d(-1), at the low end of the experimental disturbance gradient (0 to > 9000 cm(3) x m(-2) x d(-1)). Invasive colonization declined with increasing levels of sediment disturbance, counter to the disturbance-invasion hypothesis. Increased sediment disturbance by the urchins also reduced the richness and diversity of native macrofauna (particularly small, sedentary, surface feeders), though there was no evidence of increased availability of resources with increased disturbance that would have facilitated invasive colonization: sediment food resources (chlorophyll a and organic matter content) did not increase, and space and access to overlying water were not limited (low invertebrate abundance). Thus, our study revealed the importance of biogenic disturbance in promoting invasion resistance in a marine soft-sediment community, providing further evidence of the valuable role of bioturbation in soft-sediment systems (bioturbation also affects carbon processing, nutrient recycling, oxygen dynamics, benthic community structure, and so on.). Bioturbation rates are influenced by the presence and abundance of large burrowing species (like spatangoid urchins). Therefore, mass mortalities of large bioturbators could inflate invasion risk and alter other aspects of ecosystem performance in marine soft-sediment habitats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54751,Old World Climbing Fern (Lygodium microphyllum) Invasion in Hurricane Caused Treefalls,S173807,R54752,hypothesis,L107189,Disturbance,"ABSTRACT: We examined effects of a natural disturbance (hurricanes) on potential invasion of tree islands by an exotic plant (Old World climbing fern, Lygodium microphyllum) in the Arthur R. Marshall Loxahatchee National Wildlife Refuge, Florida. Three major hurricanes in 2004 and 2005 caused varying degrees of impacts to trees on tree islands within the Refuge. Physical impacts of hurricanes were hypothesized to promote invasion and growth of L. microphyllum. We compared presence and density of L. microphyllum in plots of disturbed soil created by hurricane-caused treefalls to randomly selected non-disturbed plots on 12 tree islands. We also examined relationships between disturbed area size, canopy cover, and presence of standing water on presence and density of L. microphyllum. Lygodium microphyllum was present in significantly more treefall plots than random non-treefall plots (76% of the treefall plots (N=55) and only 14% of random non-treefall plots (N=55)). Density of L. microphyllum was higher in treefall plots compared to random non-disturbed plots (6.0 stems per m2 for treefall plots; 0.5 stems per m2 for random non-disturbed plots), and L. microphyllum density was correlated with disturbed area size (P = 0.005). Lygodium microphyllum presence in treefall sites was significantly related to canopy cover and presence of water: it was present in five times more treefalls with water than those without. These results suggest that disturbances, such as hurricanes, that result in canopy openings and the creation of disturbed areas with standing water contribute to the ability of L. microphyllum to invade natural areas.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54753,Are invasive species the drivers or passengers of change in degraded ecosystems?,S173830,R54754,hypothesis,L107208,Disturbance,"Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The ''driver'' model predicts that invaded communities are highly interactive, with subordinate native species being limited or excluded by competition from the exotic dominants. The ''passenger'' model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and reproduction of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. 
While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long- term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54761,Scaling Disturbance Instead of Richness to Better Understand Anthropogenic Impacts on Biodiversity,S173920,R54762,hypothesis,L107282,Disturbance,"A primary impediment to understanding how species diversity and anthropogenic disturbance are related is that both diversity and disturbance can depend on the scales at which they are sampled. While the scale dependence of diversity estimation has received substantial attention, the scale dependence of disturbance estimation has been essentially overlooked. Here, we break from conventional examination of the diversity-disturbance relationship by holding the area over which species richness is estimated constant and instead manipulating the area over which human disturbance is measured. In the boreal forest ecoregion of Alberta, Canada, we test the dependence of species richness on disturbance scale, the scale-dependence of the intermediate disturbance hypothesis, and the consistency of these patterns in native versus exotic species and among human disturbance types. We related field observed species richness in 1 ha surveys of 372 boreal vascular plant communities to remotely sensed measures of human disturbance extent at two survey scales: local (1 ha) and landscape (18 km2). Supporting the intermediate disturbance hypothesis, species richness-disturbance relationships were quadratic at both local and landscape scales of disturbance measurement. This suggests the shape of richness-disturbance relationships is independent of the scale at which disturbance is assessed, despite that local diversity is influenced by disturbance at different scales by different mechanisms, such as direct removal of individuals (local) or indirect alteration of propagule supply (landscape). By contrast, predictions of species richness did depend on scale of disturbance measurement: with high local disturbance richness was double that under high landscape disturbance.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54767,"Predicting Richness of Native, Rare, and Exotic Plants in Response to Habitat and Disturbance Variables across a Variegated Landscape",S173995,R54769,hypothesis,L107343,Disturbance,"Species richness of native, rare native, and exotic understorey plants was recorded at 120 sites in temperate grassy vegetation in New South Wales. Linear models were used to predict the effects of environment and disturbance on the richness of each of these groups. Total native species and rare native species showed similar responses, with richness declining on sites of increasing natural fertility of parent material as well as declining under conditions of water",TRUE,noun
R24,Ecology and Evolutionary Biology,R54770,Disturbance-mediated competition and the spread of Phragmites australis in a coastal marsh,S174023,R54771,hypothesis,L107367,Disturbance,"In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. 
A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54778,Six years of plant community development after clearcut harvesting in western Washington,S174124,R54780,hypothesis,L107450,Disturbance,What roles do ruderals and residuals play in early forest succession and how does repeated disturbance affect them? We examined this question by monitoring plant cover and composition on a producti...,TRUE,noun
R24,Ecology and Evolutionary Biology,R54781,Are invaders disturbance-limited? Conservation of mountain grasslands in Central Argentina,S174161,R54783,hypothesis,L107481,Disturbance,"Abstract Extensive areas in the mountain grasslands of central Argentina are heavily invaded by alien species from Europe. A decrease in biodiversity and a loss of palatable species is also observed. The invasibility of the tall-grass mountain grassland community was investigated in an experiment of factorial design. Six alien species which are widely distributed in the region were sown in plots where soil disturbance, above-ground biomass removal by cutting and burning were used as treatments. Alien species did not establish in undisturbed plots. All three types of disturbances increased the number and cover of alien species; the effects of soil disturbance and biomass removal was cumulative. Cirsium vulgare and Oenothera erythrosepala were the most efficient alien colonizers. In conditions where disturbances did not continue the cover of aliens started to decrease in the second year, by the end of the third season, only a few adults were established. Consequently, disturbances are needed to maintain ali...",TRUE,noun
R24,Ecology and Evolutionary Biology,R54786,Pollution reduces native diversity and increases invader dominance in marine hard-substrate communities,S174222,R54788,hypothesis,L107532,Disturbance,"Anthropogenic disturbance is considered a risk factor in the establishment of non‐indigenous species (NIS); however, few studies have investigated the role of anthropogenic disturbance in facilitating the establishment and spread of NIS in marine environments. A baseline survey of native and NIS was undertaken in conjunction with a manipulative experiment to determine the effect that heavy metal pollution had on the diversity and invasibility of marine hard‐substrate assemblages. The study was repeated at two sites in each of two harbours in New South Wales, Australia. The survey sampled a total of 47 sessile invertebrate taxa, of which 15 (32%) were identified as native, 19 (40%) as NIS, and 13 (28%) as cryptogenic. Increasing pollution exposure decreased native species diversity at all study sites by between 33% and 50%. In contrast, there was no significant change in the numbers of NIS. Percentage cover was used as a measure of spatial dominance, with increased pollution exposure leading to increased NIS dominance across all sites. At three of the four study sites, assemblages that had previously been dominated by natives changed to become either extensively dominated by NIS or equally occupied by native and NIS alike. No single native or NIS was repeatedly responsible for the observed changes in native species diversity or NIS dominance at all sites. Rather, the observed effects of pollution were driven by a diverse range of taxa and species. These findings have important implications for both the way we assess pollution impacts, and for the management of NIS. When monitoring the response of assemblages to pollution, it is not sufficient to simply assess changes in community diversity. 
Rather, it is important to distinguish native from NIS components since both are expected to respond differently. In order to successfully manage current NIS, we first need to address levels of pollution within recipient systems in an effort to bolster the resilience of native communities to invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54789,Conservation of the Grassy White Box Woodlands: Relative Contributions of Size and Disturbance to Floristic Composition and Diversity of Remnants,S174251,R54790,hypothesis,L107557,Disturbance,"Before European settlement, grassy white box woodlands were the dominant vegetation in the east of the wheat-sheep belt of south-eastern Australia. Tree clearing, cultivation and pasture improvement have led to fragmentation of this once relatively continuous ecosystem, leaving a series of remnants which themselves have been modified by livestock grazing. Little-modified remnants are extremely rare. We examined and compared the effects of fragmentation and disturbance on the understorey flora of woodland remnants, through a survey of remnants of varying size, grazing history and tree clearing. In accordance with fragmentation theory, species richness generally increased with remnant size, and, for little-grazed remnants, smaller remnants were more vulnerable to weed invasion. Similarly, tree clearing and grazing encouraged weed invasion and reduced native species richness. Evidence for increased total species richness at intermediate grazing levels, as predicted by the intermediate disturbance hypothesis, was equivocal. Remnant quality was more severely affected by grazing than by remnant size. All little-grazed remnants had lower exotic species abundance and similar or higher native species richness than grazed remnants, despite their extremely small sizes (< 6 ha). Further, small, little-grazed remnants maintained the general character of the pre-European woodland understorey, while grazing caused changes to the dominant species. Although generally small, the little-grazed remnants are the best representatives of the pre-European woodland understorey, and should be central to any conservation plan for the woodlands. Selected larger remnants are needed to complement these, however, to increase the total area of woodland conserved, and, because most little-grazed remnants are cleared, to represent the ecosystem in its original structural form. For the maintenance of native plant diversity and composition in little-grazed remnants, it is critical that livestock grazing continues to be excluded. For grazed remnants, maintenance of a site in its current state would allow continuation of past management, while restoration to a pre-European condition would require management directed towards weed removal, and could take advantage of the difference noted in the predominant life-cycle of native (perennial) versus exotic (annual or biennial) species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54791,EFFECTS OF DISTURBANCE ON HERBACEOUS EXOTIC PLANT-SPECIES ON THE FLOODPLAIN OF THE POTOMAC RIVER,S174273,R54792,hypothesis,L107575,Disturbance,"The objective of this study was to investigate specific effects of disturbance on exotic species in floodplain environments and to provide baseline data on the abundance of exotic herbs in the Potomac River floodplain. Frequency of exotics generally increased with man-made disturbance (forest fragmentation and recreational use of land) and decreased with increasing flooding frequency. Species richness of exotics followed a similar pattern. Some variation was found in individual species' responses to disturbance. The spread of Alliaria officinalis and Glecoma hederacea, the most frequent exotic species, was inhibited by forest fragmentation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54795,Functional and performance comparisons of invasive Hieracium lepidulum and co-occurring species in New Zealand,S174320,R54796,hypothesis,L107614,Disturbance,"One of the key environmental factors affecting plant species abundance, including that of invasive exotics, is nutrient resource availability. Plant functional response to nutrient availability, and what this tells us about plant interactions with associated species, may therefore give us clues about underlying processes related to plant abundance and invasion. Patterns of abundance of Hieracium lepidulum, a European herbaceous invader of subalpine New Zealand, appear to be related to soil fertility/nutrient availability, however, abundance may be influenced by other factors including disturbance. In this study we compare H. lepidulum and field co-occurring species for growth performance across artificial nutrient concentration gradients, for relative competitiveness and for response to disturbance, to construct a functional profile of the species. Hieracium lepidulum was found to be significantly different in its functional response to nutrient concentration gradients. Hieracium lepidulum had high relative growth rate, high yield and root plasticity in response to nutrient concentration dilution, relatively low absolute yield, low competitive yield and a positive response to clipping disturbance relative to other species. Based on overall functional response to nutrient concentration gradients, compared with other species found at the same field sites, we hypothesize that H. lepidulum invasion is not related to competitive domination. Relatively low tolerance of nutrient dilution leads us to predict that H. lepidulum is likely to be restricted from invading low fertility sites, including sites within alpine vegetation or where intact high biomass plant communities are found. 
Positive response to clipping disturbance and relatively high nutrient requirement, despite poor competitive performance, leads us to predict that H. lepidulum may respond to selective grazing disturbance of associated vegetation. These results are discussed in relation to published observations of H. lepidulum in New Zealand and possible tests for the hypotheses raised here.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54797,Mammals of the northern Philippines: tolerance for habitat disturbance and resistance to invasive species in an endemic insular fauna,S174342,R54798,hypothesis,L107632,Disturbance,"Aim Island faunas, particularly those with high levels of endemism, usually are considered especially susceptible to disruption from habitat disturbance and invasive alien species. We tested this general hypothesis by examining the distribution of small mammals along gradients of anthropogenic habitat disturbance in northern Luzon Island, an area with a very high level of mammalian endemism.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54803,"Relationship between productivity, and species and functional group diversity in grazed and non-grazed Pampas grassland",S174420,R54805,hypothesis,L107696,Disturbance,"Most hypotheses addressing the effect of diversity on ecosystem function indicate the occurrence of higher process rates with increasing diversity, and only diverge in the shape of the function depending on their assumptions about the role of individual species and functional groups. Contrarily to these predictions, we show that grazing of the Flooding Pampas grasslands increased species richness, but drastically reduced above ground net primary production, even when communities with similar initial biomass were compared. Grazing increased species richness through the addition of a number of exotic forbs, without reducing the richness and cover of the native flora. Since these forbs were essentially cool-season species, and also because their introduction has led to the displacement of warm-season grasses from dominant to subordinate positions in the community, grazing not only decreased productivity, but also shifted its seasonality towards the cool season. These results suggest that species diversity and/or richness alone are poor predictors of above-ground primary production. Therefore, models that relate productivity to diversity should take into account the relative abundance and identity of species that are added or deleted by the specific disturbances that modify diversity.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54819,"Distribution of an alien aquatic snail in relation to flow variability, human activities and water quality",S174617,R54821,hypothesis,L107861,Disturbance,"1. Disturbance and anthropogenic land use changes are usually considered to be key factors facilitating biological invasions. However, specific comparisons of invasion success between sites affected to different degrees by these factors are rare. 2. In this study we related the large-scale distribution of the invading New Zealand mud snail ( Potamopyrgus antipodarum ) in southern Victorian streams, Australia, to anthropogenic land use, flow variability, water quality and distance from the site to the sea along the stream channel. 3. The presence of P. antipodarum was positively related to an index of flow-driven disturbance, the coefficient of variability of mean daily flows for the year prior to the study. 4. Furthermore, we found that the invader was more likely to occur at sites with multiple land uses in the catchment, in the forms of grazing, forestry and anthropogenic developments (e.g. towns and dams), compared with sites with low-impact activities in the catchment. However, this relationship was confounded by a higher likelihood of finding this snail in lowland sites close to the sea. 5. We conclude that P. antipodarum could potentially be found worldwide at sites with similar ecological characteristics. We hypothesise that its success as an invader may be related to an ability to quickly re-colonise denuded areas and that population abundances may respond to increased food resources. Disturbances could facilitate this invader by creating spaces for colonisation (e.g. a possible consequence of floods) or changing resource levels (e.g. increased nutrient levels in streams with intense human land use in their catchments).",TRUE,noun
R24,Ecology and Evolutionary Biology,R54822,"Invasion, competitive dominance, and resource use by exotic and native California grassland species",S174645,R54823,hypothesis,L107885,Disturbance,"The dynamics of invasive species may depend on their abilities to compete for resources and exploit disturbances relative to the abilities of native species. We test this hypothesis and explore its implications for the restoration of native ecosystems in one of the most dramatic ecological invasions worldwide, the replacement of native perennial grasses by exotic annual grasses and forbs in 9.2 million hectares of California grasslands. The long-term persistence of these exotic annuals has been thought to imply that the exotics are superior competitors. However, seed-addition experiments in a southern California grassland revealed that native perennial species, which had lower requirements for deep soil water, soil nitrate, and light, were strong competitors, and they markedly depressed the abundance and fecundity of exotic annuals after overcoming recruitment limitations. Native species reinvaded exotic grasslands across experimentally imposed nitrogen, water, and disturbance gradients. Thus, exotic annuals are not superior competitors but rather may dominate because of prior disturbance and the low dispersal abilities and extreme current rarity of native perennials. If our results prove to be general, it may be feasible to restore native California grassland flora to at least parts of its former range.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54824,Pre-fire fuel reduction treatments influence plant communities and exotic species 9 years after a large wildfire,S174670,R54825,hypothesis,L107906,Disturbance,"Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo–Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. 
Conclusions: Pre-treatment fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. The increase in relative cover and frequency though time indicates that some species are proliferating, and continued monitoring is recommended. Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54828,Quantifying the impact of an extreme climate event on species diversity in fragmented temperate forests: the effect of the October 1987 storm on British broadleaved woodlands,S174715,R54829,hypothesis,L107943,Disturbance,"We report the impact of an extreme weather event, the October 1987 severe storm, on fragmented woodlands in southern Britain. We analysed ecological changes between 1971 and 2002 in 143 200‐m2 plots in 10 woodland sites exposed to the storm with an ecologically equivalent sample of 150 plots in 16 non‐exposed sites. Comparing both years, understorey plant species‐richness, species composition, soil pH and woody basal area of the tree and shrub canopy were measured. We tested the hypothesis that the storm had deflected sites from the wider national trajectory of an increase in woody basal area and reduced understorey species‐richness associated with ageing canopies and declining woodland management. We also expected storm disturbance to amplify the background trend of increasing soil pH, a UK‐wide response to reduced atmospheric sulphur deposition. Path analysis was used to quantify indirect effects of storm exposure on understorey species richness via changes in woody basal area and soil pH. By 2002, storm exposure was estimated to have increased mean species richness per 200 m2 by 32%. Woody basal area changes were highly variable and did not significantly differ with storm exposure. Increasing soil pH was associated with a 7% increase in richness. There was no evidence that soil pH increased more as a function of storm exposure. Changes in species richness and basal area were negatively correlated: a 3.4% decrease in richness occurred for every 0.1‐m2 increase in woody basal area per plot. 
Despite all sites substantially exceeding the empirical critical load for nitrogen deposition, there was no evidence that in the 15 years since the storm, disturbance had triggered a eutrophication effect associated with dominance of gaps by nitrophilous species. Synthesis. Although the impacts of the 1987 storm were spatially variable in terms of impacts on woody basal area, the storm had a positive effect on understorey species richness. There was no evidence that disturbance had increased dominance of gaps by invasive species. This could change if recovery from acidification results in a soil pH regime associated with greater macronutrient availability.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54836,How grazing and soil quality affect native and exotic plant diversity in rocky mountain grasslands,S174818,R54838,hypothesis,L108028,Disturbance,"We used multiscale plots to sample vascular plant diversity and soil characteristics in and adjacent to 26 long-term grazing exclosure sites in Colorado, Wyoming, Montana, and South Dakota, USA. The exclosures were 7–60 yr old (31.2 ± 2.5 yr, mean ± 1 se). Plots were also randomly placed in the broader landscape in open rangeland in the same vegetation type at each site to assess spatial variation in grazed landscapes. Consistent sampling in the nine National Parks, Wildlife Refuges, and other management units yielded data from 78 1000-m2 plots and 780 1-m2 subplots. We hypothesized that native species richness would be lower in the exclosures than in grazed sites, due to competitive exclusion in the absence of grazing. We also hypothesized that grazed sites would have higher native and exotic species richness compared to ungrazed areas, due to disturbance (i.e., the intermediate-disturbance hypothesis) and the conventional wisdom that grazing may accelerate weed invasion. Both hypotheses were soundly rej...",TRUE,noun
R24,Ecology and Evolutionary Biology,R54841,Lack of native species recovery following severe exotic disturbance in southern Californian shrublands,S174878,R54843,hypothesis,L108078,Disturbance,"Summary 1. Urban and agricultural activities are not part of natural disturbance regimes and may bear little resemblance to them. Such disturbances are common in densely populated semi-arid shrub communities of the south-western US, yet successional studies in these regions have been limited primarily to natural successional change and the impact of human-induced changes on natural disturbance regimes. Although these communities are resilient to recurrent and large-scale disturbance by fire, they are not necessarily well-adapted to recover from exotic disturbances. 2. This study investigated the effects of severe exotic disturbance (construction, heavy-vehicle activity, landfill operations, soil excavation and tillage) on shrub communities in southern California. These disturbances led to the conversion of indigenous shrublands to exotic annual communities with low native species richness. 3. Nearly 60% of the cover on disturbed sites consisted of exotic annual species, while undisturbed sites were primarily covered by native shrub species (68%). Annual species dominant on disturbed sites included Erodium botrys, Hypochaeris glabra, Bromus spp., Vulpia myuros and Avena spp. 4. The cover of native species remained low on disturbed sites even 71 years after initial exotic disturbance ceased. Native shrub seedlings were also very infrequent on disturbed sites, despite the presence of nearby seed sources. Only two native shrubs, Eriogonum fasciculatum and Baccharis sarothroides, colonized some disturbed sites in large numbers. 5. Although some disturbed sites had lower total soil nitrogen and percentage organic matter and higher pH than undisturbed sites, soil variables measured in this study were not sufficient to explain variations in species abundances on these sites. 6. 
Non-native annual communities observed in this study did not recover to a predisturbed state within typical successional time (< 25 years), supporting the hypothesis that altered stable states can occur if a community is pushed beyond its threshold of resilience.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54849,"Alien Flora in Grasslands Adjacent to Road and Trail Corridors in Glacier National Park, Montana (U.S.A.)",S174967,R54850,hypothesis,L108153,Disturbance,": Alien plant species have rapidly invaded and successfully displaced native species in many grasslands of western North America. Thus, the status of alien species in the nature reserve grasslands of this region warrants special attention. This study describes alien flora in nine fescue grassland study sites adjacent to three types of transportation corridors—primary roads, secondary roads, and backcountry trails—in Glacier National Park, Montana (U.S.A.). Parallel transects, placed at varying distances from the adjacent road or trail, were used to determine alien species richness and frequency at individual study sites. Fifteen alien species were recorded, two Eurasian grasses, Phleum pratense and Poa pratensis, being particularly common in most of the study sites. In sites adjacent to primary and secondary roads, alien species richness declined out to the most distant transect, suggesting that alien species are successfully invading grasslands from the roadside area. In study sites adjacent to backcountry trails, absence of a comparable decline and unexpectedly high levels of alien species richness 100 m from the trailside suggest that alien species have been introduced in off-trail areas. The results of this study imply that in spite of low levels of livestock grazing and other anthropogenic disturbances, fescue grasslands in nature reserves of this region are vulnerable to invasion by alien flora. Given the prominent role that roadsides play in the establishment and dispersal of alien flora, road construction should be viewed from a biological, rather than an engineering, perspective. 
Nature reserve managers should establish effective roadside vegetation management programs that include monitoring, quickly treating keystone alien species upon their initial occurrence in nature reserves, and creating buffer zones on roadsides leading to nature reserves. Resumen: Especies de plantas introducidas han invadido rapidamente y desplazado exitosamente especies nativas en praderas del Oeste de America del Norte. Por lo tanto el estado de las especies introducidas en las reservas de pastizales naturales de esta region exige especial atencion. Este estudio describe la flora introducida en nueve pastizales naturales de festuca, las areas de estudios son adyacentes a tres tipos de corredores de transporte—caminos primarios, caminos secundarios y senderos remotos—en el Parque Nacional “Glacier,” Montana (EE.UU). Para determinar riqueza y frecuencia de especies introducidas, se trazaron transectas paralelas, localizadas a distancias variables del camino o sendero adyacente en las areas de estudio. Se registraron quince especies introducidas. Dos pastos eurasiaticos, Phleum pratense y Poa pratensis, resultaron particularmente abundantes en la mayoria de las areas de estudio. En lugares adyacentes a caminos primarios y secundarios, la riqueza de especies introducidas disminuyo en la direccion de las transectas mas distantes, sugiriendo que las especies introducidas estan invadiendo exitosamente las praderas desde areas aledanas a caminos. En las areas de estudio adyacentes a senderos remotos no se encontro una disminucion comparable; inesperados altos niveles de riqueza de especies introducidas a 100 m de los senderos, sugieren que las especies foraneas han sido introducidas desde otras areas fuera de los senderos. Los resultados de este estudio implican que a pesar de los bajos niveles de pastoreo y otras perturbaciones antropogenicas, los pastizales de festuca en las reservas naturales de esta region son vulnerables a la invasion de la flora introducida. 
Dado el rol preponderante que juegan los caminos en el establecimiento y dispersion de la flora introducida, la construccion de rutas debe ser vista desde un punto de vista biologico, mas que desde una perspectiva meramente ingenieril. Los administradores de reservas naturales deberian establecer programas efectivos de manejo de vegetacion en los bordes de los caminos. Estos programas deberian incluir monitoreo, tratamiento rapido de especies introducidas y claves tan pronto como se detecten en las reservas naturales, y creacion de zonas de transicion en los caminos que conducen a las reservas naturales.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54855,Roads Alter the Colonization Dynamics of a Keystone Herbivore in Neotropical Savannas,S175036,R54856,hypothesis,L108210,Disturbance,"Roads can facilitate the establishment and spread of both native and exotic species. Nevertheless, the precise mechanisms facilitating this expansion are rarely known. We tested the hypothesis that dirt roads are favorable landing and nest initiation sites for founding‐queens of the leaf‐cutter ant Atta laevigata. For 2 yr, we compared the number of attempts to found new nests (colonization attempts) in dirt roads and the adjacent vegetation in a reserve of cerrado (tree‐dominated savanna) in southeastern Brazil. The number of colonization attempts in roads was 5 to 10 times greater than in the adjacent vegetation. Experimental transplants indicate that founding‐queens are more likely to establish a nest on bare soil than on soil covered with leaf‐litter, but the amount of litter covering the ground did not fully explain the preference of queens for dirt roads. Queens that landed on roads were at higher risk of predation by beetles and ants than those that landed in the adjacent vegetation. Nevertheless, greater predation in roads was not sufficient to offset the greater number of colonization attempts in this habitat. As a consequence, significantly more new colonies were established in roads than in the adjacent vegetation. Our results suggest that disturbance caused by the opening of roads could result in an increased Atta abundance in protected areas of the Brazilian Cerrado.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54857,"Case studies of the expansion of Acacia dealbata in the valley of the river Mino (Galicia, Spain)",S175058,R54858,hypothesis,L108228,Disturbance,"Aim of study: Acacia dealbata is a naturalized tree of invasive behaviour that has expanded from small plots associated with vineyards into forest ecosystems. Our main objective is to find evidence to support the notion that disturbances, particularly forest fires, are important driving factors in the current expansion of A. dealbata. Area of study: We mapped its current distribution using three study areas and assessed the temporal changes registered in forest cover in these areas of the valley of the river Mino. Material and Methods: The analyses were based on visual interpretation of aerial photographs taken in 1985 and 2003 of three 1x1 km study areas and field work. Main result: 62.4%, 48.6% and 22.2% of the surface area was covered by A. dealbata in 2003 in pure or mixed stands. Furthermore, areas composed exclusively of A. dealbata make up 33.8%, 15.2% and 5.7% of the stands. The transition matrix analyses between the two dates support our hypothesis that the areas currently covered by A. dealbata make up a greater proportion of the forest area previously classified as unwooded or open forest than those without A. dealbata cover. Both of these surface types are the result of an important impact of fire in the region. Within each area, A. dealbata is mainly located on steeper terrain, which is more affected by fires. Research highlights: A. dealbata is becoming the dominant tree species over large areas and the invasion of this species gives rise to monospecific stands, which may have important implications for future fire regimes. Keywords: Fire regime; Mimosa; plant invasion; silver wattle.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54973,How many founders for a biological invasion? Predicting introduction outcomes from propagule pressure,S175884,R54974,Measure of invasion success,L108674,Establishment,"Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of the world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment. We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54984,Global patterns of introduction effort and establishment success in birds,S176023,R54986,Measure of invasion success,L108789,Establishment,"Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55002,Factors explaining alien plant invasion success in a tropical ecosystem differ at each stage of invasion,S176206,R55003,Measure of invasion success,L108938,Establishment,"1 Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2 Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3 Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy‐feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopy‐feeding animals and with greater seed mass were more likely to be established in closed forest. 4 Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5 Synthesis. 
This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55027,"Climatic Suitability, Life-History Traits, Introduction Effort, and the Establishment and Spread of Introduced Mammals in Australia",S176479,R55028,Measure of invasion success,L109161,Establishment,"Abstract: Major progress in understanding biological invasions has recently been made by quantitatively comparing successful and unsuccessful invasions. We used such an approach to test hypotheses about the role of climatic suitability, life history, and historical factors in the establishment and subsequent spread of 40 species of mammal that have been introduced to mainland Australia. Relative to failed species, the 23 species that became established had a greater area of climatically suitable habitat available in Australia, had previously become established elsewhere, had a larger overseas range, and were introduced more times. These relationships held after phylogeny was controlled for, but successful species were also significantly more likely to be nonmigratory. A forward‐selection model included only two of the nine variables for which we had data for all species: climatic suitability and introduction effort. When the model was adjusted for phylogeny, those same two variables were included, along with previous establishment success. Of the established species, those with a larger geographic range size in Australia had a greater area of climatically suitable habitat, had traits associated with a faster population growth rate (small body size, shorter life span, lower weaning age, more offspring per year), were nonherbivorous, and had a larger overseas range size. When the model was adjusted for phylogeny, the importance of climatic suitability and the life‐history traits remained significant, but overseas range size was no longer important and species with greater introduction effort had a larger geographic range size. 
Two variables explained variation in geographic range size in a forward‐selection model: species with smaller body mass and greater longevity tended to have larger range sizes in Australia. These results mirror those from a recent analysis of exotic‐bird introductions into Australia, suggesting that, at least among vertebrate taxa, similar factors predict establishment and spread. Our approach and results are being used to assess the risks of exotic vertebrates becoming established and spreading in Australia.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55039,The Influence of Numbers Released on the Outcome of Attempts to Introduce Exotic Bird Species to New Zealand,S176616,R55040,Measure of invasion success,L109274,Establishment,"1. Information on the approximate number of individuals released is available for 47 of the 133 exotic bird species introduced to New Zealand in the late 19th and early 20th centuries. Of these, 21 species had populations surviving in the wild in 1969-79. The long interval between introduction and assessment of outcome provides a rare opportunity to examine the factors correlated with successful establishment without the uncertainty of long-term population persistence associated with studies of short duration. 2. The probability of successful establishment was strongly influenced by the number of individuals released during the main period of introductions. Eighty-three per cent of species that had more than 100 individuals released within a 10-year period became established, compared with 21% of species that had less than 100 birds released. The relationship between the probability of establishment and number of birds released was similar to that found in a previous study of introductions of exotic birds to Australia. 3. It was possible to look for a within-family influence on the success of introduction of the number of birds released in nine bird families. A positive influence was found within seven families and no effect in two families. This preponderance of families with a positive effect was statistically significant. 4. A significant effect of body weight on the probability of successful establishment was found, and negative effects of clutch size and latitude of origin. However, the statistical significance of these effects varied according to whether comparison was or was not restricted to within-family variation. 
After applying the Bonferroni adjustment to significance levels, to allow for the large number of variables and factors being considered, only the effect of the number of birds released was statistically significant. 5. No significant effects on the probability of successful establishment were apparent for the mean date of release, the minimum number of years in which birds were released, the hemisphere of origin (northern or southern) and the size and diversity of latitudinal distribution of the natural geographical range.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55061,Determinants of vertebrate invasion success in Europe and North America,S176865,R55062,Measure of invasion success,L109479,Establishment,"Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. 
Such antagonistic effects show the importance of considering the complete invasion process.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55081,Invasive species profiling? Exploring the characteristics of non-native fishes across invasion stages in California,S177094,R55082,Measure of invasion success,L109668,Establishment,"Summary 1. The global spread of non-native species is a major concern for ecologists, particularly in regards to aquatic systems. Predicting the characteristics of successful invaders has been a goal of invasion biology for decades. Quantitative analysis of species characteristics may allow invasive species profiling and assist the development of risk assessment strategies. 2. In the current analysis we developed a data base on fish invasions in catchments throughout California that distinguishes among the establishment, spread and integration stages of the invasion process, and separates social and biological factors related to invasion success. 3. Using Akaike's information criteria (AIC), logistic and multiple regression models, we show suites of biological variables, which are important in predicting establishment (parental care and physiological tolerance), spread (life span, distance from nearest native source and trophic status) and abundance (maximum size, physiological tolerance and distance from nearest native source). Two variables indicating human interest in a species (propagule pressure and prior invasion success) are predictors of successful establishment and prior invasion success is a predictor of spread and integration. 4. Despite the idiosyncratic nature of the invasion process, our results suggest some assistance in the search for characteristics of fish species that successfully transition between invasion stages.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55083,ALIEN FISHES IN CALIFORNIA WATERSHEDS: CHARACTERISTICS OF SUCCESSFUL AND FAILED INVADERS,S177116,R55084,Measure of invasion success,L109686,Establishment,"The literature on alien animal invaders focuses largely on successful invasions over broad geographic scales and rarely examines failed invasions. As a result, it is difficult to make predictions about which species are likely to become successful invaders or which environments are likely to be most susceptible to invasion. To address these issues, we developed a data set on fish invasions in watersheds throughout California (USA) that includes failed introductions. Our data set includes information from three stages of the invasion process (establishment, spread, and integration). We define seven categorical predictor variables (trophic status, size of native range, parental care, maximum adult size, physiological tolerance, distance from nearest native source, and propagule pressure) and one continuous predictor variable (prior invasion success) for all introduced species. Using an information-theoretic approach we evaluate 45 separate hypotheses derived from the invasion literature over these three sta...",TRUE,noun
R24,Ecology and Evolutionary Biology,R55095,The effect of propagule size on the invasion of an alien insect,S177259,R55096,Measure of invasion success,L109805,Establishment,"1. The movement of species from their native ranges to alien environments is a serious threat to biological diversity. The number of individuals involved in an invasion provides a strong theoretical basis for determining the likelihood of establishment of an alien species. 2. Here a field experiment was used to manipulate the critical first stages of the invasion of an alien insect, a psyllid weed biocontrol agent, Arytainilla spartiophila Forster, in New Zealand and to observe the progress of the invasion over the following 6 years. 3. Fifty-five releases were made along a linear transect 135 km long: 10 releases of two, four, 10, 30 and 90 psyllids and five releases of 270 psyllids. Six years after their original release, psyllids were present in 22 of the 55 release sites. Analysis by logistic regression showed that the probability of establishment was significantly and positively related to initial release size, but that this effect was important only during the psyllids' first year in the field. 4. Although less likely to establish, some of the releases of two and four psyllids did survive 5 years in the field. Overall, releases that survived their first year had a 96% chance of surviving thereafter, providing the release site remained secure. The probability of colony loss due to site destruction remained the same throughout the experiment, whereas the probability of natural extinction reduced steeply over time. 5. During the first year colonies were undergoing a process of establishment and, in most cases, population size decreased. After this first year, a period of exponential growth ensued. 6. A lag period was observed before the populations increased dramatically in size. 
This was thought to be due to inherent lags caused by the nature of population growth, which causes the smaller releases to appear to have a longer lag period.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55117,The comparative importance of species traits and introduction characteristics in tropical plant invasions,S177519,R55119,Measure of invasion success,L110012,Establishment,"Aim We used alien plant species introduced to a botanic garden to investigate the relative importance of species traits (leaf traits, dispersal syndrome) and introduction characteristics (propagule pressure, residence time and distance to forest) in explaining establishment success in surrounding tropical forest. We also used invasion scores from a weed risk assessment protocol as an independent measure of invasion risk and assessed differences in variables between high‐ and low‐risk species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55127,Propagule pressure and climate contribute to the displacement of Linepithema humile by Pachycondyla chinensis,S177615,R55128,Measure of invasion success,L110090,Establishment,"Identifying mechanisms governing the establishment and spread of invasive species is a fundamental challenge in invasion biology. Because species invasions are frequently observed only after the species presents an environmental threat, research identifying the contributing agents to dispersal and subsequent spread are confined to retrograde observations. Here, we use a combination of seasonal surveys and experimental approaches to test the relative importance of behavioral and abiotic factors in determining the local co-occurrence of two invasive ant species, the established Argentine ant (Linepithema humile Mayr) and the newly invasive Asian needle ant (Pachycondyla chinensis Emery). We show that the broader climatic envelope of P. chinensis enables it to establish earlier in the year than L. humile. We also demonstrate that increased P. chinensis propagule pressure during periods of L. humile scarcity contributes to successful P. chinensis early season establishment. Furthermore, we show that, although L. humile is the numerically superior and behaviorally dominant species at baits, P. chinensis is currently displacing L. humile across the invaded landscape. By identifying the features promoting the displacement of one invasive ant by another we can better understand both early determinants in the invasion process and factors limiting colony expansion and survival.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55134,"The roles of climate, phylogenetic relatedness, introduction effort, and reproductive traits in the establishment of non-native reptiles and amphibians",S177693,R55135,Measure of invasion success,L110154,Establishment,"Abstract: We developed a method to predict the potential of non‐native reptiles and amphibians (herpetofauna) to establish populations. This method may inform efforts to prevent the introduction of invasive non‐native species. We used boosted regression trees to determine whether nine variables influence establishment success of introduced herpetofauna in California and Florida. We used an independent data set to assess model performance. Propagule pressure was the variable most strongly associated with establishment success. Species with short juvenile periods and species with phylogenetically more distant relatives in regional biotas were more likely to establish than species that start breeding later and those that have close relatives. Average climate match (the similarity of climate between native and non‐native range) and life form were also important. Frogs and lizards were the taxonomic groups most likely to establish, whereas a much lower proportion of snakes and turtles established. We used results from our best model to compile a spreadsheet‐based model for easy use and interpretation. Probability scores obtained from the spreadsheet model were strongly correlated with establishment success as were probabilities predicted for independent data by the boosted regression tree model. However, the error rate for predictions made with independent data was much higher than with cross validation using training data. 
This difference in predictive power does not preclude use of the model to assess the probability of establishment of herpetofauna because (1) the independent data had no information for two variables (meaning the full predictive capacity of the model could not be realized) and (2) the model structure is consistent with the recent literature on the primary determinants of establishment success for herpetofauna. It may still be difficult to predict the establishment probability of poorly studied taxa, but it is clear that non‐native species (especially lizards and frogs) that mature early and come from environments similar to that of the introduction region have the highest probability of establishment.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55136,Correlates of Introduction Success in Exotic New Zealand Birds,S177730,R55138,Measure of invasion success,L110185,Establishment,"Whether or not a bird species will establish a new population after invasion of uncolonized habitat depends, from theory, on its life-history attributes and initial population size. Data about initial population sizes are often unobtainable for natural and deliberate avian invasions. In New Zealand, however, contemporary documentation of introduction efforts allowed us to systematically compare unsuccessful and successful invaders without bias. We obtained data for 79 species involved in 496 introduction events and used the present-day status of each species as the dependent variable in fitting multiple logistic regression models. We found that introduction efforts for species that migrated within their endemic ranges were significantly less likely to be successful than those for nonmigratory species with similar introduction efforts. Initial population size, measured as number of releases and as the minimum number of propagules liberated in New Zealand, significantly increased the probability of translocation success. A null model showed that species released more times had a higher probability per release of successful establishment. Among 36 species for which data were available, successful invaders had significantly higher natality/mortality ratios. Successful invaders were also liberated at significantly more sites. Invasion of New Zealand by exotic birds was therefore primarily related to management, an outcome that has implications for conservation biology.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55146,Propagule pressure drives establishment of introduced freshwater fish: quantitative evidence from an irrigation network,S177830,R55147,Measure of invasion success,L110267,Establishment,"Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54622,Responses of exotic plant species to fires in Pinus ponderosa forests in northern Arizona,S172268,R54624,Type of disturbance,L105906,Fire,"Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follow such disturbance events. With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54630,Factors influencing dynamics of two invasive C-4 grasses in seasonally dry Hawaiian woodlands,S172347,R54631,Type of disturbance,L105971,Fire,"The introduced C4 bunchgrass, Schizachyrium condensatum, is abundant in unburned, seasonally dry woodlands on the island of Hawaii, where it promotes the spread of fire. After fire, it is partially replaced by Melinis minutiflora, another invasive C4 grass. Seed bank surveys in unburned woodland showed that Melinis seed is present in locations without adult plants. Using a combination of germination tests and seedling outplant experiments, we tested the hypothesis that Melinis was unable to invade the unburned woodland because of nutrient and/or light limitation. We found that Melinis germination and seedling growth are depressed by the low light levels common under Schizachyrium in unburned woodland. Outplanted Melinis seedlings grew rapidly to flowering and persisted for several years in unburned woodland without nutrient additions, but only if Schizachyrium individuals were removed. Nutrients alone did not facilitate Melinis establishment. Competition between Melinis and Schizachyrium naturally occurs when individuals of both species emerge from the seed bank simultaneously, or when seedlings of one species emerge in sites already dominated by individuals of the other species. When both species are grown from seed, we found that Melinis consistently outcompetes Schizachyrium, regardless of light or nutrient treatments. When seeds of Melinis were added to pots with well-established Schizachyrium (and vice versa), Melinis eventually invaded and overgrew adult Schizachyrium under high, but not low, nutrients. By contrast, Schizachyrium could not invade established Melinis pots regardless of nutrient level. A field experiment demonstrated that Schizachyrium individuals are suppressed by Melinis in burned sites through competition for both light and nutrients.
Overall, Melinis is a dominant competitor over Schizachyrium once it becomes established, whether in a pot or in the field. We believe that the dominance of Schizachyrium, rather than Melinis, in the unburned woodland is the result of asymmetric competition due to the prior establishment of Schizachyrium in these sites. If Schizachyrium were not present, the unburned woodland could support dense stands of Melinis. Fire disrupts the priority effect of Schizachyrium and allows the dominant competitor (Melinis) to enter the system where it eventually replaces Schizachyrium through resource competition.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54667,Anthropogenic fires increase alien and native annual species in the Chilean coastal matorral,S172790,R54668,Type of disturbance,L106340,Fire,Aim We tested the hypothesis that anthropogenic fires favour the successful establishment of alien annual species to the detriment of natives in the Chilean coastal matorral.,TRUE,noun
R24,Ecology and Evolutionary Biology,R54697,Alien Grass Invasion and Fire In the Seasonal Submontane Zone of Hawaii,S173155,R54698,Type of disturbance,L106645,Fire,"Island ecosystems are notably susceptible to biological invasions (Elton 1958), and the Hawaiian islands in particular have been colonized by many introduced species (Loope and Mueller-Dombois 1989). Introduced plants now dominate extensive areas of the Hawaiian Islands, and 86 species of alien plants are presently considered to pose serious threats to Hawaiian communities and ecosystems (Smith 1985). Among the most important invasive plants are several species of tropical and subtropical grasses that use the C4 photosynthetic pathway. These grasses now dominate extensive areas of dry and seasonally dry habitats in Hawai'i. They may compete with native species, and they have also been shown to alter hydrological properties in the areas they invade (Mueller-Dombois 1973). Most importantly, alien grasses can introduce fire into areas where it was previously rare or absent (Smith 1985), thereby altering the structure and functioning of previously native-dominated ecosystems. Many of these grasses evolved in fire-affected areas and have mechanisms for surviving and recovering rapidly from fire (Vogl 1975, Christensen 1985), while most native species in Hawai'i have little background with fire (Mueller-Dombois 1981) and hence few or no such mechanisms. Consequently, grass invasion could initiate a grass/fire cycle whereby invading grasses promote fire, which in turn favors alien grasses over native species. Such a scenario has been suggested in a number of areas, including Latin America, western North America, Australia, and Hawai'i (Parsons 1972, Smith 1985, Christensen and Burrows 1986, Mack 1986, MacDonald and Frame 1988). In most of these cases, land clearing by humans initiates colonization by alien grasses, and the grass/fire cycle then leads to their persistence.
In Hawai'i and perhaps other areas, however, grass invasion occurs without any direct human intervention. Where such invasions initiate a grass/fire cycle",TRUE,noun
R24,Ecology and Evolutionary Biology,R54729,"The short-term responses of small mammals to wildfire in semiarid mallee shrubland, Australia",S173545,R54730,Type of disturbance,L106971,Fire,"Context. Wildfire is a major driver of the structure and function of mallee eucalypt- and spinifex-dominated landscapes. Understanding how fire influences the distribution of biota in these fire-prone environments is essential for effective ecological and conservation-based management. Aims. We aimed to (1) determine the effects of an extensive wildfire (118 000 ha) on a small mammal community in the mallee shrublands of semiarid Australia and (2) assess the hypothesis that the fire-response patterns of small mammals can be predicted by their life-history characteristics. Methods. Small-mammal surveys were undertaken concurrently at 26 sites: once before the fire and on four occasions following the fire (including 14 sites that remained unburnt). We documented changes in small-mammal occurrence before and after the fire, and compared burnt and unburnt sites. In addition, key components of vegetation structure were assessed at each site. Key results. Wildfire had a strong influence on vegetation structure and on the occurrence of small mammals. The mallee ningaui, Ningaui yvonneae, a dasyurid marsupial, showed a marked decline in the immediate post-fire environment, corresponding with a reduction in hummock-grass cover in recently burnt vegetation. Species richness of native small mammals was positively associated with unburnt vegetation, although some species showed no clear response to wildfire. Conclusions. Our results are consistent with the contention that mammal responses to fire are associated with their known life-history traits. The species most strongly affected by wildfire, N. yvonneae, has the most specific habitat requirements and restricted life history of the small mammals in the study area. 
The only species positively associated with recently burnt vegetation, the introduced house mouse, Mus domesticus, has a flexible life history and non-specialised resource requirements. Implications. Maintaining sources for recolonisation after large-scale wildfires will be vital to the conservation of native small mammals in mallee ecosystems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54806,Fire effects on plant diversity in serpentine vs. sandstone chaparral,S174442,R54807,Type of disturbance,L107714,Fire,"Fire contributes to the maintenance of species diversity in many plant communities, but few studies have compared its impacts in similar communities that vary in such attributes as soils and productivity. We compared how a wildfire affected plant diversity in chaparral vegetation on serpentine and sandstone soils. We hypothesized that because biomass and cover are lower in serpentine chaparral, space and light are less limiting, and therefore postfire increases in plant species diversity would be lower than in sandstone chaparral. In 40 pairs of burned and unburned 250-m² plots, we measured changes in the plant community after a fire for three years. The diversity of native and exotic species increased more in response to fire in sandstone than serpentine chaparral, at both the local (plot) and regional (whole study) scales. In serpentine compared with sandstone chaparral, specialized fire-dependent species were less prevalent, mean fire severity was lower, mean time since last fire was longer, postfire shrub recruitment was lower, and regrowth of biomass was slower. Within each chaparral type, the responses of diversity to fire were positively correlated with prefire shrub cover and with a number of measures of soil fertility. Fire severity was negatively related to the postfire change in diversity in sandstone chaparral, and unimodally related to the postfire change in diversity in serpentine chaparral. Our results suggest that the effects of fire on less productive plant communities like serpentine chaparral may be less pronounced, although longer lasting, than the effects of fire on similar but more productive communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54824,Pre-fire fuel reduction treatments influence plant communities and exotic species 9 years after a large wildfire,S174663,R54825,Type of disturbance,L107899,Fire,"Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo–Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. 
Conclusions: Pre-treatment fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. The increase in relative cover and frequency though time indicates that some species are proliferating, and continued monitoring is recommended. Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54857,"Case studies of the expansion of Acacia dealbata in the valley of the river Miño (Galicia, Spain)",S175051,R54858,Type of disturbance,L108221,Fire,"Aim of study: Acacia dealbata is a naturalized tree of invasive behaviour that has expanded from small plots associated with vineyards into forest ecosystems. Our main objective is to find evidence to support the notion that disturbances, particularly forest fires, are important driving factors in the current expansion of A. dealbata. Area of study: We mapped its current distribution using three study areas and assessed the temporal changes registered in forest cover in these areas of the valley of the river Miño. Material and Methods: The analyses were based on visual interpretation of aerial photographs taken in 1985 and 2003 of three 1x1 km study areas and field works. Main result: A 62.4%, 48.6% and 22.2% of the surface area was covered by A. dealbata in 2003 in pure or mixed stands. Furthermore, areas composed exclusively of A. dealbata make up 33.8%, 15.2% and 5.7% of the stands. The transition matrix analyses between the two dates support our hypothesis that the areas currently covered by A. dealbata make up a greater proportion of the forest area previously classified as unwooded or open forest than those without A. dealbata cover. Both of these surface types are the result of an important impact of fire in the region. Within each area, A. dealbata is mainly located on steeper terrain, which is more affected by fires. Research highlights: A. dealbata is becoming the dominant tree species over large areas and the invasion of this species gives rise to monospecific stands, which may have important implications for future fire regimes. Keywords: Fire regime; Mimosa; plant invasion; silver wattle.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53387,Fish species introductions provide novel insights into the patterns and drivers of phylogenetic structure in freshwaters,S163578,R53388,Investigated species,L98973,Fishes,"Despite long-standing interest of terrestrial ecologists, freshwater ecosystems are a fertile, yet unappreciated, testing ground for applying community phylogenetics to uncover mechanisms of species assembly. We quantify phylogenetic clustering and overdispersion of native and non-native fishes of a large river basin in the American Southwest to test for the mechanisms (environmental filtering versus competitive exclusion) and spatial scales influencing community structure. Contrary to expectations, non-native species were phylogenetically clustered and related to natural environmental conditions, whereas native species were not phylogenetically structured, likely reflecting human-related changes to the basin. The species that are most invasive (in terms of ecological impacts) tended to be the most phylogenetically divergent from natives across watersheds, but not within watersheds, supporting the hypothesis that Darwin's naturalization conundrum is driven by the spatial scale. Phylogenetic distinctiveness may facilitate non-native establishment at regional scales, but environmental filtering restricts local membership to closely related species with physiological tolerances for current environments. By contrast, native species may have been phylogenetically clustered in historical times, but species loss from contemporary populations by anthropogenic activities has likely shaped the phylogenetic signal. Our study implies that fundamental mechanisms of community assembly have changed, with fundamental consequences for the biogeography of both native and non-native species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54994,Propagule pressure and the invasion risks of non-native freshwater fishes: a case study in England,S176123,R54995,Investigated species,L108871,Fishes,"European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0·05) with increasing import intensity (log10x + 1 of the numbers of fish imported for the years 2000–2004) and with increasing consignment diversity (log10x + 1 of the numbers of consignment types imported for the years 2000–2004). The implications for policy and management are discussed.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54996,"The demography of introduction pathways, propagule pressure and occurrences of non-native freshwater fish in England",S176145,R54997,Investigated species,L108889,Fishes,"1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10×10 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. 
In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. © Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55146,Propagule pressure drives establishment of introduced freshwater fish: quantitative evidence from an irrigation network,S177832,R55147,Investigated species,L110269,Fishes,"Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56551,An emergent multiple predator effect may enhance biotic resistance in a stream fish assemblage,S187156,R56552,Investigated species,L116148,Fishes,"While two cyprinid fishes introduced from nearby drainages have become widespread and abundant in the Eel River of northwestern California, a third nonindigenous cyprinid has remained largely confined to ≤25 km of one major tributary (the Van Duzen River) for at least 15 years. The downstream limit of this species, speckled dace, does not appear to correspond with any thresholds or steep gradients in abiotic conditions, but it lies near the upstream limits of three other fishes: coastrange sculpin, prickly sculpin, and nonindigenous Sacramento pikeminnow. We conducted a laboratory stream experiment to explore the potential for emergent multiple predator effects to influence biotic resistance in this situation. Sculpins in combination with Sacramento pikeminnow caused greater mortality of speckled dace than predicted based on their separate effects. In contrast to speckled dace, 99% of sculpin survived trials with Sacramento pikeminnow, in part because sculpin usually occupied benthic cover units while Sacramento pikeminnow occupied the water column. A 10-fold difference in benthic cover availability did not detectably influence biotic interactions in the experiment. The distribution of speckled dace in the Eel River drainage may be limited by two predator taxa with very different patterns of habitat use and a shortage of alternative habitats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57000,Ecological predictions and risk assessment for alien fishes in North America,S192420,R57002,Investigated species,L120308,Fishes,"Methods of risk assessment for alien species, especially for nonagricultural systems, are largely qualitative. Using a generalizable risk assessment approach and statistical models of fish introductions into the Great Lakes, North America, we developed a quantitative approach to target prevention efforts on species most likely to cause damage. Models correctly categorized established, quickly spreading, and nuisance fishes with 87 to 94% accuracy. We then identified fishes that pose a high risk to the Great Lakes if introduced from unintentional (ballast water) or intentional pathways (sport, pet, bait, and aquaculture industries).",TRUE,noun
R24,Ecology and Evolutionary Biology,R57048,Predicting the number of ecologically harmful exotic species in an aquatic system,S192998,R57050,Investigated species,L120790,Fishes,"Most introduced species apparently have little impact on native biodiversity, but the proliferation of human vectors that transport species worldwide increases the probability of a region being affected by high‐impact invaders – i.e. those that cause severe declines in native species populations. Our study determined whether the number of high‐impact invaders can be predicted from the total number of invaders in an area, after controlling for species–area effects. These two variables are positively correlated in a set of 16 invaded freshwater and marine systems from around the world. The relationship is a simple linear function; there is no evidence of synergistic or antagonistic effects of invaders across systems. A similar relationship is found for introduced freshwater fishes across 149 regions. In both data sets, high‐impact invaders comprise approximately 10% of the total number of invaders. Although the mechanism driving this correlation is likely a sampling effect, it is not simply the proportional sampling of a constant number of repeat‐offenders; in most cases, an invader is not reported to have strong impacts on native species in the majority of regions it invades. These findings link vector activity and the negative impacts of introduced species on biodiversity, and thus justify management efforts to reduce invasion rates even where numerous invasions have already occurred.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57111,Environmental and biotic correlates to lionfish invasion success in Bahamian coral reefs,S193898,R57112,Investigated species,L121353,Fishes,"Lionfish (Pterois volitans), venomous predators from the Indo-Pacific, are recent invaders of the Caribbean Basin and southeastern coast of North America. Quantification of invasive lionfish abundances, along with potentially important physical and biological environmental characteristics, permitted inferences about the invasion process of reefs on the island of San Salvador in the Bahamas. Environmental wave-exposure had a large influence on lionfish abundance, which was more than 20 and 120 times greater for density and biomass respectively at sheltered sites as compared with wave-exposed environments. Our measurements of topographic complexity of the reefs revealed that lionfish abundance was not driven by habitat rugosity. Lionfish abundance was not negatively affected by the abundance of large native predators (or large native groupers) and was also unrelated to the abundance of medium prey fishes (total length of 5–10 cm). These relationships suggest that (1) higher-energy environments may impose intrinsic resistance against lionfish invasion, (2) habitat complexity may not facilitate the lionfish invasion process, (3) predation or competition by native fishes may not provide biotic resistance against lionfish invasion, and (4) abundant prey fish might not facilitate lionfish invasion success. The relatively low biomass of large grouper on this island could explain our failure to detect suppression of lionfish abundance and we encourage continuing the preservation and restoration of potential lionfish predators in the Caribbean. In addition, energetic environments might exert direct or indirect resistance to the lionfish proliferation, providing native fish populations with essential refuges.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57227,Native Predators Do Not Influence Invasion Success of Pacific Lionfish on Caribbean Reefs,S195227,R57228,Investigated species,L122450,Fishes,"Biotic resistance, the process by which new colonists are excluded from a community by predation from and/or competition with resident species, can prevent or limit species invasions. We examined whether biotic resistance by native predators on Caribbean coral reefs has influenced the invasion success of red lionfishes (Pterois volitans and Pterois miles), piscivores from the Indo-Pacific. Specifically, we surveyed the abundance (density and biomass) of lionfish and native predatory fishes that could interact with lionfish (either through predation or competition) on 71 reefs in three biogeographic regions of the Caribbean. We recorded protection status of the reefs, and abiotic variables including depth, habitat type, and wind/wave exposure at each site. We found no relationship between the density or biomass of lionfish and that of native predators. However, lionfish densities were significantly lower on windward sites, potentially because of habitat preferences, and in marine protected areas, most likely because of ongoing removal efforts by reserve managers. Our results suggest that interactions with native predators do not influence the colonization or post-establishment population density of invasive lionfish on Caribbean reefs.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54605,Interannual variation of fish assemblage structure in a Mediterranean River: Implications of streamflow on the dominance of native or exotic species,S172048,R54606,Type of disturbance,L105722,Flood,"Streams in mediterranean‐type climate regions are shaped by predictable seasonal events of flooding and drying over an annual cycle, but also present a strong interannual flow variation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54791,EFFECTS OF DISTURBANCE ON HERBACEOUS EXOTIC PLANT-SPECIES ON THE FLOODPLAIN OF THE POTOMAC RIVER,S174266,R54792,Type of disturbance,L107568,Flood,"The objective of this study was to investigate specific effects of disturbance on exotic species in floodplain environments and to provide baseline data on the abundance of exotic herbs in the Potomac River floodplain. Frequency of exotics generally increased with man-made disturbance (forest fragmentation and recreational use of land) and decreased with increasing flooding frequency. Species richness of exotics followed a similar pattern. Some variation was found in individual species' responses to disturbance. The spread of Alliaria officinalis and Glecoma hederacea, the most frequent exotic species, was inhibited by forest fragmentation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53387,Fish species introductions provide novel insights into the patterns and drivers of phylogenetic structure in freshwaters,S163583,R53388,Habitat,L98978,Freshwater,"Despite long-standing interest of terrestrial ecologists, freshwater ecosystems are a fertile, yet unappreciated, testing ground for applying community phylogenetics to uncover mechanisms of species assembly. We quantify phylogenetic clustering and overdispersion of native and non-native fishes of a large river basin in the American Southwest to test for the mechanisms (environmental filtering versus competitive exclusion) and spatial scales influencing community structure. Contrary to expectations, non-native species were phylogenetically clustered and related to natural environmental conditions, whereas native species were not phylogenetically structured, likely reflecting human-related changes to the basin. The species that are most invasive (in terms of ecological impacts) tended to be the most phylogenetically divergent from natives across watersheds, but not within watersheds, supporting the hypothesis that Darwin's naturalization conundrum is driven by the spatial scale. Phylogenetic distinctiveness may facilitate non-native establishment at regional scales, but environmental filtering restricts local membership to closely related species with physiological tolerances for current environments. By contrast, native species may have been phylogenetically clustered in historical times, but species loss from contemporary populations by anthropogenic activities has likely shaped the phylogenetic signal. Our study implies that fundamental mechanisms of community assembly have changed, with fundamental consequences for the biogeography of both native and non-native species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54032,Morphological variation between non-native lake- and stream-dwelling pumpkinseed Lepomis gibbosusin the Iberian Peninsula,S165284,R54033,Habitat,L100218,Freshwater,"The objective of this study was to test if morphological differences in pumpkinseed Lepomis gibbosus found in their native range (eastern North America) that are linked to feeding regime, competition with other species, hydrodynamic forces and habitat were also found among stream- and lake- or reservoir-dwelling fish in Iberian systems. The species has been introduced into these systems, expanding its range, and is presumably well adapted to freshwater Iberian Peninsula ecosystems. The results show a consistent pattern for size of lateral fins, with L. gibbosus that inhabit streams in the Iberian Peninsula having longer lateral fins than those inhabiting reservoirs or lakes. Differences in fin placement, body depth and caudal peduncle dimensions do not differentiate populations of L. gibbosus from lentic and lotic water bodies and, therefore, are not consistent with functional expectations. Lepomis gibbosus from lotic and lentic habitats also do not show a consistent pattern of internal morphological differentiation, probably due to the lack of lotic-lentic differences in prey type. Overall, the univariate and multivariate analyses show that most of the external and internal morphological characters that vary among populations do not differentiate lotic from lentic Iberian populations. The lack of expected differences may be a consequence of the high seasonal flow variation in Mediterranean streams, and the resultant low- or no-flow conditions during periods of summer drought.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54136,Invasion strategies in clonal aquatic plants: are phenotypic differences caused by phenotypic plasticity or local adaptation? ,S166502,R54137,Habitat,L101228,Freshwater,"BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters were examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. Inorganic carbon, nitrogen and phosphorus were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. All together this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54222,Adaptation vs. phenotypic plasticity in the success of a clonal invader,S167518,R54223,Habitat,L102072,Freshwater,"The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54228,Predator-induced phenotypic plasticity in the exotic cladoceran Daphnia lumholtzi,S167588,R54229,Habitat,L102130,Freshwater,"Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54711,Dam invaders: impoundments facilitate biological invasions into freshwaters,S173333,R54712,Habitat,L106795,Freshwater,"Freshwater ecosystems are at the forefront of the global biodiversity crisis, with more declining and extinct species than in terrestrial or marine environments. Hydrologic alterations and biological invasions represent two of the greatest threats to freshwater biota, yet the importance of linkages between these drivers of environmental change remains uncertain. Here, we quantitatively test the hypothesis that impoundments facilitate the introduction and establishment of aquatic invasive species in lake ecosystems. By combining data on boating activity, water body physicochemistry, and geographical distribution of five nuisance invaders in the Laurentian Great Lakes region, we show that non-indigenous species are 2.4 to 300 times more likely to occur in impoundments than in natural lakes, and that impoundments frequently support multiple invaders. Furthermore, comparisons of the contemporary and historical landscapes revealed that impoundments enhance the invasion risk of natural lakes by increasing their...",TRUE,noun
R24,Ecology and Evolutionary Biology,R54979,Geographical variability in propagule pressure and climatic suitability explain the European distribution of two highly invasive crayfish,S175956,R54980,Habitat,L108734,Freshwater,"We assess the relative contribution of human, biological and climatic factors in explaining the colonization success of two highly invasive freshwater decapods: the signal crayfish (Pacifastacus leniusculus) and the red swamp crayfish (Procambarus clarkii).",TRUE,noun
R24,Ecology and Evolutionary Biology,R54989,Effects of pre-existing submersed vegetation and propagule pressure on the invasion success of Hydrilla verticillata,S176084,R54991,Habitat,L108840,Freshwater,"Summary 1 With biological invasions causing widespread problems in ecosystems, methods to curb the colonization success of invasive species are needed. The effective management of invasive species will require an integrated approach that restores community structure and ecosystem processes while controlling propagule pressure of non-native species. 2 We tested the hypotheses that restoring native vegetation and minimizing propagule pressure of invasive species slows the establishment of an invader. In field and greenhouse experiments, we evaluated (i) the effects of a native submersed aquatic plant species, Vallisneria americana, on the colonization success of a non-native species, Hydrilla verticillata; and (ii) the effects of H. verticillata propagule density on its colonization success. 3 Results from the greenhouse experiment showed that V. americana decreased H. verticillata colonization through nutrient draw-down in the water column of closed mesocosms, although data from the field experiment, located in a tidal freshwater region of Chesapeake Bay that is open to nutrient fluxes, suggested that V. americana did not negatively impact H. verticillata colonization. However, H. verticillata colonization was greater in a treatment of plastic V. americana look-alikes, suggesting that the canopy of V. americana can physically capture H. verticillata fragments. Thus pre-emption effects may be less clear in the field experiment because of complex interactions between competitive and facilitative effects in combination with continuous nutrient inputs from tides and rivers that do not allow nutrient draw-down to levels experienced in the greenhouse. 4 Greenhouse and field tests differed in the timing, duration and density of propagule inputs. However, irrespective of these differences, propagule pressure of the invader affected colonization success except in situations when the native species could draw-down nutrients in closed greenhouse mesocosms. In that case, no propagules were able to colonize. 5 Synthesis and applications. We have shown that reducing propagule pressure through targeted management should be considered to slow the spread of invasive species. This, in combination with restoration of native species, may be the best defence against non-native species invasion. Thus a combined strategy of targeted control and promotion of native plant growth is likely to be the most sustainable and cost-effective form of invasive species management.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54994,Propagule pressure and the invasion risks of non-native freshwater fishes: a case study in England,S176128,R54995,Habitat,L108876,Freshwater,"European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagule pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0·05) with increasing import intensity (log10(x + 1) of the numbers of fish imported for the years 2000–2004) and with increasing consignment diversity (log10(x + 1) of the numbers of consignment types imported for the years 2000–2004). The implications for policy and management are discussed.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54996,"The demography of introduction pathways, propagule pressure and occurrences of non-native freshwater fish in England",S176150,R54997,Habitat,L108894,Freshwater,"1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10×10 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. © Crown copyright 2010. Reproduced with the permission of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55146,Propagule pressure drives establishment of introduced freshwater fish: quantitative evidence from an irrigation network,S177837,R55147,Habitat,L110274,Freshwater,"Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56545,Ecological traits of the amphipod invader Dikerogammarus villosus on a mesohabitat scale,S187091,R56546,Habitat,L116095,Freshwater,"Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56559,"Exotic species replacement: shifting dominance of dreissenid mussels in the Soulanges Canal, upper St. Lawrence River, Canada",S187251,R56560,Habitat,L116227,Freshwater,"Abstract During the early 1990s, 2 Eurasian macrofouling mollusks, the zebra mussel Dreissena polymorpha and the quagga mussel D. bugensis, colonized the freshwater section of the St. Lawrence River and decimated native mussel populations through competitive interference. For several years, zebra mussels dominated molluscan biomass in the river; however, quagga mussels have increased in abundance and are apparently displacing zebra mussels from the Soulanges Canal, west of the Island of Montreal. The ratio of quagga mussel biomass to zebra mussel biomass on the canal wall is correlated with depth, and quagga mussels constitute >99% of dreissenid biomass on bottom sediments. This dominance shift did not substantially affect the total dreissenid biomass, which has remained at 3 to 5 kg fresh mass /m2 on the canal walls for nearly a decade. The mechanism for this shift is unknown, but may be related to a greater bioenergetic efficiency for quaggas, which attained larger shell sizes than zebra mussels at all depths. Similar events have occurred in the lower Great Lakes where zebra mussels once dominated littoral macroinvertebrate biomass, demonstrating that a well-established and prolific invader can be replaced by another introduced species without prior extinction.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56644,Epiphytic macroinvertebrate communities on Eurasian watermilfoil (Myriophyllum spicatum) and native milfoils Myriophyllum sibericum and Myriophyllum alterniflorum in eastern North America,S188232,R56645,Habitat,L117038,Freshwater,"Aquatic macrophytes play an important role in the survival and proliferation of invertebrates in freshwater ecosystems. Epiphytic invertebrate communities may be altered through the replacement of native macrophytes by exotic macrophytes, even when the macrophytes are close relatives and have similar morphology. We sampled an invasive exotic macrophyte, Eurasian watermilfoil (Myriophyllum spicatum), and native milfoils Myriophyllum sibericum and Myriophyllum alterniflorum in four bodies of water in southern Quebec and upstate New York during the summer of 2005. Within each waterbody, we compared the abundance, diversity, and community composition of epiphytic macroinvertebrates on exotic and native Myriophyllum. In general, both M. sibericum and M. alterniflorum had higher invertebrate diversity and higher invertebrate biomass and supported more gastropods than the exotic M. spicatum. In late summer, invertebrate density tended to be higher on M. sibericum than on M. spicatum, but lower on M. alterniflorum than on M. spicatum. Our results demonstrate that M. spicatum supports macroinvertebrate communities that may differ from those on structurally similar native macrophytes, although these differences vary across sites and sampling dates. Thus, the replacement of native milfoils by M. spicatum may have indirect effects on aquatic food webs.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56648,"Implications of beaver Castor canadensis and trout introductions on native fish in the Cape Horn biosphere reserve, Chile",S188277,R56649,Habitat,L117075,Freshwater,"Abstract Invasive species threaten global biodiversity, but multiple invasions make predicting the impacts difficult because of potential synergistic effects. We examined the impact of introduced beaver Castor canadensis, brook trout Salvelinus fontinalis, and rainbow trout Oncorhynchus mykiss on native stream fishes in the Cape Horn Biosphere Reserve, Chile. The combined effects of introduced species on the structure of the native freshwater fish community were quantified by electrofishing 28 stream reaches within four riparian habitat types (forest, grassland, shrubland, and beaver-affected habitat) in 23 watersheds and by measuring related habitat variables (water velocity, substrate type, depth, and the percentage of pools). Three native stream fish species (puye Galaxias maculatus [also known as inanga], Aplochiton taeniatus, and A. zebra) were found along with brook trout and rainbow trout, but puye was the only native species that was common and widespread. The reaches affected by beaver impoundmen...",TRUE,noun
R24,Ecology and Evolutionary Biology,R56843,Does whirling disease mediate hybridization between a native and nonnative trout?,S190455,R56844,Habitat,L118863,Freshwater,"Abstract The spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America. Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext...",TRUE,noun
R24,Ecology and Evolutionary Biology,R56867,Comparisons of isotopic niche widths of some invasive and indigenous fauna in a South African river,S190721,R56868,Habitat,L119081,Freshwater,"Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (~0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12–0.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24–0.60) and Azolla filiculoides (0.09–0.33) as well as detrital Barringtonia racemosa leaves (0.00–0.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56909,Over-invasion in a freshwater ecosystem: newly introduced virile crayfish (Orconectes virilis) outcompete established invasive signal crayfish (Pacifastacus leniusculus),S191188,R56910,Habitat,L119464,Freshwater,"Abstract Biological invasions are a key threat to freshwater biodiversity, and identifying determinants of invasion success is a global conservation priority. The establishment of introduced species is predicted to be hindered by pre-existing, functionally similar invasive species. Over a five-year period we, however, find that in the River Lee (UK), recently introduced non-native virile crayfish (Orconectes virilis) increased in range and abundance, despite the presence of established alien signal crayfish (Pacifastacus leniusculus). In regions of sympatry, virile crayfish had a detrimental effect on signal crayfish abundance but not vice versa. Competition experiments revealed that virile crayfish were more aggressive than signal crayfish and outcompeted them for shelter. Together, these results provide early evidence for the potential over-invasion of signal crayfish by competitively dominant virile crayfish. Based on our results and the limited distribution of virile crayfish in Europe, we recommend that efforts to contain them within the Lee catchment be implemented immediately.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56917,"Feeding behaviour, predatory functional response and trophic interactions of the invasive Chinese mitten crab (Eriocheir sinensis) and signal crayfish (Pacifastacus leniusculus)",S191276,R56918,Habitat,L119536,Freshwater,"1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish (Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod > gastropod > periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4. Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in 15N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta. Eriocheir sinensis δ13C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus δ13C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. leniusculus, with potential indirect effects on productivity and energy flow through the community.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56919,Strong invaders are strong defenders - implications for the resistance of invaded communities,S191298,R56920,Habitat,L119554,Freshwater,"Many ecosystems receive a steady stream of non-native species. How biotic resistance develops over time in these ecosystems will depend on how established invaders contribute to subsequent resistance. If invasion success and defence capacity (i.e. contribution to resistance) are correlated, then community resistance should increase as species accumulate. If successful invaders also cause most impact (through replacing native species with low defence capacity) then the effect will be even stronger. If successful invaders instead have weak defence capacity or even facilitative attributes, then resistance should decrease with time, as proposed by the invasional meltdown hypothesis. We analysed 1157 introductions of freshwater fish in Swedish lakes and found that species' invasion success was positively correlated with their defence capacity and impact, suggesting that these communities will develop stronger resistance over time. These insights can be used to identify scenarios where invading species are expected to cause large impact.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56984,Introduction pathways and establishment rates of invasive aquatic species in Europe,S192226,R56985,Habitat,L120148,Freshwater,"Species invasion is one of the leading mechanisms of global environmental change, particularly in freshwater ecosystems. We used the Food and Agriculture Organization's Database of Invasive Aquatic...",TRUE,noun
R24,Ecology and Evolutionary Biology,R56990,Alien aquatic plant species in European countries,S192295,R56991,Habitat,L120205,Freshwater,"Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297–306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57070,A global meta-analysis of the ecological impacts of nonnative crayfish,S193236,R57071,Habitat,L120986,Freshwater,"Abstract. Nonnative crayfish have been widely introduced and are a major threat to freshwater biodiversity and ecosystem functioning. Despite documentation of the ecological effects of nonnative crayfish from >3 decades of case studies, no comprehensive synthesis has been done to test quantitatively for their general or species-specific effects on recipient ecosystems. We provide the first global meta-analysis of the ecological effects of nonnative crayfish under experimental settings to compare effects among species and across levels of ecological organization. Our meta-analysis revealed strong, but variable, negative ecological impacts of nonnative crayfish with strikingly consistent effects among introduced species. In experimental settings, nonnative crayfish generally affect all levels of freshwater food webs. Nonnative crayfish reduce the abundance of basal resources like aquatic macrophytes, prey on invertebrates like snails and mayflies, and reduce abundances and growth of amphibians and fish, but they do not consistently increase algal biomass. Nonnative crayfish tend to have larger positive effects on growth of algae and larger negative effects on invertebrates and fish than native crayfish, but effect sizes vary considerably. Our study supports the assessment of crayfish as strong interactors in food webs that have significant effects across native taxa via polytrophic, generalist feeding habits. Nonnative crayfish species identity may be less important than extrinsic attributes of the recipient ecosystems in determining effects of nonnative crayfish. We identify some understudied and emerging nonnative crayfish that should be studied further and suggest expanding research to encompass more comparisons of native vs nonnative crayfish and different geographic regions. The consistent and general negative effects of nonnative crayfish warrant efforts to discourage their introduction beyond native ranges.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57075,"How well do we understand the impacts of alien species on ecosystem services? A pan-European, cross-taxa assessment",S193338,R57079,Habitat,L121072,Freshwater,"Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates – in terrestrial, freshwater, and marine environments – on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect “supporting”, “provisioning”, “regulating”, and “cultural” services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57207,Diversity and biomass of native macrophytes are negatively related to dominance of an invasive Poaceae in Brazilian sub-tropical streams,S194992,R57208,Habitat,L122255,Freshwater,"Besides exacerbated exploitation, pollution, flow alteration and habitats degradation, freshwater biodiversity is also threatened by biological invasions. This paper addresses how native aquatic macrophyte communities are affected by the non-native species Urochloa arrecta, a current successful invader in Brazilian freshwater systems. We compared the native macrophytes colonizing patches dominated and non-dominated by this invader species. We surveyed eight streams in Northwest Parana State (Brazil). In each stream, we recorded native macrophytes' richness and biomass in sites where U. arrecta was dominant and in sites where it was not dominant or absent. No native species were found in seven, out of the eight investigated sites where U. arrecta was dominant. Thus, we found higher native species richness, Shannon index and native biomass values in sites without dominance of U. arrecta than in sites dominated by this invader. Although difficult to conclude about causes of such differences, we infer that the elevated biomass production by this grass might be the primary reason for alterations in invaded environments and for the consequent impacts on macrophytes' native communities. However, biotic resistance offered by native richer sites could be an alternative explanation for our results. To mitigate potential impacts and to prevent future environmental perturbations, we propose mechanical removal of the invasive species and maintenance or restoration of riparian vegetation, for freshwater ecosystems have vital importance for the maintenance of ecological services and biodiversity and should be preserved.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57225,Native and introduced fish species richness in Chilean Patagonian lakes: inferences on invasion mechanisms using salmonid-free lakes,S195204,R57226,Habitat,L122431,Freshwater,"Geographic patterns of species richness have been linked to many physical and biological drivers. In this study, we document and explain gradients of species richness for native and introduced freshwater fish in Chilean lakes. We focus on the role of the physical environment to explain native richness patterns. For patterns of introduced salmonid richness and dominance, we also examine the biotic resistance and human activity hypotheses. We were particularly interested in identifying the factors that best explain the persistence of salmonid‐free lakes in Patagonia.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57263,Fish invasions in the world's river systems: When natural processes are blurred by human activities,S195656,R57264,Habitat,L122807,Freshwater,"Because species invasions are a principal driver of the human-induced biodiversity crisis, the identification of the major determinants of global invasions is a prerequisite for adopting sound conservation policies. Three major hypotheses, which are not necessarily mutually exclusive, have been proposed to explain the establishment of non-native species: the “human activity” hypothesis, which argues that human activities facilitate the establishment of non-native species by disturbing natural landscapes and by increasing propagule pressure; the “biotic resistance” hypothesis, predicting that species-rich communities will readily impede the establishment of non-native species; and the “biotic acceptance” hypothesis, predicting that environmentally suitable habitats for native species are also suitable for non-native species. We tested these hypotheses and report here a global map of fish invasions (i.e., the number of non-native fish species established per river basin) using an original worldwide dataset of freshwater fish occurrences, environmental variables, and human activity indicators for 1,055 river basins covering more than 80% of Earth's surface. First, we identified six major invasion hotspots where non-native species represent more than a quarter of the total number of species. According to the World Conservation Union, these areas are also characterised by the highest proportion of threatened fish species. Second, we show that the human activity indicators account for most of the global variation in non-native species richness, which is highly consistent with the “human activity” hypothesis. In contrast, our results do not provide support for either the “biotic acceptance” or the “biotic resistance” hypothesis. We show that the biogeography of fish invasions matches the geography of human impact at the global scale, which means that natural processes are blurred by human activities in driving fish invasions in the world's river systems. In view of our findings, we fear massive invasions in developing countries with a growing economy as already experienced in developed countries. Anticipating such potential biodiversity threats should therefore be a priority.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57748,"Cryptic seedling herbivory by nocturnal introduced generalists impacts survival, performance of native and exotic plants",S200176,R57750,Release of which kind of enemies?,L126044,Generalists,"Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54222,Adaptation vs. phenotypic plasticity in the success of a clonal invader,S167514,R54223,Specific traits,L102068,Growth,"The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56917,"Feeding behaviour, predatory functional response and trophic interactions of the invasive Chinese mitten crab (Eriocheir sinensis) and signal crayfish (Pacifastacus leniusculus)",S191271,R56918,Outcome of interaction,L119531,Impact,"1. Freshwaters are subject to particularly high rates of species introductions; hence, invaders increasingly co-occur and may interact to enhance impacts on ecosystem structure and function. As trophic interactions are a key mechanism by which invaders influence communities, we used a combination of approaches to investigate the feeding preferences and community impacts of two globally invasive large benthic decapods that co-occur in freshwaters: the signal crayfish (Pacifastacus leniusculus) and Chinese mitten crab (Eriocheir sinensis). 2. In laboratory preference tests, both consumed similar food items, including chironomids, isopods and the eggs of two coarse fish species. In a comparison of predatory functional responses with a native crayfish (Austropotamobius pallipes), juvenile E. sinensis had a greater predatory intensity than the native A. pallipes on the keystone shredder Gammarus pulex, and also displayed a greater preference than P. leniusculus for this prey item. 3. In outdoor mesocosms (n = 16) used to investigate community impacts, the abundance of amphipods, isopods, chironomids and gastropods declined in the presence of decapods, and a decapod >gastropod >periphyton trophic cascade was detected when both species were present. Eriocheir sinensis affected a wider range of animal taxa than P. leniusculus. 4. Stable-isotope and gut-content analysis of wild-caught adult specimens of both invaders revealed a wide and overlapping range of diet items including macrophytes, algae, terrestrial detritus, macroinvertebrates and fish. Both decapods were similarly enriched in 15N and occupied the same trophic level as Ephemeroptera, Odonata and Notonecta. Eriocheir sinensis δ13C values were closely aligned with macrophytes indicating a reliance on energy from this basal resource, supported by evidence of direct consumption from gut contents. Pacifastacus leniusculus δ13C values were intermediate between those of terrestrial leaf litter and macrophytes, suggesting reliance on both allochthonous and autochthonous energy pathways. 5. Our results suggest that E. sinensis is likely to exert a greater per capita impact on the macroinvertebrate communities in invaded systems than P. leniusculus, with potential indirect effects on productivity and energy flow through the community.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56579,"Invasive mutualisms and the structure of plant-pollinator interactions in the temperate forests of north-west Patagonia, Argentina",S187484,R56580,Ecological Level of evidence,L116419,Individual,"1 Alien species may form plant–animal mutualistic complexes that contribute to their invasive potential. Using multivariate techniques, we examined the structure of a plant–pollinator web comprising both alien and native plants and flower visitors in the temperate forests of north‐west Patagonia, Argentina. Our main objective was to assess whether plant species origin (alien or native) influences the composition of flower visitor assemblages. We also examined the influence of other potential confounding intrinsic factors such as flower symmetry and colour, and extrinsic factors such as flowering time, site and habitat disturbance. 2 Flowers of alien and native plant species were visited by a similar number of species and proportion of insects from different orders, but the composition of the assemblages of flower‐visiting species differed between alien and native plants. 3 The influence of plant species origin on the composition of flower visitor assemblages persisted after accounting for other significant factors such as flowering time, bearing red corollas, and habitat disturbance. This influence was at least in part determined by the fact that alien flower visitors were more closely associated with alien plants than with native plants. The main native flower visitors were, on average, equally associated with native and alien plant species. 4 In spite of representing a minor fraction of total species richness (3.6% of all species), alien flower visitors accounted for > 20% of all individuals recorded on flowers. Thus, their high abundance could have a significant impact in terms of pollination. 5 The mutualistic web of alien plants and flower‐visiting insects is well integrated into the overall community‐wide pollination web. However, in addition to their use of the native biota, invasive plants and flower visitors may benefit from differential interactions with their alien partners. The existence of these invader complexes could contribute to the spread of aliens into novel environments.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56608,Facilitation and interference underlying the association between the woody invaders Pyracantha angustifolia and Ligustrum lucidum,S187822,R56609,Ecological Level of evidence,L116699,Individual,"ABSTRACT Questions: 1. Is there any post-dispersal positive effect of the exotic shrub Pyracantha angustifolia on the success of Ligustrum lucidum seedlings, as compared to the effect of the native Condalia montana or the open herbaceous patches between shrubs? 2. Is the possible facilitation by Pyracantha and/or Condalia related to differential emergence, growth, or survival of Ligustrum seedlings under their canopies? Location: Cordoba, central Argentina. Methods: We designed three treatments, in which ten mature individuals of Pyracantha, ten of the dominant native shrub Condalia montana, and ten patches without shrub cover were involved. In each treatment we planted seeds and saplings of Ligustrum collected from nearby natural populations. Seedlings emerging from the planted seeds were harvested after one year to measure growth. Survival of the transplanted saplings was recorded every two month during a year. Half of the planted seeds and transplanted saplings were cage-protected from rodents. Results...",TRUE,noun
R24,Ecology and Evolutionary Biology,R56626,"Enemy release or invasional meltdown? Deer preference for exotic and native trees on Isla Victoria, Argentina",S188030,R56627,Ecological Level of evidence,L116871,Individual,"How interactions between exotic species affect invasion impact is a fundamental issue on both theoretical and applied grounds. Exotics can facilitate establishment and invasion of other exotics (invasional meltdown) or they can restrict them by re-establishing natural population control (as predicted by the enemy- release hypothesis). We studied forest invasion on an Argentinean island where 43 species of Pinaceae, including 60% of the world's recorded invasive Pinaceae, were introduced c. 1920 but where few species are colonizing pristine areas. In this area two species of Palearctic deer, natural enemies of most Pinaceae, were introduced 80 years ago. Expecting deer to help to control the exotics, we conducted a cafeteria experiment to assess deer preferences among the two dominant native species (a conifer, Austrocedrus chilensis, and a broadleaf, Nothofagus dombeyi) and two widely introduced exotic tree species (Pseudotsuga menziesii and Pinus ponderosa). Deer browsed much more intensively on native species than on exotic conifers, in terms of number of individuals attacked and degree of browsing. Deer preference for natives could potentially facilitate invasion by exotic pines. However, we hypothesize that the low rates of invasion currently observed can result at least partly from high densities of exotic deer, which, despite their preference for natives, can prevent establishment of both native and exotic trees. Other factors, not mutually exclusive, could produce the observed pattern. Our results underscore the difficulty of predicting how one introduced species will effect impact of another one.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56638,Positive interactions among plant species for pollinator service: assessing the 'magnet species' concept with invasive species,S188167,R56639,Ecological Level of evidence,L116984,Individual,"Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so called ‘magnet-species’). Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction result in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when grow associated with shrubs of the invasive Lupinus arboreus and when grow alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an ‘invasional meltdown’. The magnet effect of Lupinus on Carduus found in this study seems to be one the first examples of indirect facilitative interactions via increased pollination among invasive species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56666,Preferences of the Ponto-Caspian amphipod Dikerogammarus haemobaphes for living zebra mussels,S188488,R56667,Ecological Level of evidence,L117249,Individual,"A Ponto-Caspian amphipod Dikerogammarus haemobaphes has recently invaded European waters. In the recipient area, it encountered Dreissena polymorpha, a habitat-forming bivalve, co-occurring with the gammarids in their native range. We assumed that interspecific interactions between these two species, which could develop during their long-term co-evolution, may affect the gammarid behaviour in novel areas. We examined the gammarid ability to select a habitat containing living mussels and searched for cues used in that selection. We hypothesized that they may respond to such traits of a living mussel as byssal threads, activity (e.g. valve movements, filtration) and/or shell surface properties. We conducted the pairwise habitat-choice experiments in which we offered various objects to single gammarids in the following combinations: (1) living mussels versus empty shells (the general effect of living Dreissena); (2) living mussels versus shells with added byssal threads and shells with byssus versus shells without it (the effect of byssus); (3) living mussels versus shells, both coated with nail varnish to neutralize the shell surface (the effect of mussel activity); (4) varnished versus clean living mussels (the effect of shell surface); (5) varnished versus clean stones (the effect of varnish). We checked the gammarid positions in the experimental tanks after 24 h. The gammarids preferred clean living mussels over clean shells, regardless of the presence of byssal threads under the latter. They responded to the shell surface, exhibiting preferences for clean mussels over varnished individuals. They were neither affected by the presence of byssus nor by mussel activity. The ability to detect and actively select zebra mussel habitats may be beneficial for D. haemobaphes and help it establish stable populations in newly invaded areas.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56746,Ecology of brushtail possums in New Zealand dryland ecosystem,S189378,R56747,Ecological Level of evidence,L117979,Individual,"The introduced brushtail possum (Trichosurus vulpecula) is a major environmental and agricultural pest in New Zealand but little information is available on the ecology of possums in drylands, which cover c. 19% of the country. Here, we describe a temporal snapshot of the diet and feeding preferences of possums in a dryland habitat in New Zealand's South Island, as well as movement patterns and survival rates. We also briefly explore spatial patterns in capture rates. We trapped 279 possums at an average capture rate of 9 possums per 100 trap nights. Capture rates on individual trap lines varied from 0 to 38%, decreased with altitude, and were highest in the eastern (drier) parts of the study area. Stomach contents were dominated by forbs and sweet briar (Rosa rubiginosa); both items were consumed preferentially relative to availability. Possums also strongly preferred crack willow (Salix fragilis), which was uncommon in the study area and consumed only occasionally, but in large amounts. Estimated activity areas of 29 possums radio-tracked for up to 12 months varied from 0.2 to 19.5 ha (mean 5.1 ha). Nine possums (4 male, 5 female) undertook dispersal movements (≥1000 m), the longest of which was 4940 m. The most common dens of radio-collared possums were sweet briar shrubs, followed by rock outcrops. Estimated annual survival was 85% for adults and 54% for subadults. Differences between the diets, activity areas and den use of possums in this study and those in forest or farmland most likely reflect differences in availability and distribution of resources. Our results suggest that invasive willow and sweet briar may facilitate the existence of possums by providing abundant food and shelter. In turn, possums may facilitate the spread of weeds by acting as a seed vector. This basic ecological information will be useful in modelling and managing the impacts of possum populations in drylands.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54120,"Variation in morphological characters of two invasive leafminers, Liriomyza huidobrensis and L. sativae, across a tropical elevation gradient",S166310,R54121,Investigated species,L101068,Insects,"Abstract Changes in morphological traits along elevation and latitudinal gradients in ectotherms are often interpreted in terms of the temperature-size rule, which states that the body size of organisms increases under low temperatures, and is therefore expected to increase with elevation and latitude. However other factors like host plant might contribute to spatial patterns in size as well, particularly for polyphagous insects. Here elevation patterns for trait size and shape in two leafminer species are examined, Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae) and L. sativae Blanchard, along a tropical elevation gradient in Java, Indonesia. Adult leafminers were trapped from different locations in the mountainous area of Dieng in the province of Central Java. To separate environmental versus genetic effects, L. huidobrensis originating from 1378 m and 2129 m ASL were reared in the laboratory for five generations. Size variation along the elevation gradient was only found in L. huidobrensis and this followed expectations based on the temperature-size rule. There were also complex changes in wing shape along the gradient. Morphological differences were influenced by genetic and environmental effects. Findings are discussed within the context of adaptation to different elevations in the two species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56537,Widespread association of the invasive ant Solenopsis invicta with an invasive mealybug,S186991,R56538,Investigated species,L116011,Insects,"Factors such as aggressiveness and adaptation to disturbed environments have been suggested as important characteristics of invasive ant species, but diet has rarely been considered. However, because invasive ants reach extraordinary densities at introduced locations, increased feeding efficiency or increased exploitation of new foods should be important in their success. Earlier studies suggest that honeydew produced by Homoptera (e.g., aphids, mealybugs, scale insects) may be important in the diet of the invasive ant species Solenopsis invicta. To determine if this is the case, we studied associations of S. invicta and Homoptera in east Texas and conducted a regional survey for such associations throughout the species' range in the southeast United States. In east Texas, we found that S. invicta tended Ho- moptera extensively and actively constructed shelters around them. The shelters housed a variety of Homoptera whose frequency differed according to either site location or season, presumably because of differences in host plant availability and temperature. Overall, we estimate that the honeydew produced in Homoptera shelters at study sites in east Texas could supply nearly one-half of the daily energetic requirements of an S. invicta colony. Of that, 70% may come from a single species of invasive Homoptera, the mealybug Antonina graminis. Homoptera shelters were also common at regional survey sites and A. graminis occurred in shelters at nine of 11 survey sites. A comparison of shelter densities at survey sites and in east Texas suggests that our results from east Texas could apply throughout the range of S. invicta in the southeast United States. Antonina graminis may be an ex- ceptionally important nutritional resource for S. invicta in the southeast United States. While it remains largely unstudied, the tending of introduced or invasive Homoptera also appears important to other, and perhaps all, invasive ant species. Exploitative or mutually beneficial associations that occur between these insects may be an important, previously unrecognized factor promoting their success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56547,Invasional 'meltdown' on an oceanic island,S187111,R56548,Investigated species,L116111,Insects,"Islands can serve as model systems for understanding how biological invasions affect community structure and ecosystem function. Here we show invasion by the alien crazy ant Anoplolepis gracilipes causes a rapid, catastrophic shift in the rain forest ecosystem of a tropical oceanic island, affecting at least three trophic levels. In invaded areas, crazy ants extirpate the red land crab, the dominant endemic consumer on the forest floor. In doing so, crazy ants indirectly release seedling recruitment, enhance species richness of seedlings, and slow litter breakdown. In the forest canopy, new associations between this invasive ant and honeydew-secreting scale insects accelerate and diversify impacts. Sustained high densities of foraging ants on canopy trees result in high population densities of host-generalist scale insects and growth of sooty moulds, leading to canopy dieback and even deaths of canopy trees. The indirect fallout from the displacement of a native keystone species by an ant invader, itself abetted by introduced/cryptogenic mutualists, produces synergism in impacts to precipitate invasional meltdown in this system.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56603,Collapse of ant-scale mutualism in a rainforest on Christmas Island,S187750,R56604,Investigated species,L116638,Insects,"Positive interactions play a widespread role in facilitating biological invasions. Here we use a landscape-scale ant exclusion experiment to show that widespread invasion of tropical rainforest by honeydew-producing scale insects on Christmas Island (Indian Ocean) has been facilitated by positive interactions with the invasive ant Anoplolepis gracilipes. Toxic bait was used to exclude A. gracilipes from large (9-35 ha) forest patches. Within 11 weeks, ant activity on the ground and on trunks had been reduced by 98-100%, while activity on control plots remained unchanged. The exclusion of ants caused a 100% decline in the density of scale insects in the canopies of three rainforest trees in 12 months (Inocarpus fagifer, Syzygium nervosum and Barringtonia racemosa), but on B. racemosa densities of scale insects also declined in control plots, resulting in no effect of ant exclusion on this species. This study demonstrates the role of positive interactions in facilitating biological invasions, and supports recent models calling for greater recognition of the role of positive interactions in structuring ecological communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56630,Exploitative competition between invasive herbivores benefits a native host plant,S188068,R56631,Investigated species,L116902,Insects,"Although biological invasions are of considerable concern to ecologists, relatively little attention has been paid to the potential for and consequences of indirect interactions between invasive species. Such interactions are generally thought to enhance invasives' spread and impact (i.e., the ""invasional meltdown"" hypothesis); however, exotic species might also act indirectly to slow the spread or blunt the impact of other invasives. On the east coast of the United States, the invasive hemlock woolly adelgid (Adelges tsugae, HWA) and elongate hemlock scale (Fiorinia externa, EHS) both feed on eastern hemlock (Tsuga canadensis). Of the two insects, HWA is considered far more damaging and disproportionately responsible for hemlock mortality. We describe research assessing the interaction between HWA and EHS, and the consequences of this interaction for eastern hemlock. We conducted an experiment in which uninfested hemlock branches were experimentally infested with herbivores in a 2 x 2 factorial design (either, both, or neither herbivore species). Over the 2.5-year course of the experiment, each herbivore's density was approximately 30% lower in mixed- vs. single-species treatments. Intriguingly, however, interspecific competition weakened rather than enhanced plant damage: growth was lower in the HWA-only treatment than in the HWA + EHS, EHS-only, or control treatments. Our results suggest that, for HWA-infested hemlocks, the benefit of co-occurring EHS infestations (reduced HWA density) may outweigh the cost (increased resource depletion).",TRUE,noun
R24,Ecology and Evolutionary Biology,R56875,Mutualism between fire ants and mealybugs reduces lady beetle predation,S190807,R56876,Investigated species,L119151,Insects,"ABSTRACT Solenopsis invicta Buren is an important invasive pest that has a negative impact on biodiversity. However, current knowledge regarding the ecological effects of its interaction with honeydew-producing hemipteran insects is inadequate. To partially address this problem, we assessed whether the interaction between the two invasive species S. invicta and Phenacoccus solenopsis Tinsley mediated predation of P. solenopsis by Propylaea japonica Thunberg lady beetles using field investigations and indoor experiments. S. invicta tending significantly reduced predation by the Pr. japonica lady beetle, and this response was more pronounced for lady beetle larvae than for adults. A field investigation showed that the species richness and quantity of lady beetle species in plots with fire ants were much lower than in those without fire ants. In an olfaction bioassay, lady beetles preferred to move toward untended rather than tended mealybugs. Overall, these results suggest that mutualism between S. invicta and P. solenopsis may have a serious impact on predation of P. solenopsis by lady beetles, which could promote growth of P. solenopsis populations.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56949,Biological control attempts by introductions against pest insects in the field in Canada,S191811,R56950,Investigated species,L119803,Insects,"This is an analysis of the attempts to colonize at least 208 species of parasites and predators on about 75 species of pest insects in the field in Canada. There was colonization by about 10% of the species that were introduced in totals of under 5,000 individuals, 40% of those introduced in totals of between 5,000 and 31,200, and 78% of those introduced in totals of over 31,200. Indications exist that initial colonizations may be favoured by large releases and by selection of release sites that are semi-isolated and not ecologically complex but that colonizations are hindered when the target species differs taxonomically from the species from which introduced agents originated and when the release site lacks factors needed for introduced agents to survive or when it is subject to potentially-avoidable physical disruptions. There was no evidence that the probability of colonization was increased when the numbers of individuals released were increased by laboratory propagation. About 10% of the attempts were successful from the economic viewpoint. Successes may be overestimated if the influence of causes of coincidental, actual, or supposed changes in pest abundance are overlooked. Most of the successes were by two or more kinds of agents of which at least one attacked species additional to the target pests. Unplanned consequences of colonization have not been sufficiently harmful to warrant precautions to the extent advocated by Turnbull and Chant but are sufficiently potentially dangerous to warrant the restriction of all colonization attempts to biological control experts. 
It is concluded that most failures were caused by inadequate procedures, rather than by any weaknesses inherent in the method, that those inadequacies can be avoided in the future, and therefore that biological control of pest insects has much unrealized potential for use in Canada.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57067,The role of opportunity in the unintentional introduction of nonnative ants,S193210,R57069,Investigated species,L120964,Insects,"A longstanding goal in the study of biological invasions is to predict why some species are successful invaders, whereas others are not. To understand this process, detailed information is required concerning the pool of species that have the opportunity to become established. Here we develop an extensive database of ant species unintentionally transported to the continental United States and use these data to test how opportunity and species-level ecological attributes affect the probability of establishment. This database includes an amount of information on failed introductions that may be unparalleled for any group of unintentionally introduced insects. We found a high diversity of species (232 species from 394 records), 12% of which have become established in the continental United States. The probability of establishment increased with the number of times a species was transported (propagule pressure) but was also influenced by nesting habit. Ground nesting species were more likely to become established compared with arboreal species. These results highlight the value of developing similar databases for additional groups of organisms transported by humans to obtain quantitative data on the first stages of the invasion process: opportunity and transport.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57159,"Ecological biogeography of southern oceanic islands: species-area relationships, human impacts, and conservation",S194436,R57160,Investigated species,L121795,Insects,"Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)—vascular plants; distance (nearest continent) and vascular plant species richness (75%)—insects; area and chlorophyll concentration (65%)—seabirds; and indigenous insect species richness and age (73%)—land birds. Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly.",TRUE,noun
R24,Ecology and Evolutionary Biology,R52116,Are competitive effects of native species on an invader mediated by water availability?,S159121,R52117,type of experiment,L95950,Lab,"Question Climate change processes could influence the dynamics of biotic interactions such as plant competition, especially in response to disturbance phenomena such as invasional processes. Are competitive effects of native species on an invader mediated by water availability? Location Glasshouse facility, New South Wales, Australia. Methods We constructed competitive hierarchies for a representative suite of species from coastal dune communities that have been invaded by the Asteraceae shrub, bitou (Chrysanthemoides monilifera subsp. rotundata). We used a comparative phytometer approach, where the invader species was grown with or without a suite of native species in glasshouse trials. This was used to construct competition hierarchies under two water stress conditions: non-droughted and droughted. The treatments were designed to simulate current and potential future water availability respectively. Results We found that the invader experienced fewer competitive effects from some native species under water stress, particularly with regard to below-ground biomass effects. Native species were often poor competitors with the invader, despite their adaptation to periodic water stress in native coastal environments. Of the native species with significant competitive effects on the invader, functionally similar shrub species were the most effective competitors, as expressed in below-ground biomass. The relative position of species in the hierarchy was consistent across water treatments based on below-ground bitou biomass, but was contingent on water treatment when based on above-ground bitou biomass. Conclusions The competitive effects of native species on an invader are affected by water stress. 
While the direction of response to water stress is species-specific, many species have small competitive effects on the invader under droughted conditions. This could allow an increase in invader dominance with climate change.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53314,An experimental test of Darwin's naturalization hypothesis,S163048,R53317,type of experiment,L98533,Lab,"One of the oldest ideas in invasion biology, known as Darwin’s naturalization hypothesis, suggests that introduced species are more successful in communities in which their close relatives are absent. We conducted the first experimental test of this hypothesis in laboratory bacterial communities varying in phylogenetic relatedness between resident and invading species with and without a protist bacterivore. As predicted, invasion success increased with phylogenetic distance between the invading and the resident bacterial species in both the presence and the absence of protistan bacterivory. The frequency of successful invader establishment was best explained by average phylogenetic distance between the invader and all resident species, possibly indicating limitation by the availability of the unexploited niche (i.e., organic substances in the medium capable of supporting the invader growth); invader abundance was best explained by phylogenetic distance between the invader and its nearest resident relative, possibly indicating limitation by the availability of the unexploited optimal niche (i.e., the subset of organic substances supporting the best invader growth). These results were largely driven by one resident bacterium (a subspecies of Serratia marcescens) posting the strongest resistance to the alien bacterium (another subspecies of S. marcescens). Overall, our findings support phylogenetic relatedness as a useful predictor of species invasion success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54040,Architectural strategies of Rhamnus cathartica (Rhamnaceae) in relation to canopy openness,S165386,R54041,type of experiment,L100303,Lab,"While phenotypic plasticity is considered the major means that allows plant to cope with environmental heterogeneity, scant information is available on phenotypic plasticity of the whole-plant architecture in relation to ontogenic processes. We performed an architectural analysis to gain an understanding of the structural and ontogenic properties of common buckthorn (Rhamnus cathartica L., Rhamnaceae) growing in the understory and under an open canopy. We found that ontogenic effects on growth need to be calibrated if a full description of phenotypic plasticity is to be obtained. Our analysis pointed to three levels of organization (or nested structural units) in R. cathartica. Their modulation in relation to light conditions leads to the expression of two architectural strategies that involve sets of traits known to confer competitive advantage in their respective environments. In the understory, the plant develops a tree-like form. Its strategy here is based on restricting investment in exploitation str...",TRUE,noun
R24,Ecology and Evolutionary Biology,R54056,"Seasonal Photoperiods Alter Developmental Time and Mass of an Invasive Mosquito, Aedes albopictus (Diptera: Culicidae), Across Its North-South Range in the United States",S165575,R54057,type of experiment,L100460,Lab,"ABSTRACT The Asian tiger mosquito, Aedes albopictus (Skuse), is perhaps the most successful invasive mosquito species in contemporary history. In the United States, Ae. albopictus has spread from its introduction point in southern Texas to as far north as New Jersey (i.e., a span of ≈14° latitude). This species experiences seasonal constraints in activity because of cold temperatures in winter in the northern United States, but is active year-round in the south. We performed a laboratory experiment to examine how life-history traits of Ae. albopictus from four populations (New Jersey [39.4° N], Virginia [38.6° N], North Carolina [35.8° N], Florida [27.6° N]) responded to photoperiod conditions that mimic approaching winter in the north (short static daylength, short diminishing daylength) or relatively benign summer conditions in the south (long daylength), at low and high larval densities. Individuals from northern locations were predicted to exhibit reduced development times and to emerge smaller as adults under short daylength, but be larger and take longer to develop under long daylength. Life-history traits of southern populations were predicted to show less plasticity in response to daylength because of low probability of seasonal mortality in those areas. Males and females responded strongly to photoperiod regardless of geographic location, being generally larger but taking longer to develop under the long daylength compared with short day lengths; adults of both sexes were smaller when reared at low larval densities. Adults also differed in mass and development time among locations, although this effect was independent of density and photoperiod in females but interacted with density in males. 
Differences between male and female mass and development times was greater in the long photoperiod suggesting differences between the sexes in their reaction to different photoperiods. This work suggests that Ae. albopictus exhibits sex-specific phenotypic plasticity in life-history traits matching variation in important environmental variables.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54112,VARIATION IN PHENOTYPIC PLASTICITY AMONG NATIVE AND INVASIVE POPULATIONS OF ALLIARIA PETIOLATA,S166225,R54113,type of experiment,L100998,Lab,"Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54120,"Variation in morphological characters of two invasive leafminers, Liriomyza huidobrensis and L. sativae, across a tropical elevation gradient",S166318,R54121,type of experiment,L101075,Lab,"Abstract Changes in morphological traits along elevation and latitudinal gradients in ectotherms are often interpreted in terms of the temperature-size rule, which states that the body size of organisms increases under low temperatures, and is therefore expected to increase with elevation and latitude. However other factors like host plant might contribute to spatial patterns in size as well, particularly for polyphagous insects. Here elevation patterns for trait size and shape in two leafminer species are examined, Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae) and L. sativae Blanchard, along a tropical elevation gradient in Java, Indonesia. Adult leafminers were trapped from different locations in the mountainous area of Dieng in the province of Central Java. To separate environmental versus genetic effects, L. huidobrensis originating from 1378 m and 2129 m ASL were reared in the laboratory for five generations. Size variation along the elevation gradient was only found in L. huidobrensis and this followed expectations based on the temperature-size rule. There were also complex changes in wing shape along the gradient. Morphological differences were influenced by genetic and environmental effects. Findings are discussed within the context of adaptation to different elevations in the two species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54174,"Growth, water relations, and stomatal development of Caragana korshinskii Kom. and Zygophyllum xanthoxylum (Bunge) Maxim. seedlings in response to water deficits",S166949,R54175,type of experiment,L101598,Lab,"Abstract The selection and introduction of drought tolerant species is a common method of restoring degraded grasslands in arid environments. This study investigated the effects of water stress on growth, water relations, Na+ and K+ accumulation, and stomatal development in the native plant species Zygophyllum xanthoxylum (Bunge) Maxim., and an introduced species, Caragana korshinskii Kom., under three watering regimes. Moderate drought significantly reduced pre‐dawn water potential, leaf relative water content, total biomass, total leaf area, above‐ground biomass, total number of leaves and specific leaf area, but it increased the root/total weight ratio (0.23 versus 0.33) in C. korshinskii. Only severe drought significantly affected water status and growth in Z. xanthoxylum. In any given watering regime, a significantly higher total biomass was observed in Z. xanthoxylum (1.14 g) compared to C. korshinskii (0.19 g). Moderate drought significantly increased Na+ accumulation in all parts of Z. xanthoxylum, e.g., moderate drought increased leaf Na+ concentration from 1.14 to 2.03 g/100 g DW, however, there was no change in Na+ (0.11 versus 0.12) in the leaf of C. korshinskii when subjected to moderate drought. Stomatal density increased as water availability was reduced in both C. korshinskii and Z. xanthoxylum, but there was no difference in stomatal index of either species. Stomatal length and width, and pore width were significantly reduced by moderate water stress in Z. xanthoxylum, but severe drought was required to produce a significant effect in C. korshinskii. These results indicated that C. 
korshinskii is more responsive to water stress and exhibits strong phenotypic plasticity especially in above‐ground/below‐ground biomass allocation. In contrast, Z. xanthoxylum was more tolerant to water deficit, with a lower specific leaf area and a strong ability to maintain water status through osmotic adjustment and stomatal closure, thereby providing an effective strategy to cope with local extreme arid environments.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54224,Phenotypic plasticity of an invasive acacia versus two native Mediterranean species,S167543,R54225,type of experiment,L102092,Lab,"
The phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings’ ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.
",TRUE,noun
R24,Ecology and Evolutionary Biology,R54228,Predator-induced phenotypic plasticity in the exotic cladoceran Daphnia lumholtzi,S167590,R54229,type of experiment,L102131,Lab,"Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55122,"Behavioural plasticity associated with propagule size, resources, and the invasion success of the Argentine ant Linepithema humile",S177581,R55124,type of experiment,L110064,Lab,"Summary 1. The number of individuals involved in an invasion event, or ‘propagule size’, has a strong theoretical basis for influencing invasion success. However, rarely has propagule size been experimentally manipulated to examine changes in invader behaviour, and propagule longevity and success. 2. We manipulated propagule size of the invasive Argentine ant Linepithema humile in laboratory and field studies. Laboratory experiments involved L. humile propagules containing two queens and 10, 100, 200 or 1000 workers. Propagules were introduced into arenas containing colonies of queens and 200 workers of the competing native ant Monomorium antarcticum . The effects of food availability were investigated via treatments of only one central resource, or 10 separated resources. Field studies used similar colony sizes of L. humile , which were introduced into novel environments near an invasion front. 3. In laboratory studies, small propagules of L. humile were quickly annihilated. Only the larger propagule size survived and killed the native ant colony in some replicates. Aggression was largely independent of food availability, but the behaviour of L. humile changed substantially with propagule size. In larger propagules, aggressive behaviour was significantly more frequent, while L. humile were much more likely to avoid conflict in smaller propagules. 4. In field studies, however, propagule size did not influence colony persistence. Linepithema humile colonies persisted for up to 2 months, even in small propagules of 10 workers. Factors such as temperature or competitor abundance had no effect, although some colonies were decimated by M. antarcticum . 5. Synthesis and applications. 
Although propagule size has been correlated with invasion success in a wide variety of taxa, our results indicate that it will have limited predictive power with species displaying behavioural plasticity. We recommend that aspects of animal behaviour be given much more consideration in attempts to model invasion success. Secondly, areas of high biodiversity are thought to offer biotic resistance to invasion via the abundance of predators and competitors. Invasive pests such as L. humile appear to modify their behaviour according to local conditions, and establishment was not related to resource availability. We cannot necessarily rely on high levels of native biodiversity to repel invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56551,An emergent multiple predator effect may enhance biotic resistance in a stream fish assemblage,S187162,R56552,type of experiment,L116153,Lab,"While two cyprinid fishes introduced from nearby drainages have become widespread and abundant in the Eel River of northwestern California, a third nonindigenous cyprinid has remained largely confined to ≤25 km of one major tributary (the Van Duzen River) for at least 15 years. The downstream limit of this species, speckled dace, does not appear to correspond with any thresholds or steep gradients in abiotic conditions, but it lies near the upstream limits of three other fishes: coastrange sculpin, prickly sculpin, and nonindigenous Sacramento pikeminnow. We conducted a laboratory stream experiment to explore the potential for emergent multiple predator effects to influence biotic resistance in this situation. Sculpins in combination with Sacramento pikeminnow caused greater mortality of speckled dace than predicted based on their separate effects. In contrast to speckled dace, 99% of sculpin survived trials with Sacramento pikeminnow, in part because sculpin usually occupied benthic cover units while Sacramento pikeminnow occupied the water column. A 10-fold difference in benthic cover availability did not detectably influence biotic interactions in the experiment. The distribution of speckled dace in the Eel River drainage may be limited by two predator taxa with very different patterns of habitat use and a shortage of alternative habitats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56610,"Invasive fruits, novel foods, and choice: na investigation of european starling and american robin frugivory",S187842,R56611,type of experiment,L116715,Lab,"Abstract We compared the feeding choices of an invasive frugivore, the European Starling (Sturnus vulgaris), with those of a native, the American Robin (Turdus migratorius). Using captive birds, we tested whether these species differ in their preferences when offered a choice between a native and an invasive fruit, and between a novel and a familiar food. We examined willingness to eat fruits of selected invasive plants and to select a novel food by measuring the time elapsed before feeding began. Both species demonstrated significant preferences for invasive fruits over similar native fruits in two of three choice tests. Both starlings and robins ate autumn olive (Elaeagnus umbellata) fruits significantly more willingly than Asiatic bittersweet (Celastrus orbiculatus). Starlings, but not robins when choosing between a novel and a familiar food, strongly preferred the familiar food. We found no differences in willingness of birds to eat a novel food when it was the only food available. These results suggest that some fleshy-fruited invasive plants may receive more dispersal services than native plants with similar fruits, and that different frugivores may be seed dispersers for different invasive plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54070,Phenotypic variation of an alien species in a new environment: the body size and diet of American mink over time and at local and continental scales,S165725,R54071,Investigated species,L100583,Mammals,"Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. 
This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. © 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681–693.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54707,Do biodiversity and human impact influence the introduction or establishment of alien mammals?,S195454,R57247,Investigated species,L122639,Mammals,"What determines the number of alien species in a given region? ‘Native biodiversity’ and ‘human impact’ are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54729,"The short-term responses of small mammals to wildfire in semiarid mallee shrubland, Australia",S173551,R54730,Investigated species,L106977,Mammals,"Context. Wildfire is a major driver of the structure and function of mallee eucalypt- and spinifex-dominated landscapes. Understanding how fire influences the distribution of biota in these fire-prone environments is essential for effective ecological and conservation-based management. Aims. We aimed to (1) determine the effects of an extensive wildfire (118 000 ha) on a small mammal community in the mallee shrublands of semiarid Australia and (2) assess the hypothesis that the fire-response patterns of small mammals can be predicted by their life-history characteristics. Methods. Small-mammal surveys were undertaken concurrently at 26 sites: once before the fire and on four occasions following the fire (including 14 sites that remained unburnt). We documented changes in small-mammal occurrence before and after the fire, and compared burnt and unburnt sites. In addition, key components of vegetation structure were assessed at each site. Key results. Wildfire had a strong influence on vegetation structure and on the occurrence of small mammals. The mallee ningaui, Ningaui yvonneae, a dasyurid marsupial, showed a marked decline in the immediate post-fire environment, corresponding with a reduction in hummock-grass cover in recently burnt vegetation. Species richness of native small mammals was positively associated with unburnt vegetation, although some species showed no clear response to wildfire. Conclusions. Our results are consistent with the contention that mammal responses to fire are associated with their known life-history traits. The species most strongly affected by wildfire, N. yvonneae, has the most specific habitat requirements and restricted life history of the small mammals in the study area. 
The only species positively associated with recently burnt vegetation, the introduced house mouse, Mus domesticus, has a flexible life history and non-specialised resource requirements. Implications. Maintaining sources for recolonisation after large-scale wildfires will be vital to the conservation of native small mammals in mallee ecosystems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54797,Mammals of the northern Philippines: tolerance for habitat disturbance and resistance to invasive species in an endemic insular fauna,S174341,R54798,Investigated species,L107631,Mammals,"Aim Island faunas, particularly those with high levels of endemism, usually are considered especially susceptible to disruption from habitat disturbance and invasive alien species. We tested this general hypothesis by examining the distribution of small mammals along gradients of anthropogenic habitat disturbance in northern Luzon Island, an area with a very high level of mammalian endemism.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56686,Treatment-based Markov chain models clarify mechanisms of invasion in an invaded grassland community,S188706,R56687,Investigated species,L117428,Mammals,"What are the relative roles of mechanisms underlying plant responses in grassland communities invaded by both plants and mammals? What type of community can we expect in the future given current or novel conditions? We address these questions by comparing Markov chain community models among treatments from a field experiment on invasive species on Robinson Crusoe Island, Chile. Because of seed dispersal, grazing and disturbance, we predicted that the exotic European rabbit (Oryctolagus cuniculus) facilitates epizoochorous exotic plants (plants with seeds that stick to the skin of an animal) at the expense of native plants. To test our hypothesis, we crossed rabbit exclosure treatments with disturbance treatments, and sampled the plant community in permanent plots over 3 years. We then estimated Markov chain model transition probabilities and found significant differences among treatments. As hypothesized, this modelling revealed that exotic plants survive better in disturbed areas, while natives prefer no rabbits or disturbance. Surprisingly, rabbits negatively affect epizoochorous plants. Markov chain dynamics indicate that an overall replacement of native plants by exotic plants is underway. Using a treatment-based approach to multi-species Markov chain models allowed us to examine the changes in the importance of mechanisms in response to experimental impacts on communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56799,Linking the pattern to the mechanism: how an introduced mammal facilitates plant invasions,S189961,R56800,Investigated species,L118457,Mammals,"Non-native mammals that are disturbance agents can promote non-native plant invasions, but to date there is scant evidence on the mechanisms behind this pattern. We used wild boar (Sus scrofa) as a model species to evaluate the role of non-native mammals in promoting plant invasion by identifying the degree to which soil disturbance and endozoochorous seed dispersal drive plant invasions. To test if soil disturbance promotes plant invasion, we conducted an exclosure experiment in which we recorded emergence, establishment and biomass of seedlings of seven non-native plant species planted in no-rooting, boar-rooting and artificial rooting patches in Patagonia, Argentina. To examine the role of boar in dispersing seeds we germinated viable seeds from 181 boar droppings and compared this collection to the soil seed bank by collecting a soil sample adjacent to each dropping. We found that both establishment and biomass of non-native seedlings in boar-rooting patches were double those in no-rooting patches. Values in artificial rooting patches were intermediate between those in boar-rooting and no-rooting treatments. By contrast, we found that the proportion of non-native seedlings in the soil samples was double that in the droppings, and over 80% of the germinated seeds were native species in both samples. Lastly, an effect size test showed that soil disturbance by wild boar rather than endozoochorous dispersal facilitates plant invasions. These results have implications for both the native and introduced ranges of wild boar, where rooting disturbance may facilitate community composition shifts.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56953,Determinants of establishment success for introduced exotic mammals,S191854,R56954,Investigated species,L119838,Mammals,"We conducted comparisons for exotic mammal species introduced to New Zealand (28 successful, 4 failed), Australia (24, 17) and Britain (15, 16). Modelling of variables associated with establishment success was constrained by small sample sizes and phylogenetic dependence, so our results should be interpreted with caution. Successful species were subject to more release events, had higher climate matches between their overseas geographic range and their country of introduction, had larger overseas geographic range sizes and were more likely to have established an exotic population elsewhere than was the case for failed species. Of the mammals introduced to New Zealand, successful species also had larger areas of suitable habitat than did failed species. Our findings may guide risk assessments for the import of live mammals to reduce the rate new species establish in the wild.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56986,"Alien mammals in Europe: updated numbers and trends, and assessment of the effects on biodiversity",S192258,R56988,Investigated species,L120174,Mammals,"This study provides an updated picture of mammal invasions in Europe, based on detailed analysis of information on introductions occurring from the Neolithic to recent times. The assessment considered all information on species introductions, known extinctions and successful eradication campaigns, to reconstruct a trend of alien mammals' establishment in the region. Through a comparative analysis of the data on introduction, with the information on the impact of alien mammals on native and threatened species of Europe, the present study also provides an objective assessment of the overall impact of mammal introductions on European biodiversity, including information on impact mechanisms. The results of this assessment confirm the constant increase of mammal invasions in Europe, with no indication of a reduction of the rate of introduction. The study also confirms the severe impact of alien mammals, which directly threaten a significant number of native species, including many highly threatened species. The results could help to prioritize species for response, as required by international conventions and obligations.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57030,A generic impact-scoring system applied to alien mammals in Europe,S192779,R57031,Investigated species,L120609,Mammals,"Abstract: We present a generic scoring system that compares the impact of alien species among members of large taxonomic groups. This scoring can be used to identify the most harmful alien species so that conservation measures to ameliorate their negative effects can be prioritized. For all alien mammals in Europe, we assessed impact reports as completely as possible. Impact was classified as either environmental or economic. We subdivided each of these categories into five subcategories (environmental: impact through competition, predation, hybridization, transmission of disease, and herbivory; economic: impact on agriculture, livestock, forestry, human health, and infrastructure). We assigned all impact reports to one of these 10 categories. All categories had impact scores that ranged from zero (minimal) to five (maximal possible impact at a location). We summed all impact scores for a species to calculate ""potential impact"" scores. We obtained ""actual impact"" scores by multiplying potential impact scores by the percentage of area occupied by the respective species in Europe. Finally, we correlated species’ ecological traits with the derived impact scores. Alien mammals from the orders Rodentia, Artiodactyla, and Carnivora caused the highest impact. In particular, the brown rat (Rattus norvegicus), muskrat (Ondathra zibethicus), and sika deer (Cervus nippon) had the highest overall scores. Species with a high potential environmental impact also had a strong potential economic impact. Potential impact also correlated with the distribution of a species in Europe. Ecological flexibility (measured as number of different habitats a species occupies) was strongly related to impact. 
The scoring system was robust to uncertainty in knowledge of impact and could be adjusted with weight scores to account for specific value systems of particular stakeholder groups (e.g., agronomists or environmentalists). Finally, the scoring system is easily applicable and adaptable to other taxonomic groups.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57159,"Ecological biogeography of southern oceanic islands: species-area relationships, human impacts, and conservation",S194462,R57162,Investigated species,L121817,Mammals,"Previous studies have concluded that southern ocean islands are anomalous because past glacial extent and current temperature apparently explain most variance in their species richness. Here, the relationships between physical variables and species richness of vascular plants, insects, land and seabirds, and mammals were reexamined for these islands. Indigenous and introduced species were distinguished, and relationships between the latter and human occupancy variables were investigated. Most variance in indigenous species richness was explained by combinations of area and temperature (56%)—vascular plants; distance (nearest continent) and vascular plant species richness (75%)—insects; area and chlorophyll concentration (65%)—seabirds; and indigenous insect species richness and age (73%)—land birds. Indigenous insects and plants, along with distance (closest continent), explained most variance (70%) in introduced land bird species richness. A combination of area and temperature explained most variance in species richness of introduced vascular plants (73%), insects (69%), and mammals (69%). However, there was a strong relationship between area and number of human occupants. This suggested that larger islands attract more human occupants, increasing the risk of propagule transfer, while temperature increases the chance of propagule establishment. Consequently, human activities on these islands should be regulated more tightly.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54236,Induced defenses in response to an invading crab predator: An explanation of historical and geographic phenotypic change,S167679,R54237,Habitat,L102205,Marine,"The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey. Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54574,Anthropogenic Disturbance Can Determine the Magnitude of Opportunistic Species Responses on Marine Urban Infrastructures,S171692,R54575,Habitat,L105428,Marine,"Background Coastal landscapes are being transformed as a consequence of the increasing demand for infrastructures to sustain residential, commercial and tourist activities. Thus, intertidal and shallow marine habitats are largely being replaced by a variety of artificial substrata (e.g. breakwaters, seawalls, jetties). Understanding the ecological functioning of these artificial habitats is key to planning their design and management, in order to minimise their impacts and to improve their potential to contribute to marine biodiversity and ecosystem functioning. Nonetheless, little effort has been made to assess the role of human disturbances in shaping the structure of assemblages on marine artificial infrastructures. We tested the hypothesis that some negative impacts associated with the expansion of opportunistic and invasive species on urban infrastructures can be related to the severe human disturbances that are typical of these environments, such as those from maintenance and renovation works. Methodology/Principal Findings Maintenance caused a marked decrease in the cover of dominant space occupiers, such as mussels and oysters, and a significant enhancement of opportunistic and invasive forms, such as biofilm and macroalgae. These effects were particularly pronounced on sheltered substrata compared to exposed substrata. Experimental application of the disturbance in winter reduced the magnitude of the impacts compared to application in spring or summer. We use these results to identify possible management strategies to inform the improvement of the ecological value of artificial marine infrastructures. 
Conclusions/Significance We demonstrate that some of the impacts of globally expanding marine urban infrastructures, such as those related to the spread of opportunistic, and invasive species could be mitigated through ecologically-driven planning and management of long-term maintenance of these structures. Impact mitigation is a possible outcome of policies that consider the ecological features of built infrastructures and the fundamental value of controlling biodiversity in marine urban systems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54607,Determinants of Caulerpa racemosa distribution in the north-western Mediterranean,S172079,R54608,Habitat,L105749,Marine,"Predicting community susceptibility to invasion has become a priority for preserving biodiversity. We tested the hypothesis that the occurrence and abundance of the seaweed Caulerpa racemosa in the north-western (NW) Mediterranean would increase with increasing levels of human disturbance. Data from a survey encompassing areas subjected to different human influences (i.e. from urbanized to protected areas) were fitted by means of generalized linear mixed models, including descriptors of habitats and communities. The incidence of occurrence of C. racemosa was greater on urban than extra-urban or protected reefs, along the coast of Tuscany and NW Sardinia, respectively. Within the Marine Protected Area of Capraia Island (Tuscan Archipelago), the probability of detecting C. racemosa did not vary according to the degree of protection (partial versus total). Human influence was, however, a poor predictor of the seaweed cover. At the seascape level, C. racemosa was more widely spread within degraded (i.e. Posidonia oceanica dead matte or algal turfs) than in better preserved habitats (i.e. canopy-forming macroalgae or P. oceanica seagrass meadows). At a smaller spatial scale, the presence of the seaweed was positively correlated to the diversity of macroalgae and negatively to that of sessile invertebrates. These results suggest that C. racemosa can take advantage of habitat degradation. Thus, predicting invasion scenarios requires a thorough knowledge of ecosystem structure, at a hierarchy of levels of biological organization (from the landscape to the assemblage) and detailed information on the nature and intensity of sources of disturbance and spatial scales at which they operate.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54709,Epifaunal disturbance by periodic low levels of dissolved oxygen: native vs. invasive species response,S173311,R54710,Habitat,L106777,Marine,"Hypoxia is increasing in marine and estuarine systems worldwide, primarily due to anthropogenic causes. Periodic hypoxia represents a pulse disturbance, with the potential to restruc- ture estuarine biotic communities. We chose the shallow, epifaunal community in the lower Chesa- peake Bay, Virginia, USA, to test the hypothesis that low dissolved oxygen (DO) (<4 mg l -1 ) affects community dynamics by reducing the cover of spatial dominants, creating space both for less domi- nant native species and for invasive species. Settling panels were deployed at shallow depths in spring 2000 and 2001 at Gloucester Point, Virginia, and were manipulated every 2 wk from late June to mid-August. Manipulation involved exposing epifaunal communities to varying levels of DO for up to 24 h followed by redeployment in the York River. Exposure to low DO affected both species com- position (presence or absence) and the abundance of the organisms present. Community dominance shifted away from barnacles as level of hypoxia increased. Barnacles were important spatial domi- nants which reduced species diversity when locally abundant. The cover of Hydroides dianthus, a native serpulid polychaete, doubled when exposed to periodic hypoxia. Increased H. dianthus cover may indicate whether a local region has experienced periodic, local DO depletion and thus provide an indicator of poor water-quality conditions. In 2001, the combined cover of the invasive and crypto- genic species in this community, Botryllus schlosseri (tunicate), Molgula manhattensis (tunicate), Ficopomatus enigmaticus (polychaete) and Diadumene lineata (anemone), was highest on the plates exposed to moderately low DO (2 mg l -1 < DO < 4 mg l -1 ). 
All 4 of these species are now found world- wide and exhibit life histories well adapted for establishment in foreign habitats. Low DO events may enhance success of invasive species, which further stress marine and estuarine ecosystems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54744,Biogenic disturbance determines invasion success in a subtidal soft-sediment system,S173726,R54745,Habitat,L107122,Marine,"Theoretically, disturbance and diversity can influence the success of invasive colonists if (1) resource limitation is a prime determinant of invasion success and (2) disturbance and diversity affect the availability of required resources. However, resource limitation is not of overriding importance in all systems, as exemplified by marine soft sediments, one of Earth's most widespread habitat types. Here, we tested the disturbance-invasion hypothesis in a marine soft-sediment system by altering rates of biogenic disturbance and tracking the natural colonization of plots by invasive species. Levels of sediment disturbance were controlled by manipulating densities of burrowing spatangoid urchins, the dominant biogenic sediment mixers in the system. Colonization success by two invasive species (a gobiid fish and a semelid bivalve) was greatest in plots with sediment disturbance rates < 500 cm(3) x m(-2) x d(-1), at the low end of the experimental disturbance gradient (0 to > 9000 cm(3) x m(-2) x d(-1)). Invasive colonization declined with increasing levels of sediment disturbance, counter to the disturbance-invasion hypothesis. Increased sediment disturbance by the urchins also reduced the richness and diversity of native macrofauna (particularly small, sedentary, surface feeders), though there was no evidence of increased availability of resources with increased disturbance that would have facilitated invasive colonization: sediment food resources (chlorophyll a and organic matter content) did not increase, and space and access to overlying water were not limited (low invertebrate abundance). 
Thus, our study revealed the importance of biogenic disturbance in promoting invasion resistance in a marine soft-sediment community, providing further evidence of the valuable role of bioturbation in soft-sediment systems (bioturbation also affects carbon processing, nutrient recycling, oxygen dynamics, benthic community structure, and so on.). Bioturbation rates are influenced by the presence and abundance of large burrowing species (like spatangoid urchins). Therefore, mass mortalities of large bioturbators could inflate invasion risk and alter other aspects of ecosystem performance in marine soft-sediment habitats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54786,Pollution reduces native diversity and increases invader dominance in marine hard-substrate communities,S196270,R57320,Habitat,L123309,Marine,"Anthropogenic disturbance is considered a risk factor in the establishment of non‐indigenous species (NIS); however, few studies have investigated the role of anthropogenic disturbance in facilitating the establishment and spread of NIS in marine environments. A baseline survey of native and NIS was undertaken in conjunction with a manipulative experiment to determine the effect that heavy metal pollution had on the diversity and invasibility of marine hard‐substrate assemblages. The study was repeated at two sites in each of two harbours in New South Wales, Australia. The survey sampled a total of 47 sessile invertebrate taxa, of which 15 (32%) were identified as native, 19 (40%) as NIS, and 13 (28%) as cryptogenic. Increasing pollution exposure decreased native species diversity at all study sites by between 33% and 50%. In contrast, there was no significant change in the numbers of NIS. Percentage cover was used as a measure of spatial dominance, with increased pollution exposure leading to increased NIS dominance across all sites. At three of the four study sites, assemblages that had previously been dominated by natives changed to become either extensively dominated by NIS or equally occupied by native and NIS alike. No single native or NIS was repeatedly responsible for the observed changes in native species diversity or NIS dominance at all sites. Rather, the observed effects of pollution were driven by a diverse range of taxa and species. These findings have important implications for both the way we assess pollution impacts, and for the management of NIS. When monitoring the response of assemblages to pollution, it is not sufficient to simply assess changes in community diversity. 
Rather, it is important to distinguish native from NIS components since both are expected to respond differently. In order to successfully manage current NIS, we first need to address levels of pollution within recipient systems in an effort to bolster the resilience of native communities to invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54817,Recent Invasion of the Symbiont-Bearing Foraminifera Pararotalia into the Eastern Mediterranean Facilitated by the Ongoing Warming Trend,S174587,R54818,Habitat,L107837,Marine,"The eastern Mediterranean is a hotspot of biological invasions. Numerous species of Indo-pacific origin have colonized the Mediterranean in recent times, including tropical symbiont-bearing foraminifera. Among these is the species Pararotalia calcariformata. Unlike other invasive foraminifera, this species was discovered only two decades ago and is restricted to the eastern Mediterranean coast. Combining ecological, genetic and physiological observations, we attempt to explain the recent invasion of this species in the Mediterranean Sea. Using morphological and genetic data, we confirm the species attribution to P. calcariformata McCulloch 1977 and identify its symbionts as a consortium of diatom species dominated by Minutocellus polymorphus. We document photosynthetic activity of its endosymbionts using Pulse Amplitude Modulated Fluorometry and test the effects of elevated temperatures on growth rates of asexual offspring. The culturing of asexual offspring for 120 days shows a 30-day period of rapid growth followed by a period of slower growth. A subsequent 48-day temperature sensitivity experiment indicates a similar developmental pathway and high growth rate at 28°C, whereas an almost complete inhibition of growth was observed at 20°C and 35°C. This indicates that the offspring of this species may have lower tolerance to cold temperatures than what would be expected for species native to the Mediterranean. We expand this hypothesis by applying a Species Distribution Model (SDM) based on modern occurrences in the Mediterranean using three environmental variables: irradiance, turbidity and yearly minimum temperature. 
The model reproduces the observed restricted distribution and indicates that the range of the species will drastically expand westwards under future global change scenarios. We conclude that P. calcariformata established a population in the Levant because of the recent warming in the region. In line with observations from other groups of organisms, our results indicate that continued warming of the eastern Mediterranean will facilitate the invasion of more tropical marine taxa into the Mediterranean, disturbing local biodiversity and ecosystem structure.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54975,Short- and long-term effects of disturbance and propagule pressure on a biological invasion,S175912,R54976,Habitat,L108698,Marine,"1 Invading species typically need to overcome multiple limiting factors simultaneously in order to become established, and understanding how such factors interact to regulate the invasion process remains a major challenge in ecology. 2 We used the invasion of marine algal communities by the seaweed Sargassum muticum as a study system to experimentally investigate the independent and interactive effects of disturbance and propagule pressure in the short term. Based on our experimental results, we parameterized an integrodifference equation model, which we used to examine how disturbances created by different benthic herbivores influence the longer term invasion success of S. muticum. 3 Our experimental results demonstrate that in this system neither disturbance nor propagule input alone was sufficient to maximize invasion success. Rather, the interaction between these processes was critical for understanding how the S. muticum invasion is regulated in the short term. 4 The model showed that both the size and spatial arrangement of herbivore disturbances had a major impact on how disturbance facilitated the invasion, by jointly determining how much space‐limitation was alleviated and how readily disturbed areas could be reached by dispersing propagules. 5 Synthesis. Both the short‐term experiment and the long‐term model show that S. muticum invasion success is co‐regulated by disturbance and propagule pressure. Our results underscore the importance of considering interactive effects when making predictions about invasion success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54992,Propagule pressure and disturbance interact to overcome biotic resistance of marine invertebrate,S176106,R54993,Habitat,L108858,Marine,"Propagule pressure is fundamental to invasion success, yet our understanding of its role in the marine domain is limited. Few studies have manipulated or controlled for propagule supply in the field, and consequently there is little empirical data to test for non-linearities or interactions with other processes. Supply of non-indigenous propagules is most likely to be elevated in urban estuaries, where vessels congregate and bring exotic species on fouled hulls and in ballast water. These same environments are also typically subject to elevated levels of disturbance from human activities, creating the potential for propagule pressure and disturbance to interact. By applying a controlled dose of free-swimming larvae to replicate assemblages, we were able to quantify a dose-response relationship at much finer spatial and temporal scales than previously achieved in the marine environment. We experimentally crossed controlled levels of propagule pressure and disturbance in the field, and found that both were required for invasion to occur. Only recruits that had settled onto bare space survived beyond three months, precluding invader persistence in undisturbed communities. In disturbed communities initial survival on bare space appeared stochastic, such that a critical density was required before the probability of at least one colony surviving reached a sufficient level. Those that persisted showed 75% survival over the following three months, signifying a threshold past which invaders were resilient to chance mortality. Urban estuaries subject to anthropogenic disturbance are common throughout the world, and similar interactions may be integral to invasion dynamics in these ecosystems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56567,Positive effects of a dominant invader on introduced and native mudflat species,S187346,R56568,Habitat,L116306,Marine,"Many introduced species have negative impacts on native species, but some develop positive interactions with both native species and other invaders. Facilitation between invaders may lead to an overall acceleration in invasion success and impacts. Mechanisms of facilitation include habitat alteration, or ecosystem engineering, and trophic interactions. In marine systems, only a handful of positive effects have been reported for invading species. In an unusual NE Pacific marine assemblage dominated by 5 conspicuous invaders and 2 native species, we identified positive effects of the most abundant invader, the Asian hornsnail Batillaria attramentaria, on all other species. B. attramentaria reached densities >1400 m-2, providing an average of 600 cm of hard substrate per m2 on this mudflat. Its shells were used as habitat almost exclusively by the introduced Atlantic slipper shell Crepidula convexa, the introduced Asian anemone Diadumene lineata, and 2 native hermit crabs Pagurus hirsutiusculus and P. granosimanus. In addition, manipulative experiments showed that the abundance of the mudsnail Nassarius fraterculus and percentage cover of the eelgrass Zostera japonica, both introduced from the NW Pacific, increased significantly in the presence of B. attramentaria. The most likely mechanisms for these facilitations are indirect grazing effects and bioturbation, respectively. Since the precise arrival dates of all these invaders are unknown, the role of B. attramentaria's positive interactions in their initial invasion success is unknown. Nevertheless, by providing habitat for 2 non-native epibionts and 2 native species, and by facilitating 2 other invaders, the non-native B. attramentaria enhances the level of invasion by all 6 species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56616,Non-native habitat as home for non-native species: comparison of communities associated with invasive tubeworm and native oyster reefs,S187909,R56617,Habitat,L116771,Marine,"Introduction vectors for marine non-native species, such as oyster culture and boat fouling, often select for organisms dependent on hard substrates during some or all life stages. In soft-sediment estuaries, hard substrate is a limited resource, which can increase with the introduction of hard habitat-creating non-native species. Positive interactions between non-native, habitat-creating species and non-native species utilizing such habitats could be a mechanism for enhanced invasion success. Most previous studies on aquatic invasive habitat-creating species have demonstrated positive responses in associated communities, but few have directly addressed responses of other non-native species. We explored the association of native and non-native species with invasive habitat-creating species by comparing communities associated with non-native, reef-building tubeworms Ficopomatus enigmaticus and native oysters Ostrea conchaphila in Elkhorn Slough, a central California estuary. Non-native habitat supported greater densities of associated organisms—primarily highly abundant non-native amphipods (e.g. Monocorophium insidiosum, Melita nitida), tanaid (Sinelebus sp.), and tube-dwelling polychaetes (Polydora spp.). Detritivores were the most common trophic group, making up disproportionately more of the community associated with F. enigmaticus than was the case in the O. conchaphila community. Analysis of similarity (ANOSIM) showed that native species' community structure varied significantly among sites, but not between biogenic habitats. In contrast, non-natives varied with biogenic habitat type, but not with site. Thus, reefs of the invasive tubeworm F. enigmaticus interact positively with other non-native species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57020,Differentiating successful and failed molluscan invaders in estuarine ecosystems,S192692,R57023,Habitat,L120538,Marine,"ABSTRACT: Despite mounting evidence of invasive species' impacts on the environment and society, our ability to predict invasion establishment, spread, and impact are inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode. We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB. The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e.
that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion · Bivalve · Gastropod · Mollusc · Marine · Oyster · Vector · Risk assessment",TRUE,noun
R24,Ecology and Evolutionary Biology,R57053,Non-indigenous species as stressors in estuarine and marine communities: Assessing invasion impacts and interactions,S193046,R57054,Habitat,L120830,Marine,"Invasions by non‐indigenous species (NIS) are recognized as important stressors of many communities throughout the world. Here, we evaluated available data on the role of NIS in marine and estuarine communities and their interactions with other anthropogenic stressors, using an intensive analysis of the Chesapeake Bay region as a case study. First, we reviewed the reported ecological impacts of 196 species that occur in tidal waters of the bay, including species that are known invaders as well as some that are cryptogenic (i.e., of uncertain origin). Second, we compared the impacts reported in and out of the bay region for the same 54 species of plants and fish from this group that regularly occur in the region's tidal waters. Third, we assessed the evidence for interaction in the distribution or performance of these 54 plant and fish species within the bay and other stressors. Of the 196 known and possible NIS, 39 (20%) were thought to have some significant impact on a resident population, community, habitat, or process within the bay region. However, quantitative data on impacts were found for only 12 of the 39, representing 31% of this group and 6% of all 196 species surveyed. The patterns of reported impacts in the bay for plants and fish were nearly identical: 29% were reported to have significant impacts, but quantitative impact data existed for only 7% (4/54) of these species. In contrast, 74% of the same species were reported to have significant impacts outside of the bay, and some quantitative impact data were found for 44% (24/54) of them.
Although it appears that 20% of the plant and fish species in our analysis may have significant impacts in the bay region based upon impacts measured elsewhere, we suggest that studies outside the region cannot reliably predict such impacts. We surmise that quantitative impact measures for individual bays or estuaries generally exist for <5% of the NIS present, and many of these measures are not particularly informative. Despite the increasing knowledge of marine invasions at many sites, it is evident that we understand little about the full extent and variety of the impacts they create singly and cumulatively. Given the multiple anthropogenic stressors that overlap with NIS in estuaries, we predict NIS‐stressor interactions play an important role in the pattern and impact of invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57065,Globalisation in marine ecosystems: the story of non-indigenous marine species across European seas,S193179,R57066,Habitat,L120939,Marine,"The introduction of non-indigenous species (NIS) across the major European seas is a dynamic non-stop process. Up to September 2004, 851 NIS (the majority being zoobenthic organisms) have been reported in European marine and brackish waters, the majority during the 1960s and 1970s. The Mediterranean is by far the major recipient of exotic species with an average of one introduction every 4 wk over the past 5 yr. Of the 25 species recorded in 2004, 23 were reported in the Mediterranean and only two in the Baltic. The most updated patterns and trends in the rate, mode of introduction and establishment success of introductions were examined, revealing a process similar to introductions in other parts of the world, but with the uniqueness of migrants through the Suez Canal into the Mediterranean (Lessepsian or Erythrean migration). Shipping appears to be the major vector of introduction (excluding the Lessepsian migration). Aquaculture is also an important vector with target species outnumbered by those introduced unintentionally. More than half of immigrants have been established in at least one regional sea. However, for a significant part of the introductions both the establishment success and mode of introduction remain unknown. Finally, comparing trends across taxa and seas is not as accurate as could have been wished because there are differences in the spatial and taxonomic effort in the study of NIS. These differences lead to the conclusion that the number of NIS remains an underestimate, calling for continuous updating and systematic research.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57075,"How well do we understand the impacts of alien species on ecosystem services? A pan-European, cross-taxa assessment",S193350,R57080,Habitat,L121082,Marine,"Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates – in terrestrial, freshwater, and marine environments – on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect “supporting”, “provisioning”, “regulating”, and “cultural” services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57133,"Functional group diversity, resource preemption and the genesis of invasion resistance in a community of marine algae",S194150,R57134,Habitat,L121561,Marine,"Although many studies have investigated how community characteristics such as diversity and disturbance relate to invasibility, the mechanisms underlying biotic resistance to introduced species are not well understood. I manipulated the functional group composition of native algal communities and invaded them with the introduced, Japanese seaweed Sargassum muticum to understand how individual functional groups contributed to overall invasion resistance. The results suggested that space preemption by crustose and turfy algae inhibited S. muticum recruitment and that light preemption by canopy and understory algae reduced S. muticum survivorship. However, other mechanisms I did not investigate could have contributed to these two results. In this marine community the sequential preemption of key resources by different functional groups in different stages of the invasion generated resistance to invasion by S. muticum. Rather than acting collectively on a single resource the functional groups in this system were important for preempting either space or light, but not both resources. My experiment has important implications for diversity-invasibility studies, which typically look for an effect of diversity on individual resources. Overall invasion resistance will be due to the additive effects of individual functional groups (or species) summed over an invader's life cycle. Therefore, the cumulative effect of multiple functional groups (or species) acting on multiple resources is an alternative mechanism that could generate negative relationships between diversity and invasibility in a variety of biological systems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57227,Native Predators Do Not Influence Invasion Success of Pacific Lionfish on Caribbean Reefs,S195231,R57228,Habitat,L122454,Marine,"Biotic resistance, the process by which new colonists are excluded from a community by predation from and/or competition with resident species, can prevent or limit species invasions. We examined whether biotic resistance by native predators on Caribbean coral reefs has influenced the invasion success of red lionfishes (Pterois volitans and Pterois miles), piscivores from the Indo-Pacific. Specifically, we surveyed the abundance (density and biomass) of lionfish and native predatory fishes that could interact with lionfish (either through predation or competition) on 71 reefs in three biogeographic regions of the Caribbean. We recorded protection status of the reefs, and abiotic variables including depth, habitat type, and wind/wave exposure at each site. We found no relationship between the density or biomass of lionfish and that of native predators. However, lionfish densities were significantly lower on windward sites, potentially because of habitat preferences, and in marine protected areas, most likely because of ongoing removal efforts by reserve managers. Our results suggest that interactions with native predators do not influence the colonization or post-establishment population density of invasive lionfish on Caribbean reefs.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57354,Species diversity and invasion resistance in a marine ecosystem,S196665,R57355,Habitat,L123634,Marine,"Theory predicts that systems that are more diverse should be more resistant to exotic species, but experimental tests are needed to verify this. In experimental communities of sessile marine invertebrates, increased species richness significantly decreased invasion success, apparently because species-rich communities more completely and efficiently used available space, the limiting resource in this system. Declining biodiversity thus facilitates invasion in this system, potentially accelerating the loss of biodiversity and the homogenization of the world's biota.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57356,"Biodiversity, invasion resistance, and marine ecosystem function: Reconciling pattern and process",S196700,R57358,Habitat,L123663,Marine,"A venerable generalization about community resistance to invasions is that more diverse communities are more resistant to invasion. However, results of experimental and observational studies often conflict, leading to vigorous debate about the mechanistic importance of diversity in determining invasion success in the field, as well as other ecosystem properties, such as productivity and stability. In this study, we employed both field experiments and observational approaches to assess the effects of diversity on the invasion of a subtidal marine invertebrate community by three species of nonindigenous ascidians (sea squirts). In experimentally assembled communities, decreasing native diversity increased the survival and final percent cover of invaders, whereas the abundance of individual species had no effect on these measures of invasion success. Increasing native diversity also decreased the availability of open space, the limiting resource in this system, by buffering against fluctuations in the cover of individual species. This occurred because temporal patterns of abundance differed among species, so space was most consistently and completely occupied when more species were present. When we held diversity constant, but manipulated resource availability, we found that the settlement and recruitment of new invaders dramatically increased with increasing availability of open space. This suggests that the effect of diversity on invasion success is largely due to its effects on resource (space) availability. Apart from invasion resistance, the increased temporal stability found in more diverse communities may itself be considered an enhancement of ecosystem function.
In field surveys, we found a strong negative correlation between native-species richness and the number and frequency of nonnative invaders at the scale of both a single quadrat (25 × 25 cm), and an entire site (50 × 50 m). Such a pattern suggests that the means by which diversity affects invasion resistance in our experiments is important in determining the distribution of invasive species in the field. Further synthesis of mechanistic and observational approaches should be encouraged, as this will increase our understanding of the conditions under which diversity does (and does not) play an important role in determining the distribution of invaders in the field.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57698,Using parasites to inform ecological history: Comparisons among three congeneric marine snails,S199515,R57699,Habitat,L125485,Marine,"Species introduced to novel regions often leave behind many parasite species. Signatures of parasite release could thus be used to resolve cryptogenic (uncertain) origins such as that of Littorina littorea, a European marine snail whose history in North America has been debated for over 100 years. Through extensive field and literature surveys, we examined species richness of parasitic trematodes infecting this snail and two co-occurring congeners, L. saxatilis and L. obtusata, both considered native throughout the North Atlantic. Of the three snails, only L. littorea possessed significantly fewer trematode species in North America, and all North American trematodes infecting the three Littorina spp. were a nested subset of Europe. Surprisingly, several of L. littorea's missing trematodes in North America infected the other Littorina congeners. Most likely, long separation of these trematodes from their former host resulted in divergence of the parasites' recognition of L. littorea. Overall, these patterns of parasitism suggest a recent invasion from Europe to North America for L. littorea and an older, natural expansion from Europe to North America for L. saxatilis and L. obtusata.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57774,"Effects of large enemies on success of exotic species in marine fouling communities of Washington, USA",S200481,R57775,Habitat,L126299,Marine,"The enemy release hypothesis, which posits that exotic species are less regulated by enemies than native species, has been well-supported in terrestrial systems but rarely tested in marine systems. Here, the enemy release hypothesis was tested in a marine system by excluding large enemies (>1.3 cm) in dock fouling communities in Washington, USA. After documenting the distribution and abundance of potential enemies such as chitons, gastropods and flatworms at 4 study sites, exclusion experiments were conducted to test the hypotheses that large grazing enemies (1) reduced recruitment rates in the exotic ascidian Botrylloides violaceus and native species, (2) reduced B. violaceus and native species abundance, and (3) altered fouling community structure. Experiments demonstrated that, as predicted by the enemy release hypothesis, exclusion of large enemies did not significantly alter B. violaceus recruitment or abundance and it did significantly increase abundance or recruitment of 2 common native species. However, large enemy exclusion had no significant effects on most native species or on overall fouling community structure. Furthermore, neither B. violaceus nor total exotic species abundance correlated positively with abundance of large enemies across sites. I therefore conclude that release from large enemies is likely not an important mechanism for the success of exotic species in Washington fouling communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57020,Differentiating successful and failed molluscan invaders in estuarine ecosystems,S192689,R57023,Investigated species,L120535,Molluscs,"ABSTRACT: Despite mounting evidence of invasive species' impacts on the environment and society, our ability to predict invasion establishment, spread, and impact are inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode. We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB. The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e.
that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion · Bivalve · Gastropod · Mollusc · Marine · Oyster · Vector · Risk assessment",TRUE,noun
R24,Ecology and Evolutionary Biology,R57609,Invasiveness of Ammophila arenaria: Release from soil-borne pathogens?,S198366,R57611,Indicator for enemy release,L124512,Performance,"The Natural Enemies Hypothesis (i.e., introduced species experience release from their natural enemies) is a common explanation for why invasive species are so successful. We tested this hypothesis for Ammophila arenaria (Poaceae: European beachgrass), an aggressive plant invading the coastal dunes of California, USA, by comparing the demographic effects of belowground pathogens on A. arenaria in its introduced range to those reported in its native range. European research on A. arenaria in its native range has established that soil-borne pathogens, primarily nematodes and fungi, reduce A. arenaria's growth. In a greenhouse experiment designed to parallel European studies, seeds and 2-wk-old seedlings were planted in sterilized and nonsterilized soil collected from the A. arenaria root zone in its introduced range of California. We assessed the effects of pathogens via soil sterilization on three early performance traits: seed germination, seedling survival, and plant growth. We found that seed germination...",TRUE,noun
R24,Ecology and Evolutionary Biology,R57618,Plant-soil biota interactions and spatial distribution of black cherry in its native and invasive ranges,S198465,R57619,Indicator for enemy release,L124594,Performance,"One explanation for the higher abundance of invasive species in their non-native than native ranges is the escape from natural enemies. But there are few experimental studies comparing the parallel impact of enemies (or competitors and mutualists) on a plant species in its native and invaded ranges, and release from soil pathogens has been rarely investigated. Here we present evidence showing that the invasion of black cherry (Prunus serotina) into north-western Europe is facilitated by the soil community. In the native range in the USA, the soil community that develops near black cherry inhibits the establishment of neighbouring conspecifics and reduces seedling performance in the greenhouse. In contrast, in the non-native range, black cherry readily establishes in close proximity to conspecifics, and the soil community enhances the growth of its seedlings. Understanding the effects of soil organisms on plant abundance will improve our ability to predict and counteract plant invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57620,"Herbivory, disease, recruitment limitation, and success of alien and native tree species",S198497,R57622,Indicator for enemy release,L124621,Performance,"The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis.
Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57674,The interaction between soil nutrients and leaf loss during early 14 establishment in plant invasion,S199201,R57676,Indicator for enemy release,L125217,Performance,"Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57682,"Experimental field comparison of native and non-native maple seedlings: natural enemies, ecophysiology, growth and survival",S199308,R57684,Indicator for enemy release,L125308,Performance,"1 Acer platanoides (Norway maple) is an important non‐native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first‐season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second‐season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non‐native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non‐native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life‐history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non‐native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non‐native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57803,Testing hypotheses for exotic plant success: parallel experiments in the native and introduced ranges,S200860,R57805,Indicator for enemy release,L126618,Performance,"A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57814,Remote analysis of biological invasion and the impact of enemy release,S201006,R57815,Indicator for enemy release,L126743,Performance,"Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0%, (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57907,Little evidence for release from herbivores as a driver of plant invasiveness from a multi-species herbivore-removal experiment,S202242,R57909,Indicator for enemy release,L127792,Performance,"Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R52077,Plant functional group identity and diversity determine biotic resistance to invasion by an exotic grass,S194247,R57143,Investigated species,L121640,Plants,"Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion‐resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effect would be redundant within functional group (ii) mixtures of species would be more invasion resistant than monocultures. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity–interaction models. In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast‐growing annuals, suggesting priority effect. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity–diversity effect among functional groups. In diversity–interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche pre‐emption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization.",TRUE,noun
R24,Ecology and Evolutionary Biology,R52120,Plant functional group diversity as a mechanism for invasion resistance,S196303,R57323,Investigated species,L123336,Plants,"A commonly cited mechanism for invasion resistance is more complete resource use by diverse plant assemblages with maximum niche complementarity. We investigated the invasion resistance of several plant functional groups against the nonindigenous forb Spotted knapweed (Centaurea maculosa). The study consisted of a factorial combination of seven functional group removals (groups singularly or in combination) and two C. maculosa treatments (addition vs. no addition) applied in a randomized complete block design replicated four times at each of two sites. We quantified aboveground plant material nutrient concentration and uptake (concentration × biomass) by indigenous functional groups: grasses, shallow‐rooted forbs, deep‐rooted forbs, spikemoss, and the nonindigenous invader C. maculosa. In 2001, C. maculosa density depended upon which functional groups were removed. The highest C. maculosa densities occurred where all vegetation or all forbs were removed. Centaurea maculosa densities were the lowest in plots where nothing, shallow‐rooted forbs, deep‐rooted forbs, grasses, or spikemoss were removed. Functional group biomass was also collected and analyzed for nitrogen, phosphorus, potassium, and sulphur. Based on covariate analyses, postremoval indigenous plot biomass did not relate to invasion by C. maculosa. Analysis of variance indicated that C. maculosa tissue nutrient percentage and net nutrient uptake were most similar to indigenous forb functional groups. Our study suggests that establishing and maintaining a diversity of plant functional groups within the plant community enhances resistance to invasion. Indigenous plants of functionally similar groups as an invader may be particularly important in invasion resistance.",TRUE,noun
R24,Ecology and Evolutionary Biology,R52138,The role of diversity and functional traits of species in community invasibility,S197140,R57396,Investigated species,L124027,Plants,"The invasion of exotic species into assemblages of native plants is a pervasive and widespread phenomenon. Many theoretical and observational studies suggest that diverse communities are more resistant to invasion by exotic species than less diverse ones. However, experimental results do not always support such a relationship. Therefore, the hypothesis of diversity-community invasibility is still a focus of controversy in the field of invasion ecology. In this study, we established and manipulated communities with different species diversity and different species functional groups (16 species belong to C3, C4, forbs and legumes, respectively) to test Elton's hypothesis and other relevant hypotheses by studying the process of invasion. Alligator weed (Alternanthera philoxeroides) was chosen as the invader. We found that the correlation between the decrement of extractable soil nitrogen and biomass of alligator weed was not significant, and that species diversity, independent of functional groups diversity, did not show a significant correlation with invasibility. However, the communities with higher functional groups diversity significantly reduced the biomass of alligator weed by decreasing its resource opportunity. Functional traits of species also influenced the success of the invasion. Alternanthera sessilis, in the same morphological and functional group as alligator weed, was significantly resistant to alligator weed invasion. Because community invasibility is influenced by many factors and interactions among them, the pattern and mechanisms of community invasibility are likely to be far subtler than we found in this study. More careful manipulated experiments coupled with theoretical modeling studies are essential steps to a more profound understanding of community invasibility.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53271,Darwin's naturalization hypothesis revisited,S162716,R53274,Investigated species,L98255,Plants,"In The Origin of Species, Darwin (1859) drew attention to observations by Alphonse de Candolle (1855) that floras gain by naturalization far more species belonging to new genera than species belonging to native genera. Darwin (1859, p. 86) goes on to give a specific example: “In the last edition of Dr. Asa Gray’s ‘Manual of the Flora of the United States’ ... out of the 162 naturalised genera, no less than 100 genera are not there indigenous.” Darwin used these data to support his theory of intense competition between congeners, described only a few pages earlier: “As the species of the same genus usually have, though by no means invariably, much similarity in habits and constitution, and always in structure, the struggle will generally be more severe between them” (1859, p. 60). Darwin’s intriguing observations have recently attracted renewed interest, as comprehensive lists of naturalized plants have become available for various regions of the world. Two studies (Mack 1996; Rejmanek 1996, 1998) have concluded that naturalized floras provide some support for Darwin’s hypothesis, but only one of these studies used statistical tests. Analyses of additional floras are needed to test the generality of Darwin’s naturalization hypothesis. Mack (1996) tabulated data from six regional floras within the United States and noted that naturalized species more often belong to alien genera than native genera, with the curious exception of one region (New York). In addition to the possibility of strong competition between native and introduced congeners, Mack (1996) proposed that specialist native herbivores, or pathogens, may be",TRUE,noun
R24,Ecology and Evolutionary Biology,R53282,Enemy damage of exotic plant species is similar to that of natives and increases with productivity,S201809,R57876,Investigated species,L127425,Plants,"In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource‐rich habitats as predicted by the resource–enemy release hypothesis (R‐ERH). In 72 populations of 12 exotic invasive species, we scored all visible above‐ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. In a second part of the study, we also tested the ERH and the R‐ERH by comparing damage of plants in 28 pairs of co‐occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource‐rich habitats. Co‐occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co‐occurred in nutrient‐poor or nutrient‐rich habitats. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R‐ERH cannot always explain the invasiveness of introduced species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53325,How strongly do interactions with closely-related native species influence plant invasions? Darwin's naturalization hypothesis assessed on Mediterranean islands,S163108,R53326,Investigated species,L98581,Plants,"Aim Recent works have found the presence of native congeners to have a small effect on the naturalization rates of introduced plants, some suggesting a negative interaction (as proposed by Charles Darwin in The Origin of Species), and others a positive association. We assessed this question for a new biogeographic region, and discuss some of the problems associated with data base analyses of this type.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54014,"Native jewelweed, but not other native species, displays post-invasion trait divergence",S165072,R54015,Investigated species,L100042,Plants,"Invasive exotic plants reduce the diversity of native communities by displacing native species. According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora). We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54030,"Seedling traits, plasticity and local differentiation as strategies of invasive species of Impatiens in central Europe",S165254,R54031,Investigated species,L100192,Plants,"Background and Aims Invasiveness of some alien plants is associated with their traits, plastic responses to environmental conditions and interpopulation differentiation. To obtain insights into the role of these processes in contributing to variation in performance, we compared congeneric species of Impatiens (Balsaminaceae) with different origin and invasion status that occur in central Europe. Methods Native I. noli-tangere and three alien species (highly invasive I. glandulifera, less invasive I. parviflora and potentially invasive I. capensis) were studied and their responses to simulated canopy shading and different nutrient and moisture levels were determined in terms of survival and seedling traits. Key Results and Conclusions Impatiens glandulifera produced high biomass in all the treatments and the control, exhibiting the ‘Jack-and-master’ strategy that makes it a strong competitor from germination onwards. The results suggest that plasticity and differentiation occurred in all the species tested and that along the continuum from plasticity to differentiation, the species at the plasticity end is the better invader. The most invasive species I. glandulifera appears to be highly plastic, whereas the other two less invasive species, I. parviflora and I. capensis, exhibited lower plasticity but rather strong population differentiation. The invasive Impatiens species were taller and exhibited higher plasticity and differentiation than native I. noli-tangere. This suggests that even within one genus, the relative importance of the phenomena contributing to invasiveness appears to be species'specific.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54048,"The relative importance for plant invasiveness of trait means, and their plasticity and integration in a multivariate framework",S165472,R54049,Investigated species,L100374,Plants,"Functional traits, their plasticity and their integration in a phenotype have profound impacts on plant performance. We developed structural equation models (SEMs) to evaluate their relative contribution to promote invasiveness in plants along resource gradients. We compared 20 invasive-native phylogenetically and ecologically related pairs. SEMs included one morphological (root-to-shoot ratio (R/S)) and one physiological (photosynthesis nitrogen-use efficiency (PNUE)) trait, their plasticities in response to nutrient and light variation, and phenotypic integration among 31 traits. Additionally, these components were related to two fitness estimators, biomass and survival. The relative contributions of traits, plasticity and integration were similar in invasive and native species. Trait means were more important than plasticity and integration for fitness. Invasive species showed higher fitness than natives because: they had lower R/S and higher PNUE values across gradients; their higher PNUE plasticity positively influenced biomass and thus survival; and they offset more the cases where plasticity and integration had a negative direct effect on fitness. Our results suggest that invasiveness is promoted by higher values in the fitness hierarchy--trait means are more important than trait plasticity, and plasticity is similar to integration--rather than by a specific combination of the three components of the functional strategy.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54054,Phenotypic Plasticity in the Invasion of Crofton Weed (Eupatorium adenophorum) in China,S165544,R54055,Investigated species,L100434,Plants,"Phenotypic plasticity and rapid evolution are two important strategies by which invasive species adapt to a wide range of environments and consequently are closely associated with plant invasion. To test their importance in invasion success of Crofton weed, we examined the phenotypic response and genetic variation of the weed by conducting a field investigation, common garden experiments, and intersimple sequence repeat (ISSR) marker analysis on 16 populations in China. Molecular markers revealed low genetic variation among and within the sampled populations. There were significant differences in leaf area (LA), specific leaf area (SLA), and seed number (SN) among field populations, and plasticity index (PIv) for LA, SLA, and SN were 0.62, 0.46 and 0.85, respectively. Regression analyses revealed a significant quadratic effect of latitude of population origin on LA, SLA, and SN based on field data but not on traits in the common garden experiments (greenhouse and open air). Plants from different populations showed similar reaction norms across the two common gardens for functional traits. LA, SLA, aboveground biomass, plant height at harvest, first flowering day, and life span were higher in the greenhouse than in the open-air garden, whereas SN was lower. Growth conditions (greenhouse vs. open air) and the interactions between growth condition and population origin significantly affect plant traits. The combined evidence suggests high phenotypic plasticity but low genetically based variation for functional traits of Crofton weed in the invaded range. Therefore, we suggest that phenotypic plasticity is the primary strategy for Crofton weed as an aggressive invader that can adapt to diverse environments in China.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54060,"Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations",S165611,R54061,Investigated species,L100489,Plants,"Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252–259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54064,Plastic Traits of an Exotic Grass Contribute to Its Abundance but Are Not Always Favourable,S165659,R54065,Investigated species,L100529,Plants,"In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C∶N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). In the control treatment (grazed only), trait values for SLA, leaf C∶N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment likely because leaf nutrient contents increased and subsequently its' palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses. These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54094,Multispecies comparison reveals that invasive and native plants differ in their traits but not in their plasticity,S166005,R54095,Investigated species,L100815,Plants,"Summary 1. Plastic responses to spatiotemporal environmental variation strongly influence species distribution, with widespread species expected to have high phenotypic plasticity. Theoretically, high phenotypic plasticity has been linked to plant invasiveness because it facilitates colonization and rapid spreading over large and environmentally heterogeneous new areas. 2. To determine the importance of phenotypic plasticity for plant invasiveness, we compare well-known exotic invasive species with widespread native congeners. First, we characterized the phenotype of 20 invasive–native ecologically and phylogenetically related pairs from the Mediterranean region by measuring 20 different traits involved in resource acquisition, plant competition ability and stress tolerance. Second, we estimated their plasticity across nutrient and light gradients. 3. On average, invasive species had greater capacity for carbon gain and enhanced performance over a range of limiting to saturating resource availabilities than natives. However, both groups responded to environmental variations with high albeit similar levels of trait plasticity. Therefore, contrary to the theory, the extent of phenotypic plasticity was not significantly higher for invasive plants. 4. We argue that the combination of studying mean values of a trait with its plasticity can render insightful conclusions on functional comparisons of species such as those exploring the performance of species coexisting in heterogeneous and changing environments.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54128,Functional differences in response to drought in the invasive Taraxacum officinale from native and introduced alpine habitat ranges,S166401,R54129,Investigated species,L101143,Plants,"Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54132,Elevational distribution limits of non-native species: combining observational and experimental evidence,S166450,R54133,Investigated species,L101184,Plants,"Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54136,Invasion strategies in clonal aquatic plants: are phenotypic differences caused by phenotypic plasticity or local adaptation? ,S166496,R54137,Investigated species,L101222,Plants,"BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters were examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. Inorganic carbon, nitrogen and phosphorous were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. All together this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54142,Microhabitat analysis of the invasive exotic liana Lonicera japonica Thunb.,S166567,R54143,Investigated species,L101281,Plants,"Abstract We documented microhabitat occurrence and growth of Lonicera japonica to identify factors related to its invasion into a southern Illinois shale barren. The barren was surveyed for L. japonica in June 2003, and the microhabitats of established L. japonica plants were compared to random points that sampled the range of available microhabitats in the barren. Vine and leaf characters were used as measurements of plant growth. Lonicera japonica occurred preferentially in areas of high litter cover and species richness, comparatively small trees, low PAR, low soil moisture and temperature, steep slopes, and shallow soils. Plant growth varied among these microhabitats. Among plots where L. japonica occurred, growth was related to soil and light conditions, and aspects of surrounding cover. Overhead canopy cover was a common variable associated with nearly all measured growth traits. Plasticity of traits to improve invader success can only affect the likelihood of invasion once constraints to establishment and persistence have been surmounted. Therefore, understanding where L. japonica invasion occurs, and microhabitat interactions with plant growth are important for estimating invasion success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54144,Evolution of dispersal traits along an invasion route in the wind-dispersed Senecio inaequidens (Asteraceae) ,S166588,R54145,Investigated species,L101298,Plants,"In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54172,Understanding the consequences of seed dispersal in a heterogeneous environment ,S166918,R54173,Investigated species,L101572,Plants,"Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54186,Establishment of parallel altitudinal clines in traits of native and introduced forbs,S167084,R54187,Investigated species,L101710,Plants,"Due to altered ecological and evolutionary contexts, we might expect the responses of alien plants to environmental gradients, as revealed through patterns of trait variation, to differ from those of the same species in their native range. In particular, the spread of alien plant species along such gradients might be limited by their ability to establish clinal patterns of trait variation. We investigated trends in growth and reproductive traits in natural populations of eight invasive Asteraceae forbs along altitudinal gradients in their native and introduced ranges (Valais, Switzerland, and Wallowa Mountains, Oregon, USA). Plants showed similar responses to altitude in both ranges, being generally smaller and having fewer inflorescences but larger seeds at higher altitudes. However, these trends were modified by region-specific effects that were independent of species status (native or introduced), suggesting that any differential performance of alien species in the introduced range cannot be interpreted without a fully reciprocal approach to test the basis of these differences. Furthermore, we found differences in patterns of resource allocation to capitula among species in the native and the introduced areas. These suggest that the mechanisms underlying trait variation, for example, increasing seed size with altitude, might differ between ranges. The rapid establishment of clinal patterns of trait variation in the new range indicates that the need to respond to altitudinal gradients, possibly by local adaptation, has not limited the ability of these species to invade mountain regions. Studies are now needed to test the underlying mechanisms of altitudinal clines in traits of alien species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54190,Phenotypic variability in Holcus lanatus L. in southern Chile: a strategy that enhances plant survival and pasture stability,S167130,R54191,Investigated species,L101748,Plants,"Holcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54198,Predicting invasiveness in exotic species: do subtropical native and invasive exotic aquatic plants differ in their growth responses to macronutrients?,S167226,R54199,Investigated species,L101828,Plants,"We investigated whether plasticity in growth responses to nutrients could predict invasive potential in aquatic plants by measuring the effects of nutrients on growth of eight non‐invasive native and six invasive exotic aquatic plant species. Nutrients were applied at two levels, approximating those found in urbanized and relatively undisturbed catchments, respectively. To identify systematic differences between invasive and non‐invasive species, we compared the growth responses (total biomass, root:shoot allocation, and photosynthetic surface area) of native species with those of related invasive species after 13 weeks growth. The results were used to seek evidence of invasive potential among four recently naturalized species. There was evidence that invasive species tend to accumulate more biomass than native species (P = 0.0788). Root:shoot allocation did not differ between native and invasive plant species, nor was allocation affected by nutrient addition. However, the photosynthetic surface area of invasive species tended to increase with nutrients, whereas it did not among native species (P = 0.0658). Of the four recently naturalized species, Hydrocleys nymphoides showed the same nutrient‐related plasticity in photosynthetic area displayed by known invasive species. Cyperus papyrus showed a strong reduction in photosynthetic area with increased nutrients. H. nymphoides and C. papyrus also accumulated more biomass than their native relatives. H. nymphoides possesses both of the traits we found to be associated with invasiveness, and should thus be regarded as likely to be invasive.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54200,Spreading of the invasive Carpobrotus aff. acinaciformis in Mediterranean ecosystems: The advantage of performing in different light environments,S167249,R54201,Investigated species,L101847,Plants,"ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002–2003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...",TRUE,noun
R24,Ecology and Evolutionary Biology,R54202,Photosynthesis and water-use efficiency: A comparison between invasive (exotic) and non-invasive (native) species,S167272,R54203,Investigated species,L101866,Plants,"Invasive species have been hypothesized to out-compete natives though either a Jack-of-all-trades strategy, where they are able to utilize resources effectively in unfavourable environments, a master-of-some, where resource utilization is greater than its competitors in favourable environments, or a combination of the two (Jack-and-master). We examined the invasive strategy of Berberis darwinii in New Zealand compared with four co-occurring native species by examining germination, seedling survival, photosynthetic characteristics and water-use efficiency of adult plants, in sun and shade environments. Berberis darwinii seeds germinated more in shady sites than the other natives, but survival was low. In contrast, while germination of B. darwinii was the same as the native species in sunny sites, seedling survival after 18 months was nearly twice that of the all native species. The maximum photosynthetic rate of B. darwinii was nearly double that of all native species in the sun, but was similar among all species in the shade. Other photosynthetic traits (quantum yield and stomatal conductance) did not generally differ between B. darwinii and the native species, regardless of light environment. Berberis darwinii had more positive values of δ13C than the four native species, suggesting that it gains more carbon per unit water transpired than the competing native species. These results suggest that the invasion success of B. darwinii may be partially explained by combination of a Jack-of-all-trades scenario of widespread germination with a master-of-some scenario through its ability to photosynthesize at higher rates in the sun and, hence, gain a rapid height and biomass advantage over native species in favourable environments.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54210,Contrasting plant physiological adaptation to climate in the native and introduced range of Hypericum perforatum,S167368,R54211,Investigated species,L101946,Plants,"Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf δ13C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf δ13C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf δ13C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54212,Phenotypic plasticity of native vs. invasive purple loosestrife: A two-state multivariate approach,S167390,R54213,Investigated species,L101964,Plants,"The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54214,"Phenotypic plasticity, precipitation, and invasiveness in the fire-promoting grass Pennisetum setaceum (poaceae)",S167413,R54215,Investigated species,L101983,Plants,"Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54241,Greater morphological plasticity of exotic honeysuckle species may make them better invaders than native species,S167734,R54242,Investigated species,L102250,Plants,"sempervirens L., a non-invasive native. We hypothesized that greater morphological plasticity may contribute to the ability of L. japonica to occupy more habitat types, and contribute to its invasiveness. We compared the morphology of plants provided with climbing supports with plants that had no climbing supports, and thus quantified their morphological plasticity in response to an important variable in their habitats. The two species responded differently to the treatments, with L. japonica showing greater responses in more characters. For example, Lonicera japonica responded to climbing supports with a 15.3% decrease in internode length, a doubling of internode number and a 43% increase in shoot biomass. In contrast, climbing supports did not influence internode length or shoot biomass for L. sempervirens, and only resulted in a 25% increase in internode number. This plasticity may allow L. japonica to actively place plant modules in favorable microhabitats and ultimately affect plant fitness.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54585,"The incidence of exotic species following clearfelling of Eucalyptus regnans forest in the Central Highlands, Victoria",S171830,R54587,Investigated species,L105542,Plants,"Invasion by exotic species following clearfelling of Eucalyptus regnans F. Muell. (Mountain Ash) forest was examined in the Toolangi State Forest in the Central Highlands of Victoria. Coupes ranging in age from < 1- to 10-years-old and the spar-stage forests (1939 bushfire regrowth) adjacent to each of these coupes and a mature, 250-year-old forest were surveyed. The dispersal and establishment of weeds was facilitated by clearfelling. An influx of seeds of exotic species was detected in recently felled coupes but not in the adjacent, unlogged forests. Vehicles and frequently disturbed areas, such as roadside verges, are likely sources of the seeds of exotic species. The soil seed bank of younger coupes had a greater number and percentage of seeds of exotics than the 10-year-old coupes and the spar-stage and mature forests. Exotic species were a minor component (< 1% vegetation cover) in the more recently logged coupes and were not present in 10-year-old coupes and the spar-stage and mature forests. These particular exotic species did not persist in the dense regeneration nor exist in the older forests because the weeds were ruderal species (light-demanding, short-lived and short-statured plants). The degree of influence that these particular exotic species have on the regeneration and survival of native species in E. regnans forests is almost negligible. However, the current management practices may need to be addressed to prevent a more threatening exotic species from establishing in these coupes and forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54620,A comparison of the urban flora of different phytoclimatic regions in Italy,S172233,R54621,Investigated species,L105877,Plants,"This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions. To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment. In addition, consideration is given to the minor role played by the ‘urban heat island’ in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54622,Responses of exotic plant species to fires in Pinus ponderosa forests in northern Arizona,S172271,R54624,Investigated species,L105909,Plants,". Changes in disturbance due to fire regime in southwestern Pinus ponderosa forests over the last century have led to dense forests that are threatened by widespread fire. It has been shown in other studies that a pulse of native, early-seral opportunistic species typically follow such disturbance events. With the growing importance of exotic plants in local flora, however, these exotics often fill this opportunistic role in recovery. We report the effects of fire severity on exotic plant species following three widespread fires of 1996 in northern Arizona P. ponderosa forests. Species richness and abundance of all vascular plant species, including exotics, were higher in burned than nearby unburned areas. Exotic species were far more important, in terms of cover, where fire severity was highest. Species present after wildfires include those of the pre-disturbed forest and new species that could not be predicted from above-ground flora of nearby unburned forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54630,Factors influencing dynamics of two invasive C-4 grasses in seasonally dry Hawaiian woodlands,S172353,R54631,Investigated species,L105977,Plants,"The introduced C4 bunchgrass, Schizachyrium condensatum, is abundant in unburned, seasonally dry woodlands on the island of Hawaii, where it promotes the spread of fire. After fire, it is partially replaced by Melinis minutiflora, another invasive C4 grass. Seed bank surveys in unburned woodland showed that Melinis seed is present in locations without adult plants. Using a combination of germination tests and seedling outplant ex- periments, we tested the hypothesis that Melinis was unable to invade the unburned wood- land because of nutrient and/or light limitation. We found that Melinis germination and seedling growth are depressed by the low light levels common under Schizachyrium in unburned woodland. Outplanted Melinis seedlings grew rapidly to flowering and persisted for several years in unburned woodland without nutrient additions, but only if Schizachyrium individuals were removed. Nutrients alone did not facilitate Melinis establishment. Competition between Melinis and Schizachyrium naturally occurs when individuals of both species emerge from the seed bank simultaneously, or when seedlings of one species emerge in sites already dominated by individuals of the other species. When both species are grown from seed, we found that Melinis consistently outcompetes Schizachyrium, re- gardless of light or nutrient treatments. When seeds of Melinis were added to pots with well-established Schizachyrium (and vice versa), Melinis eventually invaded and overgrew adult Schizachyrium under high, but not low, nutrients. By contrast, Schizachyrium could not invade established Melinis pots regardless of nutrient level. A field experiment dem- onstrated that Schizachyrium individuals are suppressed by Melinis in burned sites through competition for both light and nutrients. 
Overall, Melinis is a dominant competitor over Schizachyrium once it becomes established, whether in a pot or in the field. We believe that the dominance of Schizachyrium, rather than Melinis, in the unburned woodland is the result of asymmetric competition due to the prior establishment of Schizachyrium in these sites. If Schizachyrium were not present, the unburned woodland could support dense stands of Melinis. Fire disrupts the priority effect of Schizachyrium and allows the dominant competitor (Melinis) to enter the system where it eventually replaces Schizachyrium through resource competition.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54652,Plant and Small Vertebrate Composition and Diversity 36-39 Years After Root Plowing,S172622,R54653,Investigated species,L106202,Plants,"Root plowing is a common management practice to reduce woody vegetation and increase herbaceous forage for livestock on rangelands. Our objective was to test the hypotheses that four decades after sites are root plowed they have 1) lower plant species diversity, less heterogeneity, greater percent canopy cover of exotic grasses; and 2) lower abundance and diversity of amphibians, reptiles, and small mammals, compared to sites that were not disturbed by root plowing. Pairs of 4-ha sites were selected for sampling: in each pair of sites, one was root plowed in 1965 and another was not disturbed by root plowing (untreated). We estimated canopy cover of woody and herbaceous vegetation during summer 2003 and canopy cover of herbaceous vegetation during spring 2004. We trapped small mammals and herpetofauna in pitfall traps during late spring and summer 2001–2004. Species diversity and richness of woody plants were less on root-plowed than on untreated sites; however, herbaceous plant and animal species did not differ greatly between treatments. Evenness of woody vegetation was less on root-plowed sites, in part because woody legumes were more abundant. Abundance of small mammals and herpetofauna varied with annual rainfall more than it varied with root plowing. Although structural differences existed between vegetation communities, secondary succession of vegetation reestablishing after root plowing appears to be leading to convergence in plant and small animal species composition with untreated sites.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54656,Roads as conduits for exotic plant invasions in a semiarid landscape,S172684,R54658,Investigated species,L106254,Plants,"Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah (U.S.A.). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities (50 m beyond the edge of the road cut) along 42 roads stratified by level of road improvement (paved, improved surface, graded, and four‐wheel‐drive track). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great (27%) as in verges along four‐wheel‐drive tracks (9%). The cover of five common exotic forb species tended to be lower in verges along four‐wheel‐drive tracks than in verges along more improved roads. The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four‐wheel‐drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. 
Plant communities that are both physically invasible ( e.g., characterized by deep or fertile soils ) and disturbed appear most vulnerable. Decision‐makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54659,Testing life history correlates of invasiveness using congeneric plant species,S172706,R54660,Investigated species,L106272,Plants,"We used three congeneric annual thistles, which vary in their ability to invade California (USA) annual grasslands, to test whether invasiveness is related to differences in life history traits. We hypothesized that populations of these summer-flowering Centaurea species must pass through a demographic gauntlet of survival and reproduction in order to persist and that the most invasive species (C. solstitialis) might possess unique life history characteristics. Using the idea of a demographic gauntlet as a conceptual framework, we compared each congener in terms of (1) seed germination and seedling establishment, (2) survival of rosettes subjected to competition from annual grasses, (3) subsequent growth and flowering in adult plants, and (4) variation in breeding system. Grazing and soil disturbance is thought to affect Centaurea establishment, growth, and reproduction, so we also explored differences among congeners in their response to clipping and to different sizes of soil disturbance. We found minimal differences among congeners in either seed germination responses or seedling establishment and survival. In contrast, differential growth responses of congeners to different sizes of canopy gaps led to large differences in adult size and fecundity. Canopy-gap size and clipping affected the fecundity of each species, but the most invasive species (C. solstitialis) was unique in its strong positive response to combinations of clipping and canopy gaps. In addition, the phenology of C. solstitialis allows this species to extend its growing season into the summer—a time when competition from winter annual vegetation for soil water is minimal. Surprisingly, C. solstitialis was highly self-incompatible while the less invasive species were highly self-compatible. 
Our results suggest that the invasiveness of C. solstitialis arises, in part, from its combined ability to persist in competition with annual grasses and its plastic growth and reproductive responses to open, disturbed habitat patches.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54661,Invasibility and abiotic gradients: the positive correlation between native and exotic plant diversity,S195107,R57218,Investigated species,L122350,Plants,"We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m²) and small (10 m²) plot sizes, a trend that persisted when relationships to environmental gradients were controlled statistically. Both native and exotic species richness increased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and competitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54663,Fire and competition in a southern California grassland: impacts on the rare forb Erodium macrophyllum,S172752,R54664,Investigated species,L106310,Plants,"Summary 1. The use of off-season burns to control exotic vegetation shows promise for land managers. In California, wildfires tend to occur in the summer and autumn, when most grassland vegetation is dormant. The effects of spring fires on native bunchgrasses have been examined but their impacts on native forbs have received less attention. 2. We introduced Erodium macrophyllum, a rare native annual forb, by seeding plots in 10 different areas in a California grassland. We tested the hypotheses that E. macrophyllum would perform better (increased fecundity and germination) when competing with native grasses than with a mixture of exotic and native grasses, and fire would alter subsequent demography of E. macrophyllum and other species’ abundances. We monitored the demography of E. macrophyllum for two seasons in plots manually weeded so that they were free from exotics, and in areas that were burned or not burned the spring after seeding. 3. Weeding increased E. macrophyllum seedling emergence, survival and fecundity during both seasons. When vegetation was burned in June 2001 (at the end of the first growing season) to kill exotic grass seeds before they dispersed, all E. macrophyllum plants had finished their life cycle and dispersed seeds, suggesting that burns at this time of year would not directly impact on fecundity. In the growing season after burning (2002), burned plots had less recruitment of E. macrophyllum but more establishment of native grass seedlings, suggesting burning may differentially affect seedling recruitment. 4. At the end of the second growing season (June 2002), burned plots had less cover of exotic and native grasses but more cover of exotic forbs. Nevertheless, E. 
macrophyllum plants in burned plots had greater fecundity than in non-burned plots, suggesting that exotic grasses are more competitive than exotic forbs. 5. A glasshouse study showed that exotic grasses competitively suppress E. macrophyllum to a greater extent than native grasses, indicating that the poor performance of E. macrophyllum in the non-burned plots was due to exotic grass competition. 6. Synthesis and applications. This study illustrates that fire can alter the competitive environment in grasslands with differential effects on rare forbs, and that exotic grasses strongly interfere with E. macrophyllum. For land managers, the benefits of prescribed spring burns will probably outweigh the costs of decreased E. macrophyllum establishment. Land managers can use spring burns to cause a flush of native grass recruitment and to create an environment that is, although abundant with exotic forbs, ultimately less competitive compared with non-burned areas dominated by exotic grasses.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54684,Influence of fire and soil nutrients on native and non-native annuals at remnant vegetation edges in the Western Australian wheatbelt,S173001,R54685,Investigated species,L106517,Plants,"The effect of fire on annual plants was examined in two vegetation types at remnant vegetation edges in the Western Australian wheatbelt. Density and cover of non-native species were consistently greatest at the reserve edges, decreasing rapidly with increasing distance from reserve edge. Numbers of native species showed little effect of distance from reserve edge. Fire had no apparent effect on abundance of non-natives in Allocasuarina shrubland but abundance of native plants increased. Density of both non-native and native plants in Acacia acuminata-Eucalyptus loxophleba woodland decreased after fire. Fewer non-native species were found in the shrubland than in the woodland in both unburnt and burnt areas, this difference being smallest between burnt areas. Levels of soil phosphorus and nitrate were higher in burnt areas of both communities and ammonium also increased in the shrubland. Levels of soil phosphorus and nitrate were higher at the reserve edge in the unburnt shrubland, but not in the woodland. There was a strong correlation between soil phosphorus levels and abundance of non-native species in the unburnt shrubland, but not after fire or in the woodland. Removal of non-native plants in the burnt shrubland had a strong positive effect on total abundance of native plants, apparently due to increases in growth of smaller, suppressed native plants in response to decreased competition. Two native species showed increased seed production in plots where non-native plants had been removed. 
There was a general indication that, in the short term, fire does not necessarily increase invasion of these communities by non-native species and could, therefore be a useful management tool in remnant vegetation, providing other disturbances are minimised.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54686,Disturbance Facilitates Invasion: The Effects Are Stronger Abroad than at Home,S173039,R54688,Investigated species,L106549,Plants,"Disturbance is one of the most important factors promoting exotic invasion. However, if disturbance per se is sufficient to explain exotic success, then “invasion” abroad should not differ from “colonization” at home. Comparisons of the effects of disturbance on organisms in their native and introduced ranges are crucial to elucidate whether this is the case; however, such comparisons have not been conducted. We investigated the effects of disturbance on the success of Eurasian native Centaurea solstitialis in two invaded regions, California and Argentina, and one native region, Turkey, by conducting field experiments consisting of simulating different disturbances and adding locally collected C. solstitialis seeds. We also tested differences among C. solstitialis genotypes in these three regions and the effects of local soil microbes on C. solstitialis performance in greenhouse experiments. Disturbance increased C. solstitialis abundance and performance far more in nonnative ranges than in the native range, but C. solstitialis biomass and fecundity were similar among populations from all regions grown under common conditions. Eurasian soil microbes suppressed growth of C. solstitialis plants, while Californian and Argentinean soil biota did not. We suggest that escape from soil pathogens may contribute to the disproportionately powerful effect of disturbance in introduced regions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54694,Removal of nonnative vines and post-hurricane recruitment in tropical hardwood forests of Florida,S173138,R54696,Investigated species,L106632,Plants,"In hardwood subtropical forests of southern Florida, nonnative vines have been hypothesized to be detrimental, as many species form dense “vine blankets” that shroud the forest. To investigate the effects of nonnative vines in post-hurricane regeneration, we set up four large (two pairs of 30 × 60 m) study areas in each of three study sites. One of each pair was unmanaged and the other was managed by removal of nonnative plants, predominantly vines. Within these areas, we sampled vegetation in 5 × 5 m plots for stems 2 cm DBH (diameter at breast height) or greater and in 2 × 0.5 m plots for stems of all sizes. For five years, at annual censuses, we tagged and measured stems of vines, trees, shrubs and herbs in these plots. For each 5 × 5 m plot, we estimated percent coverage by individual vine species, using native and nonnative vines as classes. We investigated the hypotheses that: (1) plot coverage, occurrence and recruitment of nonnative vines were greater than that of native vines in unmanaged plots; (2) the management program was effective at reducing cover by nonnative vines; and (3) reduction of cover by nonnative vines improved recruitment of seedlings and saplings of native trees, shrubs, and herbs. In unmanaged plots, nonnative vines recruited more seedlings and had a significantly higher plot-cover index, but not a higher frequency of occurrence. Management significantly reduced cover by nonnative vines and had a significant overall positive effect on recruitment of seedlings and saplings of native trees, shrubs and herbs. Management also affected the seedling community (which included vines, trees, shrubs, and herbs) in some unanticipated ways, favoring early successional species for a longer period of time. 
The vine species with the greatest potential to “strangle” gaps were those that rapidly formed dense cover, had shade tolerant seedling recruitment, and were animal-dispersed. This suite of traits was more common in the nonnative vines than in the native vines. Our results suggest that some vines may alter the spatiotemporal pattern of recruitment sites in a forest ecosystem following a natural disturbance by creating many very shady spots very quickly.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54697,Alien Grass Invasion and Fire In the Seasonal Submontane Zone of Hawaii,S173161,R54698,Investigated species,L106651,Plants,"Island ecosystems are notably susceptible to biological invasions (Elton 1958), and the Hawaiian islands in particular have been colonized by many introduced species (Loope and Mueller-Dombois 1989). Introduced plants now dominate extensive areas of the Hawaiian Islands, and 86 species of alien plants are presently considered to pose serious threats to Hawaiian communities and ecosystems (Smith 1985). Among the most important invasive plants are several species of tropical and subtropical grasses that use the C4 photosynthetic pathway. These grasses now dominate extensive areas of dry and seasonally dry habitats in Hawai'i. They may compete with native species, and they have also been shown to alter hydrological properties in the areas they invade (MuellerDombois 1973). Most importantly, alien grasses can introduce fire into areas where it was previously rare or absent (Smith 1985), thereby altering the structure and functioning of previously native-dominated ecosystems. Many of these grasses evolved in fire-affected areas and have mechanisms for surviving and recovering rapidly from fire (Vogl 1975, Christensen 1985), while most native species in Hawai'i have little background with fire (Mueller-Dombois 1981) and hence few or no such mechanisms. Consequently, grass invasion could initiate a grass/fire cycle whereby invading grasses promote fire, which in turn favors alien grasses over native species. Such a scenario has been suggested in a number of areas, including Latin America, western North America, Australia, and Hawai'i (Parsons 1972, Smith 1985, Christensen and Burrows 1986, Mack 1986, MacDonald and Frame 1988). In most of these cases, land clearing by humans initiates colonization by alien grasses, and the grass/fire cycle then leads to their persistence. 
In Hawai'i and perhaps other areas, however, grass invasion occurs without any direct human intervention. Where such invasions initiate a grass/fire cycle",TRUE,noun
R24,Ecology and Evolutionary Biology,R54715,"Human activity facilitates altitudinal expansion of exotic plants along a road in montane grassland, South Africa",S173373,R54716,Investigated species,L106827,Plants,"Question: Do anthropogenic activities facilitate the distribution of exotic plants along steep altitudinal gradients? Location: Sani Pass road, Grassland biome, South Africa. Methods: On both sides of this road, presence and abundance of exotic plants was recorded in four 25-m long road-verge plots and in parallel 25 m × 2 m adjacent land plots, nested at five altitudinal levels: 1500, 1800, 2100, 2400 and 2700 m a.s.l. Exotic community structure was analyzed using Canonical Correspondence Analysis while a two-level nested Generalized Linear Model was fitted for richness and cover of exotics. We tested the upper altitudinal limits for all exotics along this road for spatial clustering around four potential propagule sources using a t-test. Results: Community structure, richness and abundance of exotics were negatively correlated with altitude. Greatest invasion by exotics was recorded for adjacent land at the 1500 m level. Of the 45 exotics, 16 were found at higher altitudes than expected and observations were spatially clustered around potential propagule sources. Conclusions: Spatial clustering of upper altitudinal limits around human inhabited areas suggests that exotics originate from these areas, while exceeding expected altitudinal limits suggests that distribution ranges of exotics are presently underestimated. Exotics are generally characterised by a high propagule pressure and/or persistent seedbanks, thus future tarring of the Sani Pass may result in an increase of exotic species richness and abundance. This would initially result from construction-related soil disturbance and subsequently from increased traffic, water run-off, and altered fire frequency. We suggest examples of management actions to prevent this. 
Nomenclature: Germishuizen & Meyer (2003).",TRUE,noun
R24,Ecology and Evolutionary Biology,R54722,Alien plant dynamics following fire in Mediterranean-climate California shrublands,S173476,R54724,Investigated species,L106914,Plants,"Over 75 species of alien plants were recorded during the first five years after fire in southern California shrublands, most of which were European annuals. Both cover and richness of aliens varied between years and plant association. Alien cover was lowest in the first postfire year in all plant associations and remained low during succession in chaparral but increased in sage scrub. Alien cover and richness were significantly correlated with year (time since disturbance) and with precipitation in both coastal and interior sage scrub associations. Hypothesized factors determining alien dominance were tested with structural equation modeling. Models that included nitrogen deposition and distance from the coast were not significant, but with those variables removed we obtained a significant model that gave an R² = 0.60 for the response variable of fifth year alien dominance. Factors directly affecting alien dominance were (1) woody canopy closure and (2) alien seed banks. Significant indirect effects were (3) fire intensity, (4) fire history, (5) prefire stand structure, (6) aridity, and (7) community type. According to this model the most critical factor influencing aliens is the rapid return of the shrub and subshrub canopy. Thus, in these communities a single functional type (woody plants) appears to be the most critical element controlling alien invasion and persistence. Fire history is an important indirect factor because it affects both prefire stand structure and postfire alien seed banks. Despite being fire-prone ecosystems, these shrublands are not adapted to fire per se, but rather to a particular fire regime. Alterations in the fire regime produce a very different selective environment, and high fire frequency changes the selective regime to favor aliens. 
This study does not support the widely held belief that prescription burning is a viable management practice for controlling alien species on semiarid landscapes.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54753,Are invasive species the drivers or passengers of change in degraded ecosystems?,S173829,R54754,Investigated species,L107207,Plants,"Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The 'driver' model predicts that invaded communities are highly interactive, with subordinate native species being limited or excluded by competition from the exotic dominants. The 'passenger' model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and reproduction of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. 
While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long- term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54767,"Predicting Richness of Native, Rare, and Exotic Plants in Response to Habitat and Disturbance Variables across a Variegated Landscape",S174001,R54769,Investigated species,L107349,Plants,"Species richness of native, rare native, and exotic understorey plants was recorded at 120 sites in temperate grassy vegetation in New South Wales. Linear models were used to predict the effects of environment and disturbance on the richness of each of these groups. Total native species and rare native species showed similar responses, with richness declining on sites of increasing natural fertility of parent material as well as declining under conditions of water",TRUE,noun
R24,Ecology and Evolutionary Biology,R54770,Disturbance-mediated competition and the spread of Phragmites australis in a coastal marsh,S174022,R54771,Investigated species,L107366,Plants,"In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. 
australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54801,Herbaceous layer contrast and alien plant occurrence in utility corridors and riparian forests of the Allegheny High Plateau,S174387,R54802,Investigated species,L107669,Plants,"communities by alien plant species that can adversely affect community structure and function. To determine how corridor establishment influences riparian vegetation of the Allegheny High Plateau of northwestern Pennsylvania, we compared the species composition and richness of the herbaceous layer (all vascular plants ≤ 1 m tall) of utility corridors and adjacent headwater riparian forests, and tested the hypothesis that utility corridors serve as foci for the invasion of adjacent riparian forest by alien vascular plants. We contrasted plant species richness and vegetative cover, cover by growth form, species richness and cover of alien plants and cover of microhabitat components (open soil, rock, leaf litter, log, bryophyte) in utility corridors and adjacent riparian forest at 17 sites. Cluster analysis revealed that herbaceous layer species assemblages in corridors and riparian forest were compositionally distinct. Herbaceous layer cover and species richness were significantly (P ≤ 0.05) greater in corridors than in riparian forest. Fern, graminoid, and forb species co-dominated herbaceous layer cover in corridors; fern cover dominated riparian forests. Cover of alien plants was significantly greater in corridors than in riparian forest. Alien plant species richness and cover were significantly and positively correlated with open soil, floodplain width, and active channel width in corridors but were significantly and negatively correlated with litter cover in riparian forest. 
Given that the majority of alien plant species we found in corridors were shade-intolerant and absent from riparian forests, we conclude that open utility corridors primarily serve as habitat refugia, rather than as invasion foci, for alien plant species in riparian forests of the Allegheny High Plateau.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54859,Feral sheep on Socorro Island: facilitators of alien plant colonization and ecosystem decay,S175080,R54860,Investigated species,L108246,Plants,"The paper examines the role of feral sheep (Ovis aries) in facilitating the naturalization of alien plants and degrading a formerly robust and stable ecosystem of Socorro, an isolated oceanic island in the Mexican Pacific Ocean. Approximately half of the island is still sheep‐free. The other half has been widely overgrazed and transformed into savannah and prairie‐like open habitats that exhibit sheet and gully erosion and are covered by a mix of native and alien invasive vegetation today. Vegetation transects in this moderately sheep‐impacted sector show that a significant number of native and endemic herb and shrub species exhibit sympatric distribution patterns with introduced plants. Only one alien plant species has been recorded from any undisturbed and sheep‐free island sector so far.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54956,"The vulnerability of habitats to plant invasion: disentangling the roles of propagule pressure, time and sampling effort",S175697,R54957,Investigated species,L108521,Plants,"Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. This information suggests that such habitats may be useful targets for weed surveillance and monitoring.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55002,Factors explaining alien plant invasion success in a tropical ecosystem differ at each stage of invasion,S192026,R56969,Investigated species,L119980,Plants,"1 Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2 Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3 Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy‐feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopy‐feeding animals and with greater seed mass were more likely to be established in closed forest. 4 Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5 Synthesis. This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55019,"The role of propagule pressure, genetic diversity and microsite availability for Senecio vernalis invasion",S176395,R55020,Investigated species,L109093,Plants,"Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure. To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55021,"Assessing the Relative Importance of Disturbance, Herbivory, Diversity, and Propagule Pressure in Exotic Plant Invasion",S194945,R57204,Investigated species,L122216,Plants,"The current rate of invasive species introductions is unprecedented, and the dramatic impacts of exotic invasive plants on community and ecosystem properties have been well documented. Despite the pressing management implications, the mechanisms that control exotic plant invasion remain poorly understood. Several factors, such as disturbance, propagule pressure, species diversity, and herbivory, are widely believed to play a critical role in exotic plant invasions. However, few studies have examined the relative importance of these factors, and little is known about how propagule pressure interacts with various mechanisms of ecological resistance to determine invasion success. We quantified the relative importance of canopy disturbance, propagule pressure, species diversity, and herbivory in determining exotic plant invasion in 10 eastern hemlock forests in Pennsylvania and New Jersey (USA). Use of a maximum-likelihood estimation framework and information theoretics allowed us to quantify the strength of evidence for alternative models of the influence of these factors on changes in exotic plant abundance. In addition, we developed models to determine the importance of interactions between ecosystem properties and propagule pressure. These analyses were conducted for three abundant, aggressive exotic species that represent a range of life histories: Alliaria petiolata, Berberis thunbergii, and Microstegium vimineum. Of the four hypothesized determinants of exotic plant invasion considered in this study, canopy disturbance and propagule pressure appear to be the most important predictors of A. petiolata, B. thunbergii, and M. vimineum invasion. Herbivory was also found to be important in contributing to the invasion of some species. In addition, we found compelling evidence of an important interaction between propagule pressure and canopy disturbance. This is the first study to demonstrate the dominant role of the interaction between canopy disturbance and propagule pressure in determining forest invasibility relative to other potential controlling factors. The importance of the disturbance-propagule supply interaction, and its nonlinear functional form, has profound implications for the management of exotic plant species populations. Improving our ability to predict exotic plant invasions will require enhanced understanding of the interaction between propagule pressure and ecological resistance mechanisms.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55023,The importance of quantifying propagule pressure to understand invasion: an examination of riparian forest invasibility,S176437,R55024,Investigated species,L109127,Plants,"The widely held belief that riparian communities are highly invasible to exotic plants is based primarily on comparisons of the extent of invasion in riparian and upland communities. However, because differences in the extent of invasion may simply result from variation in propagule supply among recipient environments, true comparisons of invasibility require that both invasion success and propagule pressure are quantified. In this study, we quantified propagule pressure in order to compare the invasibility of riparian and upland forests and assess the accuracy of using a community's level of invasion as a surrogate for its invasibility. We found the extent of invasion to be a poor proxy for invasibility. The higher level of invasion in the studied riparian forests resulted from greater propagule availability rather than higher invasibility. Furthermore, failure to account for propagule pressure may confound our understanding of general invasion theories. Ecological theory suggests that species-rich communities should be less invasible. However, we found significant relationships between species diversity and invasion extent, but no diversity-invasibility relationship was detected for any species. Our results demonstrate that using a community's level of invasion as a surrogate for its invasibility can confound our understanding of invasibility and its determinants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55048,Modeling Invasive Plant Spread: The Role of Plant-Environment Interactions and Model Structure,S176722,R55049,Investigated species,L109362,Plants,"Alien plants invade many ecosystems worldwide and often have substantial negative effects on ecosystem structure and functioning. Our ability to quantitatively predict these impacts is, in part, limited by the absence of suitable plant-spread models and by inadequate parameter estimates for such models. This paper explores the effects of model, plant, and environmental attributes on predicted rates and patterns of spread of alien pine trees (Pinus spp.) in South African fynbos (a mediterranean-type shrubland). A factorial experimental design was used to: (1) compare the predictions of a simple reaction-diffusion model and a spatially explicit, individual-based simulation model; (2) investigate the sensitivity of predicted rates and patterns of spread to parameter values; and (3) quantify the effects of the simulation model's spatial grain on its predictions. The results show that the spatial simulation model places greater emphasis on interactions among ecological processes than does the reaction-diffusion model. This ensures that the predictions of the two models differ substantially for some factor combinations. The most important factor in the model is dispersal ability. Fire frequency, fecundity, and age of reproductive maturity are less important, while adult mortality has little effect on the model's predictions. The simulation model's predictions are sensitive to the model's spatial grain. This suggests that simulation models that use matrices as a spatial framework should ensure that the spatial grain of the model is compatible with the spatial processes being modeled. We conclude that parameter estimation and model development must be integrated procedures. This will ensure that the model's structure is compatible with the biological processes being modeled. Failure to do so may result in spurious predictions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55050,ECOLOGICAL RESISTANCE TO BIOLOGICAL INVASION OVERWHELMED BY PROPAGULE PRESSURE,S197026,R57386,Investigated species,L123933,Plants,"Models and observational studies have sought patterns of predictability for invasion of natural areas by nonindigenous species, but with limited success. In a field experiment using forest understory plants, we jointly manipulated three hypothesized determinants of biological invasion outcome: resident diversity, physical disturbance and abiotic conditions, and propagule pressure. The foremost constraints on net habitat invasibility were the number of propagules that arrived at a site and naturally varying resident plant density. The physical environment (flooding regime) and the number of established resident species had negligible impact on habitat invasibility as compared to propagule pressure, despite manipulations that forced a significant reduction in resident richness, and a gradient in flooding from no flooding to annual flooding. This is the first experimental study to demonstrate the primacy of propagule pressure as a determinant of habitat invasibility in comparison with other candidate controlling factors.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55059,Reproductive potential and seedling establishment of the invasive alien tree Schinus molle (Anacardiaceae) in South Africa,S176846,R55060,Investigated species,L109464,Plants,"Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864–1 233 690 seeds per tree per year; 3877–9477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortillis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure to drive the rate of recruitment: densities of seedlings and sapling densities were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55090,Dispersal and recruitment limitation in native versus exotic tree species: life-history strategies and Janzen-Connell effects,S177203,R55091,Investigated species,L109759,Plants,"Life-history traits of invasive exotic plants are typically considered to be exceptional vis-a-vis native species. In particular, hyper-fecundity and long range dispersal are regarded as invasive traits, but direct comparisons with native species are needed to identify the life-history stages behind invasiveness. Until recently, this task was particularly problematic in forests as tree fecundity and dispersal were difficult to characterize in closed stands. We used inverse modelling to parameterize fecundity, seed dispersal and seedling dispersion functions for two exotic and eight native tree species in closed-canopy forests in Connecticut, USA. Interannual variation in seed production was dramatic for all species, with complete seed crop failures in at least one year for six native species. However, the average per capita seed production of the exotic Ailanthus altissima was extraordinary: 40 times higher than the next highest species. Seed production of the shade tolerant exotic Acer platanoides was average, but much higher than the native shade tolerant species, and the density of its established seedlings (≥ 3 years) was higher than any other species. Overall, the data supported a model in which adults of native and exotic species must reach a minimum size before seed production occurred. Once reached, the relationship between tree diameter and seed production was fairly flat for seven species, including both exotics. Seed dispersal was highly localized and usually showed a steep decline with increasing distance from parent trees: only Ailanthus altissima and Fraxinus americana had mean dispersal distances > 10 m. Janzen-Connell patterns were clearly evident for both native and exotic species, as the mode and mean dispersion distance of seedlings were further from potential parent trees than seeds. The comparable intensity of Janzen-Connell effects between native and exotic species suggests that the enemy escape hypothesis alone cannot explain the invasiveness of these exotics. Our study confirms the general importance of colonization processes in invasions, yet demonstrates how invasiveness can occur via divergent colonization strategies. Dispersal limitation of Acer platanoides and recruitment limitation of Ailanthus altissima will likely constitute some limit on their invasiveness in closed-canopy forests.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55099,Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna,S187767,R56605,Investigated species,L116653,Plants,"1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird‐dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy‐fruited species than indigenous species, we mapped the distribution of alien fleshy‐fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy‐fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy‐fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy‐fruited plants were found both beneath host trees and in the open, alien fleshy‐fruited plants were found only beneath trees. 4 Abundance of fleshy‐fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy‐fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy‐fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy‐fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi‐arid African savanna by alien fleshy‐fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem‐level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy‐fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55114,Propagule pressure hypothesis not supported by an 80-year experiment on woody species invasion,S177487,R55116,Investigated species,L109986,Plants,"Ecological filters and availability of propagules play key roles structuring natural communities. Propagule pressure has recently been suggested to be a fundamental factor explaining the success or failure of biological introductions. We tested this hypothesis with a remarkable data set on trees introduced to Isla Victoria, Nahuel Huapi National Park, Argentina. More than 130 species of woody plants, many known to be highly invasive elsewhere, were introduced to this island early in the 20th century, as part of an experiment to test their suitability as commercial forestry trees for this region. We obtained detailed data on three estimates of propagule pressure (number of introduced individuals, number of areas where introduced, and number of years during which the species was planted) for 18 exotic woody species. We matched these data with a survey of the species and number of individuals currently invading the island. None of the three estimates of propagule pressure predicted the current pattern of invasion. We suggest that other factors, such as biotic resistance, may be operating to determine the observed pattern of invasion, and that propagule pressure may play a relatively minor role in explaining at least some observed patterns of invasion success and failure.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55120,"Planting intensity, residence time, and species traits determine invasion success of alien woody species",S177541,R55121,Investigated species,L110030,Plants,"We studied the relative importance of residence time, propagule pressure, and species traits in three stages of invasion of alien woody plants cultivated for about 150 years in the Czech Republic, Central Europe. The probability of escape from cultivation, naturalization, and invasion was assessed using classification trees. We compared 109 escaped-not-escaped congeneric pairs, 44 naturalized-not-naturalized, and 17 invasive-not-invasive congeneric pairs. We used the following predictors of the above probabilities: date of introduction to the target region as a measure of residence time; intensity of planting in the target area as a proxy for propagule pressure; the area of origin; and 21 species-specific biological and ecological traits. The misclassification rates of the naturalization and invasion model were low, at 19.3% and 11.8%, respectively, indicating that the variables used included the major determinants of these processes. The probability of escape increased with residence time in the Czech Republic, whereas the probability of naturalization increased with the residence time in Europe. This indicates that some species were already adapted to local conditions when introduced to the Czech Republic. Apart from residence time, the probability of escape depends on planting intensity (propagule pressure), and that of naturalization on the area of origin and fruit size; it is lower for species from Asia and those with small fruits. The probability of invasion is determined by a long residence time and the ability to tolerate low temperatures. These results indicate that a simple suite of factors determines, with a high probability, the invasion success of alien woody plants, and that the relative role of biological traits and other factors is stage dependent. High levels of propagule pressure as a result of planting lead to woody species eventually escaping from cultivation, regardless of biological traits. However, the biological traits play a role in later stages of invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55129,Propagule pressure and resource availability determine plant community invasibility in a temperate forest understorey,S177652,R55131,Investigated species,L110121,Plants,"Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m 2 sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55139,"Habitat, dispersal and propagule pressure control exotic plant infilling within an invaded range",S177753,R55140,Investigated species,L110204,Plants,"Deep in the heart of a longstanding invasion, an exotic grass is still invading. Range infilling potentially has the greatest impact on native communities and ecosystem processes, but receives much less attention than range expansion. ‘Snapshot' studies of invasive plant dispersal, habitat and propagule limitations cannot determine whether a landscape is saturated or whether a species is actively infilling empty patches. We investigate the mechanisms underlying invasive plant infilling by tracking the localized movement and expansion of Microstegium vimineum populations from 2009 to 2011 at sites along a 100-km regional gradient in eastern U.S. deciduous forests. We find that infilling proceeds most rapidly where the invasive plants occur in warm, moist habitats adjacent to roads: under these conditions they produce copious seed, the dispersal distances of which increase exponentially with proximity to roadway. Invasion then appears limited where conditions are generally dry and cool as propagule pressure tapers off. Invasion also is limited in habitats >1 m from road corridors, where dispersal distances decline precipitously. In contrast to propagule and dispersal limitations, we find little evidence that infilling is habitat limited, meaning that as long as M. vimineum seeds are available and transported, the plant generally invades quite vigorously. Our results suggest an invasive species continues to spread, in a stratified manner, within the invaded landscape long after first arriving. These dynamics conflict with traditional invasion models that emphasize an invasive edge with distinct boundaries. We find that propagule pressure and dispersal regulate infilling, providing the basis for projecting spread and landscape coverage, ecological effects and the efficacy of containment strategies.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56087,Invasibility of tropical islands by introduced plants: partitioning the influence of isolation and propagule pressure,S194596,R57173,Investigated species,L121929,Plants,"All else being equal, more isolated islands should be more susceptible to invasion because their native species are derived from a smaller pool of colonists, and isolated islands may be missing key functional groups. Although some analyses seem to support this hypothesis, previous studies have not taken into account differences in the number of plant introductions made to different islands, which will affect invasibility estimates. Furthermore, previous studies have not assessed invasibility in terms of the rates at which introduced plant species attain different degrees of invasion or naturalization. I compared the naturalization status of introduced plants on two pairs of Pacific island groups that are similar in most respects but that differ in their distances from a mainland. Then, to factor out differences in propagule pressure due to differing numbers of introductions, I compared the naturalization status only among shared introductions. In the first comparison, Hawai‘i (3700 km from a mainland) had three times more casual/weakly naturalized, naturalized and pest species than Taiwan (160 km from a mainland); however, roughly half (54%) of this difference can be attributed to a larger number of plant introductions to Hawai‘i. In the second comparison, Fiji (2500 km from a mainland) did not differ in susceptibility to invasion in comparison to New Caledonia (1000 km from a mainland); the latter two island groups appear to have experienced roughly similar propagule pressure, and they have similar invasibility. The rate at which naturalized species have become pests is similar for Hawai‘i and other island groups. The higher susceptibility of Hawai‘i to invasion is related to more species entering the earliest stages in the invasion process (more casual and weakly naturalized species), and these higher numbers are then maintained in the naturalized and pest pools. The number of indigenous (not endemic) species was significantly correlated with susceptibility to invasion across all four island groups. When islands share similar climates and habitat diversity, the number of indigenous species may be a better predictor of invasibility than indices of physical isolation because it is a composite measure of biological isolation.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56108,Are island plant communities more invaded than their mainland counterparts?,S183003,R56109,Investigated species,L112485,Plants,"Questions: Are island vegetation communities more invaded than their mainland counterparts? Is this pattern consistent among community types? Location: The coastal provinces of Catalonia and the para-oceanic Balearic Islands, both in NE Spain. These islands were connected to the continent more than 5.35 million years ago and are now located <200 km from the coast. Methods: We compiled a database of almost 3000 phytosociological relevés from the Balearic Islands and Catalonia and compared the level of invasion by alien plants in island versus mainland communities. Twenty distinct plant community types were compared between island and mainland counterparts. Results: The percentage of plots with alien species, number, percentage and cover percentage of alien species per plot was greater in Catalonia than in the Balearic Islands in most communities. Overall, across communities, more alien species were found in the mainland (53) compared to the islands (only nine). Despite these differences, patterns of the level of invasion in communities were highly consistent between the islands and mainland. The most invaded communities were ruderal and riparian. Main conclusion: Our results indicate that para-oceanic island communities such as the Balearic Islands are less invaded than their mainland counterparts. This difference reflects a smaller regional alien species pool in the Balearic Islands than in the adjacent mainland, probably due to differences in landscape heterogeneity and propagule pressure. Keywords: alien plants; Balearic Islands; community similarity; Mediterranean communities; para-oceanic islands; relevé; species richness. Nomenclature: Bolòs & Vigo (1984–2001), Rivas-Martinez et al. (2001).",TRUE,noun
R24,Ecology and Evolutionary Biology,R56638,Positive interactions among plant species for pollinator service: assessing the 'magnet species' concept with invasive species,S188159,R56639,Investigated species,L116977,Plants,"Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so called ‘magnet-species’). Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction results in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when grown associated with shrubs of the invasive Lupinus arboreus and when grown alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an ‘invasional meltdown’. The magnet effect of Lupinus on Carduus found in this study seems to be one of the first examples of indirect facilitative interactions via increased pollination among invasive species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56760,Facilitation and competition among invasive plants: A field experiment with alligatorweed and water hyacinth,S189526,R56761,Investigated species,L118100,Plants,"Ecosystems that are heavily invaded by an exotic species often contain abundant populations of other invasive species. This may reflect shared responses to a common factor, but may also reflect positive interactions among these exotic species. Armand Bayou (Pasadena, TX) is one such ecosystem where multiple species of invasive aquatic plants are common. We used this system to investigate whether presence of one exotic species made subsequent invasions by other exotic species more likely, less likely, or if it had no effect. We performed an experiment in which we selectively removed exotic rooted and/or floating aquatic plant species and tracked subsequent colonization and growth of native and invasive species. This allowed us to quantify how presence or absence of one plant functional group influenced the likelihood of successful invasion by members of the other functional group. We found that presence of alligatorweed (rooted plant) decreased establishment of new water hyacinth (free-floating plant) patches but increased growth of hyacinth in established patches, with an overall net positive effect on success of water hyacinth. Water hyacinth presence had no effect on establishment of alligatorweed but decreased growth of existing alligatorweed patches, with an overall net negative effect on success of alligatorweed. Moreover, observational data showed positive correlations between hyacinth and alligatorweed with hyacinth, on average, more abundant. The negative effect of hyacinth on alligatorweed growth implies competition, not strong mutual facilitation (invasional meltdown), is occurring in this system. Removal of hyacinth may increase alligatorweed invasion through release from competition. 
However, removal of alligatorweed may have more complex effects on hyacinth patch dynamics because there were strong opposing effects on establishment versus growth. The mix of positive and negative interactions between floating and rooted aquatic plants may influence local population dynamics of each group and thus overall invasion pressure in this watershed.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56803,Experimental evidence for indirect facilitation among invasive plants,S190006,R56804,Investigated species,L118494,Plants,"Facilitation among species may promote non‐native plant invasions through alteration of environmental conditions, enemies or mutualists. However, the role of non‐trophic indirect facilitation in invasions has rarely been examined. We used a long‐term field experiment to test for indirect facilitation by invasions of Microstegium vimineum (stiltgrass) on a secondary invasion of Alliaria petiolata (garlic mustard) by introducing Alliaria seed into replicated plots previously invaded experimentally by Microstegium. Alliaria more readily colonized control plots without Microstegium but produced almost seven times more biomass and nearly four times as many siliques per plant in Microstegium‐invaded plots. Improved performance of Alliaria in Microstegium‐invaded plots compared to control plots overwhelmed differences in total number of plants such that, on average, invaded plots contained 327% greater total Alliaria biomass and 234% more total siliques compared to control plots. The facilitation of Alliaria in Microstegium‐invaded plots was associated with an 85% reduction in the biomass of resident species at the peak of the growing season and significantly greater light availability in Microstegium‐invaded than control plots early in the growing season. Synthesis. Our results demonstrate that an initial plant invasion associated with suppression of resident species and increased resource availability can facilitate a secondary plant invasion. Such positive interactions among species with similar habitat requirements, but offset phenologies, may exacerbate invasions and their impacts on native ecosystems.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56845,Plant community associations of two invasive thistles,S190473,R56846,Investigated species,L118877,Plants,"We assessed the field-scale plant community associations of Carduus nutans and C. acanthoides, two similar, economically important invasive thistles. Several plant species were associated with the presence of Carduus thistles while others, including an important pasture species, were associated with Carduus-free areas. Thus, even within fields, areas invaded by Carduus thistles have different vegetation than uninvaded areas, either because some plants can resist invasion or because invasion changes the local plant community. Our results will allow us to target future research about the role of vegetation structure in resisting and responding to invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56899,Biodiversity effects and rates of spread of nonnative eucalypt woodlands in central California,S191073,R56900,Investigated species,L119369,Plants,"Woodlands comprised of planted, nonnative trees are increasing in extent globally, while native woodlands continue to decline due to human activities. The ecological impacts of planted woodlands may include changes to the communities of understory plants and animals found among these nonnative trees relative to native woodlands, as well as invasion of adjacent habitat areas through spread beyond the originally planted areas. Eucalypts (Eucalyptus spp.) are among the most widely planted trees worldwide, and are very common in California, USA. The goals of our investigation were to compare the biological communities of nonnative eucalypt woodlands to native oak woodlands in coastal central California, and to examine whether planted eucalypt groves have increased in size over the past decades. We assessed site and habitat attributes and characterized biological communities using understory plant, ground-dwelling arthropod, amphibian, and bird communities as indicators. Degree of difference between native and nonnative woodlands depended on the indicator used. Eucalypts had significantly greater canopy height and cover, and significantly lower cover by perennial plants and species richness of arthropods than oaks. Community composition of arthropods also differed significantly between eucalypts and oaks. Eucalypts had marginally significantly deeper litter depth, lower abundance of native plants with ranges limited to western North America, and lower abundance of amphibians. In contrast to these differences, eucalypt and oak groves had very similar bird community composition, species richness, and abundance. We found no evidence of ""invasional meltdown,"" documenting similar abundance and richness of nonnatives in eucalypt vs. oak woodlands. 
Our time-series analysis revealed that planted eucalypt groves increased 271% in size, on average, over six decades, invading adjacent areas. Our results inform science-based management of California woodlands, revealing that while bird communities would probably not be affected by restoration of eucalypt to oak woodlands, such a restoration project would not only stop the spread of eucalypts into adjacent habitats but would also enhance cover by western North American native plants and perennials, enhance amphibian abundance, and increase arthropod richness.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56990,Alien aquatic plant species in European countries,S192290,R56991,Investigated species,L120200,Plants,"Hussner A (2012). Alien aquatic plant species in European countries. Weed Research 52, 297–306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57010,"Alien flora of Europe: species diversity, temporal trends, geographical patterns and research needs",S192598,R57015,Investigated species,L120460,Plants,"The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004–2008), funded by the 6th Framework Programme of the European Union and aimed at “creating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments”. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964–1980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). 
The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. 
Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprising the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates the state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. 
The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57035,Marketing time predicts naturalization of horticultural plants,S192836,R57036,Investigated species,L120656,Plants,"Horticulture is an important source of naturalized plants, but our knowledge about naturalization frequencies and potential patterns of naturalization in horticultural plants is limited. We analyzed a unique set of data derived from the detailed sales catalogs (1887-1930) of the most important early Florida, USA, plant nursery (Royal Palm Nursery) to detect naturalization patterns of these horticultural plants in the state. Of the 1903 nonnative species sold by the nursery, 15% naturalized. The probability of plants becoming naturalized increases significantly with the number of years the plants were marketed. Plants that became invasive and naturalized were sold for an average of 19.6 and 14.8 years, respectively, compared to 6.8 years for non-naturalized plants, and the naturalization of plants sold for 30 years or more is 70%. Unexpectedly, plants that were sold earlier were less likely to naturalize than those sold later. The nursery's inexperience, which caused them to grow and market many plants unsuited to Florida during their early period, may account for this pattern. Plants with pantropical distributions and those native to both Africa and Asia were more likely to naturalize (42%), than were plants native to other smaller regions, suggesting that plants with large native ranges were more likely to naturalize. Naturalization percentages also differed according to plant life form, with the most naturalization occurring in aquatic herbs (36.8%) and vines (30.8%). Plants belonging to the families Araceae, Apocynaceae, Convolvulaceae, Moraceae, Oleaceae, and Verbenaceae had higher than expected naturalization. Information theoretic model selection indicated that the number of years a plant was sold, alone or together with the first year a plant was sold, was the strongest predictor of naturalization. 
Because continued importation and marketing of nonnative horticultural plants will lead to additional plant naturalization and invasion, a comprehensive approach to address this problem, including research to identify and select noninvasive forms and types of horticultural plants, is urgently needed.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57057,Predicting the Australian weed status of southern African plants,S193085,R57058,Investigated species,L120861,Plants,"A method of predicting weed status was developed for southern African plants naturalized in Australia, based upon information on extra-Australian weed status, distribution and taxonomy. Weed status in Australia was associated with being geographically widespread in southern Africa, being found in a wide range of climates in southern Africa, being described as a weed or targeted by herbicides in southern Africa, with early introduction and establishment in Australia, and with weediness in regions other than southern Africa. Multiple logistic regressions were used to identify the variables that best predicted weed status. The best fitting regressions were for weeds present for a long time in Australia (more than 140 years). They utilized three variables, namely weed status, climatic range in southern Africa and the existence of congeneric weeds in southern Africa. The highest level of variation explained (43%) was obtained for agricultural weeds using a single variable, weed status in southern Africa. Being recorded as a weed in Australia was related to climatic range and the existence of congeneric weeds in southern Africa (40% of variation explained). No variables were suitable predictors of non-agricultural (environmental) weeds. The regressions were used to predict future weed status of plants either not introduced or recently arrived in Australia. Recently-arrived species which were predicted to become weeds are Acacia karroo Hayne (Mimosaceae), Arctotis venustra T. Norl. (Asteraceae), Sisymbrium thellungii O.E. Schulz (Brassicaceae) and Solanum retroflexum Dun. (Solanaceae). Twenty species not yet arrived in Australia were predicted to have a high likelihood of becoming weeds. 
Analysis of the residuals of the regressions indicated two long-established species which might prove to be good targets for biological control: Mesembryanthemum crystallinum L. (Aizoaceae) and Watsonia meriana (L.) Mill. (Iridaceae).",TRUE,noun
R24,Ecology and Evolutionary Biology,R57075,"How well do we understand the impacts of alien species on ecosystem services? A pan-European, cross-taxa assessment",S193296,R57076,Investigated species,L121036,Plants,"Recent comprehensive data provided through the DAISIE project (www.europe-aliens.org) have facilitated the development of the first pan-European assessment of the impacts of alien plants, vertebrates, and invertebrates – in terrestrial, freshwater, and marine environments – on ecosystem services. There are 1094 species with documented ecological impacts and 1347 with economic impacts. The two taxonomic groups with the most species causing impacts are terrestrial invertebrates and terrestrial plants. The North Sea is the maritime region that suffers the most impacts. Across taxa and regions, ecological and economic impacts are highly correlated. Terrestrial invertebrates create greater economic impacts than ecological impacts, while the reverse is true for terrestrial plants. Alien species from all taxonomic groups affect “supporting”, “provisioning”, “regulating”, and “cultural” services and interfere with human well-being. Terrestrial vertebrates are responsible for the greatest range of impacts, and these are widely distributed across Europe. Here, we present a review of the financial costs, as the first step toward calculating an estimate of the economic consequences of alien species in Europe.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57081,Plant introductions in Australia: how can we resolve ‘weedy’ conflicts of interest?,S193368,R57082,Investigated species,L121096,Plants,"Over 27,000 exotic plant species have been introduced to Australia, predominantly for use in gardening, agriculture and forestry. Less than 1% of such introductions have been solely accidental. Plant introductions also occur within Australia, as exotic and native species are moved across the country. Plant-based industries contribute around $50 billion to Australia’s economy each year, play a significant social role and can also provide environmental benefits such as mitigating dryland salinity. However, one of the downsides of a new plant introduction is the potential to become a new weed. Overall, 10% of exotic plant species introduced since European settlement have naturalised, but this rate is higher for agricultural and forestry plants. Exotic plant species have become agricultural, noxious and natural ecosystem weeds at rates of 4%, 1% and 7% respectively. Whilst garden plants have the lowest probability of becoming weeds this is more than compensated by their vast numbers of introductions, such that gardening is the greatest source of weeds in Australia. Resolving conflicts of interest with plant introductions needs a collaborative effort between those stakeholders who would benefit (i.e. grow the plant) and those who would potentially lose (i.e. gain a weed) to compare the weed risk, feasibility of management and benefits of the species in question. For proposed plant imports to Australia, weed risk is presently the single consideration under international trade rules. Hence the focus is on ensuring the optimal performance of the border Weed Risk Assessment System. For plant species already present in Australia there are inconsistencies in managing weed risk between the States/Territories. 
This is being addressed with the development of a national standard for weed risk management. For agricultural and forestry species of high economic value but significant weed risk, the feasibility of standard risk management approaches needs to be investigated. Invasive garden plants need national action.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57107,Loss of native herbaceous species due to woody plant encroachment facilitates the establishment of an invasive grass,S193856,R57108,Investigated species,L121319,Plants,"Although negative relationships between diversity (frequently measured as species richness) and invasibility at neighborhood or community scales have often been reported, realistic natural diversity gradients have rarely been studied at this scale. We recreated a naturally occurring gradient in species richness to test the effects of species richness on community invasibility. In central Texas savannas, as the proportion of woody plants increases (a process known as woody plant encroachment), herbaceous habitat is both lost and fragmented, and native herbaceous species richness declines. We examined the effects of these species losses on invasibility in situ by removing species that occur less frequently in herbaceous patches as woody plant encroachment advances. This realistic species removal was accompanied by a parallel and equivalent removal of biomass with no changes in species richness. Over two springs, the nonnative bunchgrass Bothriochloa ischaemum germinated significantly more often in the biomass-removal treatment than in unmanipulated control plots, suggesting an effect of native plant density independent of diversity. Additionally, significantly more germination occurred in the species-removal treatment than in the biomass-removal treatment. Changes in species richness had a stronger effect on B. ischaemum germination than changes in plant density, demonstrating that niche-related processes contributed more to biotic resistance in this system than did species-neutral competitive interactions. Similar treatment effects were found on transplant growth. Thus we show that woody plant encroachment indirectly facilitates the establishment of an invasive grass by reducing native diversity. 
Although we found a negative relationship between species richness and invasibility at the scale of plots with similar composition and environmental conditions, we found a positive relationship between species richness and invasibility at larger scales. This apparent paradox is consistent with reports from other systems and may be the result of variation in environmental factors at larger scales similarly influencing both invasibility and richness. The habitat loss and fragmentation associated with woody plant encroachment are two of many processes that commonly threaten biodiversity, including climate change. Many of these processes are similarly likely to increase invasibility via their negative effects on native diversity.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57124,Diversity-invasibility across an experimental disturbance gradient in Appalachian forests,S194047,R57125,Investigated species,L121476,Plants,"Research examining the relationship between community diversity and invasions by nonnative species has raised new questions about the theory and management of biological invasions. Ecological theory predicts, and small-scale experiments confirm, lower levels of nonnative species invasion into species-rich compared to species-poor communities, but observational studies across a wider range of scales often report positive relationships between native and nonnative species richness. This paradox has been attributed to the scale dependency of diversity-invasibility relationships and to differences between experimental and observational studies. Disturbance is widely recognized as an important factor determining invasibility of communities, but few studies have investigated the relative and interactive roles of diversity and disturbance on nonnative species invasion. Here, we report how the relationship between native and nonnative plant species richness responded to an experimentally applied disturbance gradient (from no disturbance up to clearcut) in oak-dominated forests. We consider whether results are consistent with various explanations of diversity-invasibility relationships including biotic resistance, resource availability, and the potential effects of scale (1 m2 to 2 ha). We found no correlation between native and nonnative species richness before disturbance except at the largest spatial scale, but a positive relationship after disturbance across scales and levels of disturbance. Post-disturbance richness of both native and nonnative species was positively correlated with disturbance intensity and with variability of residual basal area of trees. 
These results suggest that more nonnative plants may invade species-rich communities compared to species-poor communities following disturbance.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57137,Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density,S194187,R57138,Investigated species,L121590,Plants,"Brown, R. L. and Fridley, J. D. 2003. Control of plant species diversity andcommunity invasibility by species immigration: seed richness versus seed density. –Oikos 102: 15–24.Immigration rates of species into communities are widely understood to influencecommunity diversity, which in turn is widely expected to influence the susceptibilityof ecosystems to species invasion. For a given community, however, immigrationprocesses may impact diversity by means of two separable components: the numberof species represented in seed inputs and the density of seed per species. Theindependent effects of these components on plant species diversity and consequentrates of invasion are poorly understood. We constructed experimental plant commu-nities through repeated seed additions to independently measure the effects of seedrichness and seed density on the trajectory of species diversity during the develop-ment of annual plant communities. Because we sowed species not found in theimmediate study area, we were able to assess the invasibility of the resultingcommunities by recording the rate of establishment of species from adjacent vegeta-tion. Early in community development when species only weakly interacted, seedrichness had a strong effect on community diversity whereas seed density had littleeffect. After the plants became established, the effect of seed richness on measureddiversity strongly depended on seed density, and disappeared at the highest level ofseed density. The ability of surrounding vegetation to invade the experimentalcommunities was decreased by seed density but not by seed richness, primarilybecause the individual effects of a few sown species could explain the observedinvasion rates. 
These results suggest that seed density is just as important as seed richness in the control of species diversity, and perhaps a more important determinant of community invasibility than seed richness in dynamic plant assemblages.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57146,Aquatic plant community invasibility and scale-dependent patterns in native and invasive species richness,S194291,R57147,Investigated species,L121676,Plants,"Invasive species richness often is negatively correlated with native species richness at the small spatial scale of sampling plots, but positively correlated in larger areas. The pattern at small scales has been interpreted as evidence that native plants can competitively exclude invasive species. Large-scale patterns have been understood to result from environmental heterogeneity, among other causes. We investigated species richness patterns among submerged and floating-leaved aquatic plants (87 native species and eight invasives) in 103 temperate lakes in Connecticut (northeastern USA) and found neither a consistently negative relationship at small (3-m2) scales, nor a positive relationship at large scales. Native species richness at sampling locations was uncorrelated with invasive species richness in 37 of the 60 lakes where invasive plants occurred; richness was negatively correlated in 16 lakes and positively correlated in seven. No correlation between native and invasive species richness was found at larger spatial scales (whole lakes and counties). Increases in richness with area were uncorrelated with abiotic heterogeneity. Logistic regression showed that the probability of occurrence of five invasive species increased in sampling locations (3 m2, n = 2980 samples) where native plants occurred, indicating that native plant species richness provided no resistance against invasion. However, the probability of three invasive species' occurrence declined as native plant density increased, indicating that density, if not species richness, provided some resistance with these species. Density had no effect on occurrence of three other invasive species. 
Based on these results, native species may resist invasion at small spatial scales only in communities where density is high (i.e., in communities where competition among individuals contributes to community structure). Most hydrophyte communities, however, appear to be maintained in a nonequilibrial condition by stress and/or disturbance. Therefore, most aquatic plant communities in temperate lakes are likely to be vulnerable to invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57165,Species richness and exotic species invasion in middle Tennessee cedar glades in relation to abiotic and biotic factors,S194512,R57166,Investigated species,L121859,Plants,"Abstract Abiotic factors, particularly area, and biotic factors play important roles in determining species richness of continental islands such as cedar glades. We examined the relationship between environmental parameters and species richness on glades and the influence of native species richness on exotic invasion. Field surveys of vascular plants on 40 cedar glades in Rutherford County, Tennessee were conducted during the 2001–2003 growing seasons. Glades were geo-referenced to obtain area, perimeter, distance from autotour road, and degree of isolation. Amount of disturbance also was recorded. Two-hundred thirty two taxa were found with Andropogon virginicus, Croton monanthogynus, Juniperus virginiana, Panicum flexile, and Ulmus alata present on all glades. The exotics Ligustrum sinense, Leucanthemum vulgare, and Taraxacum officinale occurred on the majority of glades. Lobelia appendiculata var. gattingeri, Leavenworthia stylosa, and Pediomelum subacaule were the most frequent endemics. Richness of native, exotic and endemic species increased with increasing area and perimeter and decreased with increasing isolation (P ≤ 0.03); richness was unrelated to distance to road (P ≥ 0.20). Perimeter explained a greater amount of variation than area for native and exotic species, whereas area accounted for greater variation for endemic species. Slope of the relationship between area and total richness (0.17) was within the range reported for continental islands. Disturbed glades contained a higher number of exotic and native species than nondisturbed ones, but they were larger (P ≤ 0.03). Invasion of exotic species was unrelated to native species richness when glade size was statistically controlled (P = 0.88). 
Absence of a relationship is probably due to a lack of substantial competitive interactions. Most endemics occurred over a broad range of glade sizes emphasizing the point that glades of all sizes are worthy of protection.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57179,Spatial heterogeneity explains the scale dependence of the native-exotic diversity relationship,S194678,R57180,Investigated species,L121997,Plants,"While small-scale studies show that more diverse native communities are less invasible by exotics, studies at large spatial scales often find positive correlations between native and exotic diversity. This large-scale pattern is thought to arise because landscapes with favorable conditions for native species also have favorable conditions for exotic species. From theory, we proposed an alternative hypothesis: the positive relationship at large scales is driven by spatial heterogeneity in species composition, which is driven by spatial heterogeneity in the environment. Landscapes with more spatial heterogeneity in the environment can sustain more native and more exotic species, leading to a positive correlation of native and exotic diversity at large scales. In a nested data set for grassland plants, we detected negative relationships between native and exotic diversity at small spatial scales and positive relationships at large spatial scales. Supporting our hypothesis, the positive relationships between native and exotic diversity at large scales were driven by positive relationships between native and exotic beta diversity. Further, both native and exotic diversity were positively correlated with spatial heterogeneity in abiotic conditions (variance of soil depth, soil nitrogen, and aspect) but were uncorrelated with average abiotic conditions, supporting the spatial-heterogeneity hypothesis but not the favorable-conditions hypothesis.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57185,Darwin's naturalization conundrum: dissecting taxonomic patterns of species invasions,S194746,R57186,Investigated species,L122053,Plants,"Darwin acknowledged contrasting, plausible arguments for how species invasions are influenced by phylogenetic relatedness to the native community. These contrasting arguments persist today without clear resolution. Using data on the naturalization and abundance of exotic plants in the Auckland region, we show how different expectations can be accommodated through attention to scale, assumptions about niche overlap, and stage of invasion. Probability of naturalization was positively related to the number of native species in a genus but negatively related to native congener abundance, suggesting the importance of both niche availability and biotic resistance. Once naturalized, however, exotic abundance was not related to the number of native congeners, but positively related to native congener abundance. Changing the scale of analysis altered this outcome: within habitats exotic abundance was negatively related to native congener abundance, implying that native and exotic species respond similarly to broad scale environmental variation across habitats, with biotic resistance occurring within habitats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57197,"Invasibility of experimental grassland communities: the role of earthworms, plant functional group identity and seed size",S194877,R57198,Investigated species,L122160,Plants,"Invasions of natural communities by non-indigenous species threaten native biodiversity and are currently rated as one of the most important global-scale environmental problems. The mechanisms that make communities resistant to invasions and drive the establishment success of seedlings are essential both for management and for understanding community assembly and structure. Especially in grasslands, anecic earthworms are known to function as ecosystem engineers, however, their direct effects on plant community composition and on the invasibility of plant communities via plant seed burial, ingestion and digestion are poorly understood. In a greenhouse experiment we investigated the impact of Lumbricus terrestris, plant functional group identity and seed size of plant invader species and plant functional group of the established plant community on the number and biomass of plant invaders. We set up 120 microcosms comprising four plant community treatments, two earthworm treatments and three plant invader treatments containing three seed size classes. Earthworm performance was influenced by an interaction between plant functional group identity of the established plant community and that of invader species. The established plant community and invader seed size affected the number of invader plants significantly, while invader biomass was only affected by the established community. Since earthworm effects on the number and biomass of invader plants varied with seed size and plant functional group identity they probably play a key role in seedling establishment and plant community composition. Seeds and germinating seedlings in earthworm burrows may significantly contribute to earthworm nutrition, but this deserves further attention. 
Lumbricus terrestris likely behaves like a ‘farmer’ by collecting plant seeds which cannot directly be swallowed or digested. Presumably, these seeds are left in middens and become eatable after partial microbial decay. Increased earthworm numbers in more diverse plant communities likely contribute to the positive relationship between plant species diversity and resistance against invaders.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57216,Abiotic constraints eclipse biotic resistance in determining invasibility along experimental vernal pool gradients,S195093,R57217,Investigated species,L122338,Plants,"Effective management of invasive species requires that we understand the mechanisms determining community invasibility. Successful invaders must tolerate abiotic conditions and overcome resistance from native species in invaded habitats. Biotic resistance to invasions may reflect the diversity, abundance, or identity of species in a community. Few studies, however, have examined the relative importance of abiotic and biotic factors determining community invasibility. In a greenhouse experiment, we simulated the abiotic and biotic gradients typically found in vernal pools to better understand their impacts on invasibility. Specifically, we invaded plant communities differing in richness, identity, and abundance of native plants (the ""plant neighborhood"") and depth of inundation to measure their effects on growth, reproduction, and survival of five exotic plant species. Inundation reduced growth, reproduction, and survival of the five exotic species more than did plant neighborhood. Inundation reduced survival of three species and growth and reproduction of all five species. Neighboring plants reduced growth and reproduction of three species but generally did not affect survival. Brassica rapa, Centaurea solstitialis, and Vicia villosa all suffered high mortality due to inundation but were generally unaffected by neighboring plants. In contrast, Hordeum marinum and Lolium multiflorum, whose survival was unaffected by inundation, were more impacted by neighboring plants. However, the four measures describing plant neighborhood differed in their effects. Neighbor abundance impacted growth and reproduction more than did neighbor richness or identity, with growth and reproduction generally decreasing with increasing density and mass of neighbors. 
Collectively, these results suggest that abiotic constraints play the dominant role in determining invasibility along vernal pool and similar gradients. By reducing survival, abiotic constraints allow only species with the appropriate morphological and physiological traits to invade. In contrast, biotic resistance reduces invasibility only in more benign environments and is best predicted by the abundance, rather than diversity, of neighbors. These results suggest that stressful environments are not likely to be invaded by most exotic species. However, species, such as H. marinum, that are able to invade these habitats require careful management, especially since these environments often harbor rare species and communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57231,Predicting the landscape-scale distribution of alien plants and their threat to plant diversity,S195279,R57232,Investigated species,L122494,Plants,"Abstract: Invasive alien organisms pose a major threat to global biodiversity. The Cape Peninsula, South Africa, provides a case study of the threat of alien plants to native plant diversity. We sought to identify where alien plants would invade the landscape and what their threat to plant diversity could be. This information is needed to develop a strategy for managing these invasions at the landscape scale. We used logistic regression models to predict the potential distribution of six important invasive alien plants in relation to several environmental variables. The logistic regression models showed that alien plants could cover over 89% of the Cape Peninsula. Acacia cyclops and Pinus pinaster were predicted to cover the greatest area. These predictions were overlaid on the current distribution of native plant diversity for the Cape Peninsula in order to quantify the threat of alien plants to native plant diversity. We defined the threat to native plant diversity as the number of native plant species (divided into all species, rare and threatened species, and endemic species) whose entire range is covered by the predicted distribution of alien plant species. We used a null model, which assumed a random distribution of invaded sites, to assess whether area invaded is confounded with threat to native plant diversity. The null model showed that most alien species threaten more plant species than might be suggested by the area they are predicted to invade. For instance, the logistic regression model predicted that P. pinaster threatens 350 more native species, 29 more rare and threatened species, and 21 more endemic species than the null model would predict. 
Comparisons between the null and logistic regression models suggest that species richness and invasibility are positively correlated and that species richness is a poor indicator of invasive resistance in the study site. Our results emphasize the importance of adopting a spatially explicit approach to quantifying threats to biodiversity, and they provide the information needed to prioritize threats from alien species and the sites that need urgent management intervention.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57245,Filling in the gaps: modelling native species richness and invasions using spatially incomplete data,S195441,R57246,Investigated species,L122628,Plants,"Detailed knowledge of patterns of native species richness, an important component of biodiversity, and non‐native species invasions is often lacking even though this knowledge is essential to conservation efforts. However, we cannot afford to wait for complete information on the distribution and abundance of native and harmful invasive species. Using information from counties well surveyed for plants across the USA, we developed models to fill data gaps in poorly surveyed areas by estimating the density (number of species km−2) of native and non‐native plant species. Here, we show that native plant species density is non‐random, predictable, and is the best predictor of non‐native plant species density. We found that eastern agricultural sites and coastal areas are among the most invaded in terms of non‐native plant species densities, and that the central USA appears to have the greatest ratio of non‐native to native species. These large‐scale models could also be applied to smaller spatial scales or other taxa to set priorities for conservation and invasion mitigation, prevention, and control efforts.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57265,Species diversity and biological invasions: relating local process to community pattern,S195685,R57267,Investigated species,L122830,Plants,"In a California riparian system, the most diverse natural assemblages are the most invaded by exotic plants. A direct in situ manipulation of local diversity and a seed addition experiment showed that these patterns emerge despite the intrinsic negative effects of diversity on invasions. The results suggest that species loss at small scales may reduce invasion resistance. At community-wide scales, the overwhelming effects of ecological factors spatially covarying with diversity, such as propagule supply, make the most diverse communities most likely to be invaded.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57268,"Local interactions, dispersal, and native and exotic plant diversity along a California stream",S195704,R57269,Investigated species,L122845,Plants,"Although the species pool, dispersal, and local interactions all influence species diversity, their relative importance is debated. I examined their importance in controlling the number of native and exotic plant species occupying tussocks formed by the sedge Carex nudata along a California stream. Of particular interest were the factors underlying a downstream increase in plant diversity and biological invasions. I conducted seed addition experiments and manipulated local diversity and cover to evaluate the degree to which tussocks saturate with species, and to examine the roles of local competitive processes, abiotic factors, and seed supply in controlling the system-wide patterns. Seeds of three native and three exotic plants sown onto experimentally assembled tussock communities less successfully established on tussocks with a greater richness of resident plants. Nonetheless, even the most diverse tussocks were somewhat colonized, suggesting that tussocks are not completely saturated with species. Similarly, in an experiment where I sowed seeds onto natural tussocks along the river, colonization increased two- to three-fold when I removed the resident species. Even on intact tussocks, however, seed addition increased diversity, indicating that the tussock assemblages are seed limited. Colonization success on cleared and uncleared tussocks increased downstream from km 0 to km 3 of the study site, but showed no trends from km 3 to km 8. This suggests that while abiotic and biotic features of the tussocks may control the increase in diversity and invasions from km 0 to km 3, similar increases from km 3 to km 8 are more likely explained by potential downstream increases in seed supply. 
The effective water dispersal of seed mimics and prevailingly downstream winds indicated that dispersal most likely occurs in a downstream direction. These results suggest that resident species diversity, competitive interactions, and seed supply similarly influence the colonization of native and exotic species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57273,"Phalaris arundinacea seedling establishment: effects of canopy complexity in fen, mesocosm, and restoration experiments",S195759,R57274,Investigated species,L122890,Plants,"Phalaris arundinacea L. (reed canary grass) is a major invader of wetlands in temperate North America; it creates monotypic stands and displaces native vegetation. In this study, the effect of plant canopies on the establishment of P. arundinacea from seed in a fen, fen-like mesocosms, and a fen restoration site was assessed. In Wingra Fen, canopies that were more resistant to P. arundinacea establishment had more species (eight or nine versus four to six species) and higher cover of Aster firmus. In mesocosms planted with Glyceria striata plus 1, 6, or 15 native species, all canopies closed rapidly and prevented P. arundinacea establishment from seed, regardless of the density of the matrix species or the number of added species. Only after gaps were created in the canopy was P. arundinacea able to establish seedlings; then, the 15-species treatment reduced establishment to 48% of that for single-species canopies. A similar experiment in the restoration site produced less cover of native plants, and P. a...",TRUE,noun
R24,Ecology and Evolutionary Biology,R57281,Evenness-invasibility relationships differ between two extinction scenarios in tallgrass prairie,S195861,R57283,Investigated species,L122974,Plants,"Experiments that have manipulated species richness with random draws of species from a larger species pool have usually found that invasibility declines as richness increases. These results have usually been attributed to niche complementarity, and interpreted to mean that communities will become less resistant to invaders as species go locally extinct. However, it is not clear how relevant these studies are to real-world situations where species extinctions are non-random, and where species diversity declines due to increased rarity (i.e. reduced evenness) without having local extinctions. We experimentally varied species richness from 1 to 4, and evenness from 0.44 to 0.97 with two different extinction scenarios in two-year old plantings using seedling transplants in western Iowa. In both scenarios, evenness was varied by changing the level of dominance of the tall grass Andropogon gerardii. In one scenario, which simulated a loss of short species from Andropogon communities, we directly tested for complementarity in light capture due to having species in mixtures with dissimilar heights. We contrasted this scenario with a second set of mixtures that contained all tall species. In both cases, we controlled for factors such as rooting depth and planting density. Mean invader biomass was higher in monocultures (5.4 g m 2 week 1 ) than in 4-species mixtures (3.2 g m 2 week 1 ). Reduced evenness did not affect invader biomass in mixtures with dissimilar heights. However, the amount of invader biomass decreased by 60% as evenness increased across mixtures with all tall species. This difference was most pronounced early in the growing season when high evenness plots had greater light capture than low evenness plots. 
These results suggest that the effect of reduced species diversity on invasibility are 1) not related to complementarity through height dissimilarity, and 2) variable depending on the phenological traits of the species that are becoming rare or going locally extinct.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57292,The distribution and habitat associations of non-native plant species in urban riparian habitats,S195968,R57293,Investigated species,L123061,Plants,"Questions: 1. What are the distribution and habitat associations of non-native (neophyte) species in riparian zones? 2. Are there significant differences, in terms of plant species diversity, composition, habitat condition and species attributes, between plant communities where non-natives are present or abundant and those where non-natives are absent or infrequent? 3. Are the observed differences generic to non-natives or do individual non-native species differ in their vegetation associations? Location: West Midlands Conurbation (WMC), UK. Methods: 56 sites were located randomly on four rivers across the WMC. Ten 2 m × 2 m quadrats were placed within 15 m of the river to sample vegetation within the floodplain at each site. All vascular plants were recorded along with site information such as surrounding land use and habitat types. Results: Non-native species were found in many vegetation types and on all rivers in the WMC. There were higher numbers of non-natives on more degraded, human-modified rivers. More non-native species were found in woodland, scrub and tall herb habitats than in grasslands. We distinguish two types of communities with non-natives. In communities colonized following disturbance, in comparison to quadrats containing no non-native species, those with non-natives had higher species diversity and more forbs, annuals and shortlived monocarpic perennials. Native species in quadrats containing non-natives were characteristic of conditions of higher fertility and pH, had a larger specific leaf area and were less stress tolerant or competitive. In later successional communities dominated by particular non-natives, native diversity declined with increasing cover of non-natives. Associated native species were characteristic of low light conditions. 
Conclusions: Communities containing non-natives can be associated with particular types of native species. Extrinsic factors (disturbance, eutrophication) affected both native and non-native species. In disturbed riparian habitats the key determinant of diversity is dominance by competitive invasive species regardless of their native or non-native origin.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57305,Patterns of invasion of an urban remnant of a species-rich grassland in southeastern Australia by non-native plant species,S196107,R57306,Investigated species,L123174,Plants,". The invasion by non-native plant species of an urban remnant of a species-rich Themeda triandra grassland in southeastern Australia was quantified and related to abiotic influences. Richness and cover of non-native species were highest at the edges of the remnant and declined to relatively uniform levels within the remnant. Native species richness and cover were lowest at the edge adjoining a roadside but then showed little relation to distance from edge. Roadside edge quadrats were floristically distinct from most other quadrats when ordinated by Detrended Correspondence Analysis. Soil phosphorus was significantly higher at the roadside edge but did not vary within the remnant itself. All other abiotic factors measured (NH4, NO3, S, pH and % organic carbon) showed little variation across the remnant. Non-native species richness and cover were strongly correlated with soil phosphorus levels. Native species were negatively correlated with soil phosphorus levels. Canonical Correspondence Analysis identified the perennial non-native grasses of high biomass as species most dependent on high soil nutrient levels. Such species may be resource-limited in undisturbed soils. 
Three classes of non-native plants have invaded this species-rich grassland: (1) generalist species (> 50 % frequency), mostly therophytes with non-specialized habitat or germination requirements; (2) resource-limited species comprising perennial species of high biomass that are dependent on nutrient increases and/or soil disturbances before they can invade the community and; (3) species of intermediate frequency (1–30 %), of low to high biomass potential, that appear to have non-specialized habitat requirements but are currently limited by seed dispersal, seedling establishment or the current site management. Native species richness and cover are most negatively affected by increases in non-native cover. Declines are largely evident once the non-native cover exceeds 40 %. Widespread, generalist non-native species are numerous in intact sites and will have to be considered a permanent part of the flora of remnant grasslands. Management must aim to minimize increases in cover of any non-native species or the disturbances that favour the establishment of competitive non-native grasses if the native grassland flora is to be conserved in small, fragmented remnants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57311,Native plant diversity increases herbivory to non-natives,S196174,R57312,Investigated species,L123229,Plants,"There is often an inverse relationship between the diversity of a plant community and the invasibility of that community by non-native plants. Native herbivores that colonize novel plants may contribute to diversity–invasibility relationships by limiting the relative success of non-native plants. Here, we show that, in large collections of non-native oak trees at sites across the USA, non-native oaks introduced to regions with greater oak species richness accumulated greater leaf damage than in regions with low oak richness. Underlying this trend was the ability of herbivores to exploit non-native plants that were close relatives to their native host. In diverse oak communities, non-native trees were on average more closely related to native trees and received greater leaf damage than those in depauperate oak communities. Because insect herbivores colonize non-native plants that are similar to their native hosts, in communities with greater native plant diversity, non-natives experience greater herbivory.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57313,"Habitat stress, species pool size and biotic resistance influence exotic plant richness in the Flooding Pampa grasslands",S196198,R57314,Investigated species,L123249,Plants,"1 Theory and empirical evidence suggest that community invasibility is influenced by propagule pressure, physical stress and biotic resistance from resident species. We studied patterns of exotic and native species richness across the Flooding Pampas of Argentina, and tested for exotic richness correlates with major environmental gradients, species pool size, and native richness, among and within different grassland habitat types. 2 Native and exotic richness were positively correlated across grassland types, increasing from lowland meadows and halophyte steppes, through humid to mesophyte prairies in more elevated topographic positions. Species pool size was positively correlated with local richness of native and exotic plants, being larger for mesophyte and humid prairies. Localities in the more stressful meadow and halophyte steppe habitats contained smaller fractions of their landscape species pools. 3 Native and exotic species numbers decreased along a gradient of increasing soil salinity and decreasing soil depth, and displayed a unimodal relationship with soil organic carbon. When covarying habitat factors were held constant, exotic and native richness residuals were still positively correlated across sites. Within grassland habitat types, exotic and native species richness were positively associated in meadows and halophyte steppes but showed no consistent relationship in the least stressful, prairie habitat types. 4 Functional group composition differed widely between native and exotic species pools. Patterns suggesting biotic resistance to invasion emerged only within humid prairies, where exotic richness decreased with increasing richness of native warm‐season grasses. 
This negative relationship was observed for other descriptors of invasion such as richness and cover of annual cool‐season forbs, the commonest group of exotics. 5 Our results support the view that ecological factors correlated with differences in invasion success change with the range of environmental heterogeneity encompassed by the analysis. Within narrow habitat ranges, invasion resistance may be associated with either physical stress or resident native diversity. Biotic resistance through native richness, however, appeared to be effective only at intermediate locations along a stress/fertility gradient. 6 We show that certain functional groups, not just total native richness, may be critical to community resistance to invasion. Identifying such native species groups is important for directing management and conservation efforts.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57317,"Exotic plants on Lord Howe Island: distribution in space and time, 1853-1981",S196240,R57318,Investigated species,L123283,Plants,"One hundred and seventy-three exotic angiosperms form 48.2% of the angiosperm flora of Lord Howe Island (31°35'S, 159°05'E) in the south Pacific Ocean. The families Poaceae (23%) and Asteraceae (13%) dominate the exotic flora. Some 30% are native to the Old World, 26% from the New World and 14% from Eurasia. Exotics primarily occur on heavily disturbed areas but c. 10% are widely distributed in undisturbed vegetation. Analysis of historical records, eleven species lists over the 128 years 1853-1981, shows that invasion has been a continuous process at an exponential rate. Exotics have been naturalized at the overall rate of 1.3 species y-1. Most exotics were deliberately introduced as pasture species or accidentally as contaminants although ornamental plants are increasing. Exotics show some evidence of invading progressively less disturbed habitats but the response of each species is individualistic. As introduction of exotics is a social rather than an ecological problem, the present pattern will continue.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57343,Native and naturalized plant diversity are positively correlated in scrub communities of California and Chile,S196549,R57345,Investigated species,L123538,Plants,"Abstract. An emerging body of literature suggests that the richness of native and naturalized plant species are often positively correlated. It is unclear, however, whether this relationship is robust across spatial scales, and how a disturbance regime may affect it. Here, I examine the relationships of both richness and abundance between native and naturalized species of plants in two mediterranean scrub communities: coastal sage scrub (CSS) in California and xeric‐sloped matorral (XSM) in Chile. In each vegetation type I surveyed multiple sites, where I identified vascular plant species and estimated their relative cover. Herbaceous species richness was higher in XSM, while cover of woody species was higher in CSS, where woody species have a strong impact upon herbaceous species. As there were few naturalized species with a woody growth form, the analyses performed here relate primarily to herbaceous species. Relationships between the herbaceous cover of native and naturalized species were not significant in CSS, but were nearly significant in XSM. The herbaceous species richness of native and naturalized plants were not significantly correlated on sites that had burned less than one year prior to sampling in CSS, and too few sites were available to examine this relationship in XSM. In post 1‐year burn sites, however, herbaceous richness of native and naturalized species were positively correlated in both CSS and XSM. This relationship occurred at all spatial scales, from 400 m² to 1 m² plots. The consistency of this relationship in this study, together with its reported occurrence in the literature, suggests that this relationship may be general. 
Finally, the residuals from the correlations between native and naturalized species richness and cover, when plotted against site age (i.e. time since the last fire), show that richness and cover of naturalized species are strongly favoured on recently burned sites in XSM; this suggests that herbaceous species native to Chile are relatively poorly adapted to fire.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57359,A null model of exotic plant diversity tested with exotic and native species-area relationships,S196718,R57360,Investigated species,L123677,Plants,"At large spatial scales, exotic and native plant diversity exhibit a strong positive relationship. This may occur because exotic and native species respond similarly to processes that influence diversity over large geographical areas. To test this hypothesis, we compared exotic and native species-area relationships within six North American ecoregions. We predicted and found that within ecoregions the ratio of exotic to native species richness remains constant with increasing area. Furthermore, we predicted that areas with more native species than predicted by the species-area relationship would have proportionally more exotics as well. We did find that these exotic and native deviations were highly correlated, but areas that were good (or bad) for native plants were even better (or worse) for exotics. Similar processes appear to influence exotic and native plant diversity but the degree of this influence may differ with site quality.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57365,Plant species invasions along the latitudinal gradient in the United States,S196793,R57366,Investigated species,L123740,Plants,"It has been long established that the richness of vascular plant species and many animal taxa decreases with increasing latitude, a pattern that very generally follows declines in actual and potential evapotranspiration, solar radiation, temperature, and thus, total productivity. Using county-level data on vascular plants from the United States (3000 counties in the conterminous 48 states), we used the Akaike Information Criterion (AIC) to evaluate competing models predicting native and nonnative plant species density (number of species per square kilometer in a county) from various combinations of biotic variables (e.g., native bird species density, vegetation carbon, normalized difference vegetation index), environmental/topographic variables (elevation, variation in elevation, the number of land cover classes in the county, radiation, mean precipitation, actual evapotranspiration, and potential evapotranspiration), and human variables (human population density, cropland, and percentage of disturbed lands in a county). We found no evidence of a latitudinal gradient for the density of native plant species and a significant, slightly positive latitudinal gradient for the density of nonnative plant species. We found stronger evidence of a significant, positive productivity gradient (vegetation carbon) for the density of native plant species and nonnative plant species. We found much stronger significant relationships when biotic, environmental/topographic, and human variables were used to predict native plant species density and nonnative plant species density. Biotic variables generally had far greater influence in multivariate models than human or environmental/topographic variables. 
Later, we found that the best, single, positive predictor of the density of nonnative plant species in a county was the density of native plant species in a county. While further study is needed, it may be that, while humans facilitate the initial establishment invasions of nonnative plant species, the spread and subsequent distributions of nonnative species are controlled largely by biotic and environmental factors.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57384,Biotic resistance to invader establishment of a southern Appalachian plant community is determined by environmental conditions,S197012,R57385,Investigated species,L123921,Plants,"Summary 1 Tests of the relationship between resident plant species richness and habitat invasibility have yielded variable results. I investigated the roles of experimental manipulation of understorey species richness and overstorey characteristics in resistance to invader establishment in a floodplain forest in south-western Virginia, USA. 2 I manipulated resident species richness in experimental plots along a flooding gradient, keeping plot densities at their original levels, and quantified the overstorey characteristics of each plot. 3 After manipulating the communities, I transplanted 10 randomly chosen invaders from widespread native and non-native forest species into the experimental plots. Success of an invasion was measured by survival and growth of the invader. 4 Native and non-native invader establishment trends were influenced by different aspects of the biotic community and these relationships depended on the site of invasion. The most significant influence on non-native invader survival in this system of streamside and upper terrace plots was the overstorey composition. Non-native species survival in the flooded plots after 2 years was significantly positively related to proximity to larger trees. However, light levels did not fully explain the overstorey effect and were unrelated to native survivorship. The effects of understorey richness on survivorship depended on the origin of the invaders and the sites they were transplanted into. Additionally, native species growth was significantly affected by understorey plot richness. 
5 The direction and strength of interactions with both the overstorey (for non-native invaders) and understorey richness (for natives and non-natives) changed with the site of invasion and associated environmental conditions. Rather than supporting the hypothesis of biotic resistance to non-native invasion, my results suggest that native invaders experienced increased competition with the native understorey plants in the more benign upland habitat and facilitation in the stressful riparian zone.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57594,Effects of fungal pathogens on seeds of native and exotic plants: a test using congeneric pairs ,S198172,R57595,Investigated species,L124350,Plants,"Summary 1 It has previously been hypothesized that low rates of attack by natural enemies may contribute to the invasiveness of exotic plants. 2 We tested this hypothesis by investigating the influence of pathogens on survival during a critical life-history stage: the seed bank. We used fungicide treatments to estimate the impacts of soil fungi on buried seeds of a taxonomically broad suite of congeneric natives and exotics, in both upland and wetland meadows. 3 Seeds of both natives and exotics were recovered at lower rates in wetlands than in uplands. Fungicide addition reduced this difference by improving recovery in wetlands, indicating that the lower recovery was largely attributable to a higher level of fungal mortality. This suggests that fungal pathogens may contribute to the exclusion of upland species from wetlands. 4 The effects of fungicide on the recovery of buried seeds did not differ between natives and exotics. Seeds of exotics were recovered at a higher rate than seeds of natives in uplands, but this effect was not attributable to fungal pathogens. 5 Fungal seed pathogens may offer poor prospects for the management of most exotic species. The lack of consistent differences in the responses of natives vs. exotics to fungicide suggests few aliens owe their success to low seed pathogen loads, while impacts of seed-pathogenic biocontrol agents on non-target species would be frequent.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57596,Post-dispersal losses to seed predators: an experimental comparison of native and exotic old field plants,S198195,R57597,Investigated species,L124369,Plants,"Invasions by exotic plants may be more likely if exotics have low rates of attack by natural enemies, including post-dispersal seed predators (granivores). We investigated this idea with a field experiment conducted near Newmarket, Ontario, in which we experimentally excluded vertebrate and terrestrial insect seed predators from seeds of 43 native and exotic old-field plants. Protection from vertebrates significantly increased recovery of seeds; vertebrate exclusion produced higher recovery than controls for 30 of the experimental species, increasing overall seed recovery from 38.2 to 45.6%. Losses to vertebrates varied among species, significantly increasing with seed mass. In contrast, insect exclusion did not significantly improve seed recovery. There was no evidence that aliens benefitted from a reduced rate of post-dispersal seed predation. The impacts of seed predators did not differ significantly between natives and exotics, which instead showed very similar responses to predator exclusion treatments. These results indicate that while vertebrate granivores had important impacts, especially on large-seeded species, exotics did not generally benefit from reduced rates of seed predation. Instead, differences between natives and exotics were small compared with interspecific variation within these groups. Resume : L'invasion par les plantes adventices est plus plausible si ces plantes ont peu d'ennemis naturels, incluant les predateurs post-dispersion des graines (granivores). 
Les auteurs ont examine cette idee lors d'une experience sur le terrain, conduite pres de Newmarket en Ontario, dans laquelle ils ont experimentalement empeche les predateurs de graines, vertebres et insectes terrestres, d'avoir acces aux graines de 43 especes de plantes indigenes ou exotiques, de vielles prairies. La protection contre les vertebres augmente significativement la survie des graines; l'exclusion permet de recuperer plus de graines comparativement aux temoins chez 30 especes de plantes experimentales, avec une augmentation generale de recuperation allant de 38.2 a 45.6%. Les pertes occasionnees par les vertebres varient selon les especes, augmentant significativement avec la grosseur des graines. Au contraire, l'exclusion des insectes n'augmente pas significativement les nombres de graines recuperees. Il n'y a pas de preuve que les adventices auraient beneficie d'une reduction du taux de predation post-dispersion des graines. Les impacts des predateurs de graines ne different pas significativement entre les especes indigenes et introduites, qui montrent au contraire des reactions tres similaires aux traitements d'exclusion des predateurs. Ces resultats indiquent que bien que les granivores vertebres aient des impacts importants, surtout sur les especes a grosses graines, les plantes introduites ne beneficient generalement pas de taux reduits de predation des graines. Au contraire, les differences entre les plantes indigenes et les plantes introduites sont petites comparativement a la variation interspecifique a l'interieur de chacun de ces groupes. Mots cles : adventices, exotiques, granivores, envahisseurs, vieilles prairies, predateurs de graines. (Traduit par la Redaction)",TRUE,noun
R24,Ecology and Evolutionary Biology,R57598,Lack of pre-dispersal seed predators in introduced Asteraceae in New Zealand,S198218,R57599,Investigated species,L124388,Plants,"The idea that naturalised invading plants have fewer phytophagous insects associated with them in their new environment relative to their native range is often assumed, but quantitative data are few and mostly refer to pests on crop species. In this study, the incidence of seed-eating insect larvae in flowerheads of naturalised Asteraceae in New Zealand is compared with that in Britain where the species are native. Similar surveys were carried out in both countries by sampling 200 flowerheads of three populations of the same thirteen species. In the New Zealand populations only one seed-eating insect larva was found in 7800 flowerheads (0.013% infected flowerheads, all species combined) in contrast with the British populations which had 487 (6.24%) flowerheads infested. Possible reasons for the low colonization level of the introduced Asteraceae by native insects in New Zealand are 1) the relatively recent introduction of the plants (100-200 years), 2) their phylogenetic distance from the native flora, and 3) the specialised nature of the bud-infesting habit of the insects.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57600,"Impact of fire on leaf nutrients, arthropod fauna and herbivory of native and exotic eucalypts in Kings Park, Perth, Western Australia",S198242,R57601,Investigated species,L124408,Plants,"The vegetation of Kings Park, near the centre of Perth, Western Australia, once had an overstorey of Eucalyptus marginata (jarrah) or Eucalyptus gomphocephala (tuart), and many trees still remain in the bushland parts of the Park. Avenues and roadsides have been planted with eastern Australian species, including Eucalyptus cladocalyx (sugar gum) and Eucalyptus botryoides (southern mahogany), both of which have become invasive. The present study examined the effect of a recent burn on the level of herbivory on these native and exotic eucalypts. Leaf damage, shoot extension and number of new leaves were measured on tagged shoots of saplings of each tree species in unburnt and burnt areas over an 8-month period. Leaf macronutrient levels were quantified and the number of arthropods on saplings was measured at the end of the recording period by chemical knockdown. Leaf macronutrients were mostly higher in all four species in the burnt area, and this was associated with generally higher numbers of canopy arthropods and greater levels of leaf damage. It is suggested that the pulse of soil nutrients after the fire resulted in more nutrient-rich foliage, which in turn was more palatable to arthropods. The resulting high levels of herbivory possibly led to reduced shoot extension of E. gomphocephala, E. botryoides and, to a lesser extent, E. cladocalyx. This acts as a negative feedback mechanism that lessens the tendency for lush, post-fire regrowth to outcompete other species of plants. There was no consistent difference in the levels of the various types of leaf damage or of arthropods on the native and the exotic eucalypts, suggesting that freedom from herbivory is not contributing to the invasiveness of the two exotic species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57607,Herbivores and the success of exotic plants: a phylogenetically controlled experiment,S198326,R57608,Investigated species,L124478,Plants,"In a field experiment with 30 locally occurring old-field plant species grown in a common garden, we found that non-native plants suffer levels of attack (leaf herbivory) equal to or greater than levels suffered by congeneric native plants. This phylogenetically controlled analysis is in striking contrast to the recent findings from surveys of exotic organisms, and suggests that even if enemy release does accompany the invasion process, this may not be an important mechanism of invasion, particularly for plants with close relatives in the recipient flora.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57612,Diversity and abundance patterns of phytophagous insect communities on alien and native host plants in the Brassicaceae,S198387,R57613,Investigated species,L124529,Plants,"The herbivore load (abundance and species richness of herbivores) on alien plants is supposed to be one of the keys to understand the invasiveness of species. We investigate the phytophagous insect communities on cabbage plants (Brassicaceae) in Europe. We compare the communities of endophagous and ectophagous insects as well as of Coleoptera and Lepidoptera on native and alien cabbage plant species. Contrary to many other reports, we found no differences in the herbivore load between native and alien hosts. The majority of insect species attacked alien as well as native hosts. Across insect species, there was no difference in the patterns of host range on native and on alien hosts. Likewise the similarity of insect communities across pairs of host species was not different between natives and aliens. We conclude that the general similarity in the community patterns between native and alien cabbage plant species are due to the chemical characteristics of this plant family. All cabbage plants share glucosinolates. This may facilitate host switches from natives to aliens. Hence the presence of native congeners may influence invasiveness of alien plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57620,"Herbivory, disease, recruitment limitation, and success of alien and native tree species",S198499,R57622,Investigated species,L124623,Plants,"The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57623,Natural-enemy release facilitates habitat expansion of the invasive tropical shrub Clidemia hirta,S198538,R57625,Investigated species,L124656,Plants,"Nonnative, invasive plant species often increase in growth, abundance, or habitat distribution in their introduced ranges. The enemy-release hypothesis, proposed to account for these changes, posits that herbivores and pathogens (natural enemies) limit growth or survival of plants in native areas, that natural enemies have less impact in the introduced than in the native range, and that the release from natural-enemy regulation in areas of introduction accounts in part for observed changes in plant abundance. We tested experimentally the enemy-release hypothesis with the invasive neotropical shrub Clidemia hirta (L.) D. Don (Melastomataceae). Clidemia hirta does not occur in forest in its native range but is a vigorous invader of tropical forest in its introduced range. Therefore, we tested the specific prediction that release from natural enemies has contributed to its expanded habitat distribution. We planted C. hirta into understory and open habitats where it is native (Costa Rica) and where it has been introduced (Hawaii) and applied pesticides to examine the effects of fungal pathogen and insect herbivore exclusion. In understory sites in Costa Rica, C. hirta survival increased by 12% if sprayed with insecticide, 19% with fungicide, and 41% with both insecticide and fungicide compared to control plants sprayed only with water. Exclusion of natural enemies had no effect on survival in open sites in Costa Rica or in either habitat in Hawaii. Fungicide application promoted relative growth rates of plants that survived to the end of the experiment in both habitats of Costa Rica but not in Hawaii, suggesting that fungal pathogens only limit growth of C. hirta where it is native. 
Galls, stem borers, weevils, and leaf rollers were prevalent in Costa Rica but absent in Hawaii. In addition, the standing percentage of leaf area missing on plants in the control (water only) treatment was five times greater on plants in Costa Rica than in Hawaii and did not differ between habitats. The results from this study suggest that significant effects of herbivores and fungal pathogens may be limited to particular habitats. For Clidemia hirta, its absence from forest understory in its native range likely results in part from the strong pressures of natural enemies. Its invasion into Hawaiian forests is apparently aided by a release from these herbivores and pathogens.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57632,Enemy release? An experiment with congeneric plant pairs and diverse above- and belowground enemies,S198653,R57634,Investigated species,L124753,Plants,"Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phylogenetically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked experiments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. 
Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable. The net effect of these interactions may create ""invasion opportunity windows"": times when introduced species make advances in native communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57635,Invasive exotic plants suffer less herbivory than non-invasive exotic plants,S198672,R57636,Investigated species,L124768,Plants,"We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57637,"Herbivory, time since introduction and the invasiveness of exotic plants",S198695,R57638,Investigated species,L124787,Plants,"1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57654,Phytophagous insects of giant hogweed Heracleum mantegazzianum (Apiaceae) in invaded areas of Europe and in its native area of the Caucasus,S198917,R57655,Investigated species,L124975,Plants,"Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, literature records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host-specific herbivores on H. mantegazzianum. When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the ""enemy release hypothesis"" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous generalists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57662,"Insect herbivore faunal diversity among invasive, non-invasive and native Eugenia species: Implications for the enemy release hypothesis",S199067,R57666,Investigated species,L125103,Plants,"Abstract The enemy release hypothesis (ERH) frequently has been invoked to explain the naturalization and spread of introduced species. One ramification of the ERH is that invasive plants sustain less herbivore pressure than do native species. Empirical studies testing the ERH have mostly involved two-way comparisons between invasive introduced plants and their native counterparts in the invaded region. Testing the ERH would be more meaningful if such studies also included introduced non-invasive species because introduced plants, regardless of their abundance or impact, may support a reduced insect herbivore fauna and experience less damage. In this study, we employed a three-way comparison, in which we compared herbivore faunas among native, introduced invasive, and introduced non-invasive plants in the genus Eugenia (Myrtaceae) which all co-occur in South Florida. We observed a total of 25 insect species in 12 families and 6 orders feeding on the six species of Eugenia. Of these insect species, the majority were native (72%), polyphagous (64%), and ectophagous (68%). We found that invasive introduced Eugenia has a similar level of herbivore richness as both the native and the non-invasive introduced Eugenia. However, the numbers and percentages of oligophagous insect species were greatest on the native Eugenia, but they were not different between the invasive and non-invasive introduced Eugenia. One oligophagous endophagous insect has likely shifted from the native to the invasive, but none to the non-invasive Eugenia. In summary, the invasive Eugenia encountered equal, if not greater, herbivore pressure than the non-invasive Eugenia, including from oligophagous and endophagous herbivores. 
Our data only provided limited support to the ERH. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57674,The interaction between soil nutrients and leaf loss during early establishment in plant invasion,S199203,R57676,Investigated species,L125219,Plants,"Nitrogen availability affects both plant growth and the preferences of herbivores. We hypothesized that an interaction between these two factors could affect the early establishment of native and exotic species differently, promoting invasion in natural systems. Taxonomically paired native and invasive species (Acer platanoides, Acer rubrum, Lonicera maackii, Diervilla lonicera, Celastrus orbiculata, Celastrus scandens, Elaeagnus umbellata, Ceanothus americanus, Ampelopsis brevipedunculata, and Vitis riparia) were grown in relatively high-resource (hardwood forests) and low-resource (pine barrens) communities on Long Island, New York, for a period of 3 months. Plants were grown in ambient and nitrogen-enhanced conditions in both communities. Nitrogen additions produced an average 12% initial increase in leaf number of all plants. By the end of the experiment, invasive species outperformed native species in nitrogen-enhanced plots in hardwood forests, where all plants experienced increased damage relative to control plots. Native species experienced higher overall amounts of damage in hardwood forests, losing, on average, 45% more leaves than exotic species, and only native species experienced a decline in growth rates (32% compared with controls). In contrast, in pine barrens, there were no differences in damage and no differences in performance between native and invasive plants. Our results suggest that unequal damage by natural enemies may play a role in determining community composition by shifting the competitive advantage to exotic species in nitrogen-enhanced environments. FOR. SCI. 53(6):701-709.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57685,"When there is no escape: The effects of natural enemies on native, invasive, and noninvasive plants",S199393,R57690,Investigated species,L125381,Plants,"An important question in the study of biological invasions is the degree to which successful invasion can be explained by release from control by natural enemies. Natural enemies dominate explanations of two alternate phenomena: that most introduced plants fail to establish viable populations (biotic resistance hypothesis) and that some introduced plants become noxious invaders (natural enemies hypothesis). We used a suite of 18 phylogenetically related native and nonnative clovers (Trifolium and Medicago) and the foliar pathogens and invertebrate herbivores that attack them to answer two questions. Do native species suffer greater attack by natural enemies relative to introduced species at the same site? Are some introduced species excluded from native plant communities because they are susceptible to local natural enemies? We address these questions using three lines of evidence: (1) the frequency of attack and composition of fungal pathogens and herbivores for each clover species in four years of common garden experiments, as well as susceptibility to inoculation with a common pathogen; (2) the degree of leaf damage suffered by each species in common garden experiments; and (3) fitness effects estimated using correlative approaches and pathogen removal experiments. Introduced species showed no evidence of escape from pathogens, being equivalent to native species as a group in terms of infection levels, susceptibility, disease prevalence, disease severity (with more severe damage on introduced species in one year), the influence of disease on mortality, and the effect of fungicide treatment on mortality and biomass. 
In contrast, invertebrate herbivores caused more damage on native species in two years, although the influence of herbivore attack on mortality did not differ between native and introduced species. Within introduced species, the predictions of the biotic resistance hypothesis were not supported: the most invasive species showed greater infection, greater prevalence and severity of disease, greater prevalence of herbivory, and greater effects of fungicide on biomass and were indistinguishable from noninvasive introduced species in all other respects. Therefore, although herbivores preferred native over introduced species, escape from pest pressure cannot be used to explain why some introduced clovers are common invaders in coastal prairie while others are not.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57691,Soil feedback of exotic savanna grass relates to pathogen absence and mycorrhizal selectivity,S199418,R57692,Investigated species,L125402,Plants,"Enemy release of exotic plants from soil pathogens has been tested by examining plant-soil feedback effects in repetitive growth cycles. However, positive soil feedback may also be due to enhanced benefit from the local arbuscular mycorrhizal fungi (AMF). Few studies actually have tested pathogen effects, and none of them did so in arid savannas. In the Kalahari savanna in Botswana, we compared the soil feedback of the exotic grass Cenchrus biflorus with that of two dominant native grasses, Eragrostis lehmanniana and Aristida meridionalis. The exotic grass had neutral to positive soil feedback, whereas both native grasses showed neutral to negative feedback effects. Isolation and testing of root-inhabiting fungi of E. lehmanniana yielded two host-specific pathogens that did not influence the exotic C. biflorus or the other native grass, A. meridionalis. None of the grasses was affected by the fungi that were isolated from the roots of the exotic C. biflorus. We isolated and compared the AMF community of the native and exotic grasses by polymerase chain reaction-denaturing gradient gel elecrophoresis (PCR-DGGE), targeting AMF 18S rRNA. We used roots from monospecific field stands and from plants grown in pots with mixtures of soils from the monospecific field stands. Three-quarters of the root samples of the exotic grass had two nearly identical sequences, showing 99% similarity with Glomus versiforme. The two native grasses were also associated with distinct bands, but each of these bands occurred in only a fraction of the root samples. The native grasses contained a higher diversity of AMF bands than the exotic grass. Canonical correspondence analyses of the AMF band patterns revealed almost as much difference between the native and exotic grasses as between the native grasses. 
In conclusion, our results support the hypothesis that release from soil-borne enemies may facilitate local abundance of exotic plants, and we provide the first evidence that these processes may occur in arid savanna ecosystems. Pathogenicity tests implicated the involvement of soil pathogens in the soil feedback responses, and further studies should reveal the functional consequences of the observed high infection with a low diversity of AMF in the roots of exotic plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57693,"Tolerance to herbivory, and not resistance, may explain differential success of invasive, naturalized, and native North American temperate vines",S199490,R57697,Investigated species,L125464,Plants,"Numerous hypotheses suggest that natural enemies can influence the dynamics of biological invasions. Here, we use a group of 12 related native, invasive, and naturalized vines to test the relative importance of resistance and tolerance to herbivory in promoting biological invasions. In a field experiment in Long Island, New York, we excluded mammal and insect herbivores and examined plant growth and foliar damage over two growing seasons. This novel approach allowed us to compare the relative damage from mammal and insect herbivores and whether damage rates were related to invasion. In a greenhouse experiment, we simulated herbivory through clipping and measured growth response. After two seasons of excluding herbivores, there was no difference in relative growth rates among invasive, naturalized, and native woody vines, and all vines were susceptible to damage from mammal and insect herbivores. Thus, differential attack by herbivores and plant resistance to herbivory did not explain invasion success of these species. In the field, where damage rates were high, none of the vines were able to fully compensate for damage from mammals. However, in the greenhouse, we found that invasive vines were more tolerant of simulated herbivory than native and naturalized relatives. Our results indicate that invasive vines are not escaping herbivory in the novel range, rather they are persisting despite high rates of herbivore damage in the field. While most studies of invasive plants and natural enemies have focused on resistance, this work suggests that tolerance may also play a large role in facilitating invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57700,The invasive shrub Buddleja davidii performs better in its introduced range,S199533,R57701,Investigated species,L125499,Plants,"It is commonly assumed that invasive plants grow more vigorously in their introduced than in their native range, which is then attributed to release from natural enemies or to microevolutionary changes, or both. However, few studies have tested this assumption by comparing the performance of invasive species in their native vs. introduced ranges. Here, we studied abundance, growth, reproduction, and herbivory in 10 native Chinese and 10 invasive German populations of the invasive shrub Buddleja davidii (Scrophulariaceae; butterfly bush). We found strong evidence for increased plant vigour in the introduced range: plants in invasive populations were significantly taller and had thicker stems, larger inflorescences, and heavier seeds than plants in native populations. These differences in plant performance could not be explained by a more benign climate in the introduced range. Since leaf herbivory was substantially reduced in invasive populations, our data rather suggest that escape from natural enemies, associated with increased plant growth and reproduction, contributes to the invasion success of B. davidii in Central Europe.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57720,"Herbivores, but not other insects, are scarce on alien plants",S199781,R57721,Investigated species,L125707,Plants,"Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants – in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57727,Diversity and abundance of arthropod floral visitor and herbivore assemblages on exotic and native Senecio species,S199868,R57728,Investigated species,L125780,Plants,"The enemy release hypothesis predicts that native herbivores prefer native, rather than exotic plants, giving invaders a competitive advantage. In contrast, the biotic resistance hypothesis states that many invaders are prevented from establishing because of competitive interactions, including herbivory, with native fauna and flora. Success or failure of spread and establishment might also be influenced by the presence or absence of mutualists, such as pollinators. Senecio madagascariensis (fireweed), an annual weed from South Africa, inhabits a similar range in Australia to the related native S. pinnatifolius. The aim of this study was to determine, within the context of invasion biology theory, whether the two Senecio species share insect fauna, including floral visitors and herbivores. Surveys were carried out in south-east Queensland on allopatric populations of the two Senecio species, with collected insects identified to morphospecies. Floral visitor assemblages were variable between populations. However, the two Senecio species shared the two most abundant floral visitors, honeybees and hoverflies. Herbivore assemblages, comprising mainly hemipterans of the families Cicadellidae and Miridae, were variable between sites and no patterns could be detected between Senecio species at the morphospecies level. However, when insect assemblages were pooled (i.e. community level analysis), S. pinnatifolius was shown to host a greater total abundance and richness of herbivores. Senecio madagascariensis is unlikely to be constrained by lack of pollinators in its new range and may benefit from lower levels of herbivory compared to its native congener S. pinnatifolius.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57740,Acceleration of Exotic Plant Invasion in a Forested Ecosystem by a Generalist Herbivore,S200039,R57741,Investigated species,L125925,Plants,"Abstract: The successful invasion of exotic plants is often attributed to the absence of coevolved enemies in the introduced range (i.e., the enemy release hypothesis). Nevertheless, several components of this hypothesis, including the role of generalist herbivores, remain relatively unexplored. We used repeated censuses of exclosures and paired controls to investigate the role of a generalist herbivore, white‐tailed deer (Odocoileus virginianus), in the invasion of 3 exotic plant species (Microstegium vimineum, Alliaria petiolata, and Berberis thunbergii) in eastern hemlock (Tsuga canadensis) forests in New Jersey and Pennsylvania (U.S.A.). This work was conducted in 10 eastern hemlock (T. canadensis) forests that spanned gradients in deer density and in the severity of canopy disturbance caused by an introduced insect pest, the hemlock woolly adelgid (Adelges tsugae). We used maximum likelihood estimation and information theoretics to quantify the strength of evidence for alternative models of the influence of deer density and its interaction with the severity of canopy disturbance on exotic plant abundance. Our results were consistent with the enemy release hypothesis in that exotic plants gained a competitive advantage in the presence of generalist herbivores in the introduced range. The abundance of all 3 exotic plants increased significantly more in the control plots than in the paired exclosures. For all species, the inclusion of canopy disturbance parameters resulted in models with substantially greater support than the deer density only models. Our results suggest that white‐tailed deer herbivory can accelerate the invasion of exotic plants and that canopy disturbance can interact with herbivory to magnify the impact. 
In addition, our results provide compelling evidence of nonlinear relationships between deer density and the impact of herbivory on exotic species abundance. These findings highlight the important role of herbivore density in determining impacts on plant abundance and provide evidence of the operation of multiple mechanisms in exotic plant invasion.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57748,"Cryptic seedling herbivory by nocturnal introduced generalists impacts survival, performance of native and exotic plants",S200177,R57750,Investigated species,L126045,Plants,"Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. 
Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57751,Plant-soil feedback induces shifts in biomass allocation in the invasive plant Chromolaena odorata,S200198,R57752,Investigated species,L126062,Plants,"1. Soil communities and their interactions with plants may play a major role in determining the success of invasive species. However, rigorous investigations of this idea using cross‐continental comparisons, including native and invasive plant populations, are still scarce.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57753,Release from soil pathogens plays an important role in the success of invasive Carpobrotus in the Mediterranean,S200224,R57754,Investigated species,L126084,Plants,"Introduced plant species can become locally dominant and threaten native flora and fauna. This dominance is often thought to be a result of release from specialist enemies in the invaded range, or the evolution of increased competitive ability. Soil borne microorganisms have often been overlooked as enemies in this context, but a less deleterious plant soil interaction in the invaded range could explain local dominance. Two plant species, Carpobrotus edulis and the hybrid Carpobrotus X cf. acinaciformis, are considered major pests in the Mediterranean basin. We tested if release from soil-borne enemies and/or evolution of increased competitive ability could explain this dominance. Comparing biomass production in non-sterile soil with that in sterilized soil, we found that inoculation with rhizosphere soil from the native range reduced biomass production by 32% while inoculation with rhizosphere soil from the invaded range did not have a significant effect on plant biomass. Genotypes from the invaded range, including a hybrid, did not perform better than plants from the native range in sterile soil. Hence evolution of increased competitive ability and hybridization do not seem to play a major role. We conclude that the reduced negative net impact of the soil community in the invaded range may contribute to the success of Carpobrotus species in the Mediterranean basin. © 2008 SAAB. Published by Elsevier B.V. All rights reserved.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57755,Release from foliar and floral fungal pathogen species does not explain the geographic spread of naturalized North American plants in Europe,S200247,R57756,Investigated species,L126103,Plants,"1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy‐release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus–plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare, pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy‐release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. 
To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non‐invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57761,Community structure of insect herbivores on introduced and native Solidago plants in Japan,S200320,R57762,Investigated species,L126164,Plants,"We compared community composition, density, and species richness of herbivorous insects on the introduced plant Solidago altissima L. (Asteraceae) and the related native species Solidago virgaurea L. in Japan. We found large differences in community composition on the two Solidago species. Five hemipteran sap feeders were found only on S. altissima. Two of them, the aphid Uroleucon nigrotuberculatum Olive (Hemiptera: Aphididae) and the scale insect Parasaissetia nigra Nietner (Hemiptera: Coccidae), were exotic species, accounting for 62% of the total individuals on S. altissima. These exotic sap feeders mostly determined the difference of community composition on the two plant species. In contrast, the herbivore community on S. virgaurea consisted predominately of five native insects: two lepidopteran leaf chewers and three dipteran leaf miners. Overall species richness did not differ between the plants because the increased species richness of sap feeders was offset by the decreased richness of leaf chewers and leaf miners on S. altissima. The overall density of herbivorous insects was higher on S. altissima than on S. virgaurea, because of the high density of the two exotic sap feeding species on S. altissima. We discuss the importance of analyzing community composition in terms of feeding guilds of insect herbivores for understanding how communities of insect herbivores are organized on introduced plants in novel habitats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57763,Entomofauna of the introduced Chinese Tallow Tree,S200343,R57764,Investigated species,L126183,Plants,"Abstract Entomofauna in monospecific stands of the introduced Chinese tallow tree (Sapium sebiferum) and native mixed woodlands was sampled in 1982 along the Texas coast and compared to samples of arthropods from an earlier study of native coastal prairie and from a study of arthropods in S. sebiferum in 2004. Species diversity, richness, and abundance were highest in prairie, and were higher in mixed woodland than in S. sebiferum. Nonmetric multidimensional scaling distinguished orders and families of arthropods, and families of herbivores in S. sebiferum from mixed woodland and coastal prairie. Taxonomic similarity between S. sebiferum and mixed woodland was 51%. Fauna from S. sebiferum in 2001 was more similar to mixed woodland than to samples from S. sebiferum collected in 1982. These results indicate that the entomofauna in S. sebiferum originated from mixed prairie and that, with time, these faunas became more similar. Species richness and abundance of herbivores was lower in S. sebiferum, but proportion of total species in all trophic groups, except herbivores, was higher in S. sebiferum than mixed woodland. Low concentration of tannin in leaves of S. sebiferum did not explain low loss of leaves to herbivores. Lower abundance of herbivores on introduced species of plants fits the enemy release hypothesis, and low concentration of defense compounds in the face of low number of herbivores fits the evolution of increased competitive ability hypothesis.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57791,Virulence of soil-borne pathogens and invasion by Prunus serotina,S200710,R57793,Investigated species,L126492,Plants,"*Globally, exotic invaders threaten biodiversity and ecosystem function. Studies often report that invading plants are less affected by enemies in their invaded vs home ranges, but few studies have investigated the underlying mechanisms. *Here, we investigated the variation in prevalence, species composition and virulence of soil-borne Pythium pathogens associated with the tree Prunus serotina in its native US and non-native European ranges by culturing, DNA sequencing and controlled pathogenicity trials. *Two controlled pathogenicity experiments showed that Pythium pathogens from the native range caused 38-462% more root rot and 80-583% more seedling mortality, and 19-45% less biomass production than Pythium from the non-native range. DNA sequencing indicated that the most virulent Pythium taxa were sampled only from the native range. The greater virulence of Pythium sampled from the native range therefore corresponded to shifts in species composition across ranges rather than variation within a common Pythium species. *Prunus serotina still encounters Pythium in its non-native range but encounters less virulent taxa. Elucidating patterns of enemy virulence in native and nonnative ranges adds to our understanding of how invasive plants escape disease. Moreover, this strategy may identify resident enemies in the non-native range that could be used to manage invasive plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57803,Testing hypotheses for exotic plant success: parallel experiments in the native and introduced ranges,S200862,R57805,Investigated species,L126620,Plants,"A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. 
These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57808,Range-expanding populations of a globally introduced weed experience negative plant-soil feedbacks,S200930,R57810,Investigated species,L126678,Plants,"Background Biological invasions are fundamentally biogeographic processes that occur over large spatial scales. Interactions with soil microbes can have strong impacts on plant invasions, but how these interactions vary among areas where introduced species are highly invasive vs. naturalized is still unknown. In this study, we examined biogeographic variation in plant-soil microbe interactions of a globally invasive weed, Centaurea solstitialis (yellow starthistle). We addressed the following questions (1) Is Centaurea released from natural enemy pressure from soil microbes in introduced regions? and (2) Is variation in plant-soil feedbacks associated with variation in Centaurea's invasive success? Methodology/Principal Findings We conducted greenhouse experiments using soils and seeds collected from native Eurasian populations and introduced populations spanning North and South America where Centaurea is highly invasive and noninvasive. Soil microbes had pervasive negative effects in all regions, although the magnitude of their effect varied among regions. These patterns were not unequivocally congruent with the enemy release hypothesis. Surprisingly, we also found that Centaurea generated strong negative feedbacks in regions where it is the most invasive, while it generated neutral plant-soil feedbacks where it is noninvasive. Conclusions/Significance Recent studies have found reduced below-ground enemy attack and more positive plant-soil feedbacks in range-expanding plant populations, but we found increased negative effects of soil microbes in range-expanding Centaurea populations. 
While such negative feedbacks may limit the long-term persistence of invasive plants, such feedbacks may also contribute to the success of invasions, either by having disproportionately negative impacts on competing species, or by yielding relatively better growth in uncolonized areas that would encourage lateral spread. Enemy release from soil-borne pathogens is not sufficient to explain the success of this weed in such different regions. The biogeographic variation in soil-microbe effects indicates that different mechanisms may operate on this species in different regions, thus establishing geographic mosaics of species interactions that contribute to variation in invasion success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57814,Remote analysis of biological invasion and the impact of enemy release,S200998,R57815,Investigated species,L126736,Plants,"Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0%, (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. 
A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57847,Coexistence between native and exotic species is facilitated by asymmetries in competitive ability and susceptibility to herbivores,S201444,R57849,Investigated species,L127114,Plants,"Differences between native and exotic species in competitive ability and susceptibility to herbivores are hypothesized to facilitate coexistence. However, little fieldwork has been conducted to determine whether these differences are present in invaded communities. Here, we experimentally examined whether asymmetries exist between native and exotic plants in a community invaded for over 200 years and whether removing competitors or herbivores influences coexistence. We found that natives and exotics exhibit pronounced asymmetries, as exotics are competitively superior to natives, but are more significantly impacted by herbivores. We also found that herbivore removal mediated the outcome of competitive interactions and altered patterns of dominance across our field sites. Collectively, these findings suggest that asymmetric biotic interactions between native and exotic plants can help to facilitate coexistence in invaded communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57860,Herbivory by an introduced Asian weevil negatively affects population growth of an invasive Brazilian shrub in Florida,S201623,R57862,Investigated species,L127267,Plants,"The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. 
Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57872,Arthropod Communities on Native and Nonnative Early Successional Plants,S201763,R57873,Investigated species,L127385,Plants,"ABSTRACT Early successional ruderal plants in North America include numerous native and nonnative species, and both are abundant in disturbed areas. The increasing presence of nonnative plants may negatively impact a critical component of food web function if these species support fewer or a less diverse arthropod fauna than the native plant species that they displace. We compared arthropod communities on six species of common early successional native plants and six species of nonnative plants, planted in replicated native and nonnative plots in a farm field. Samples were taken twice each year for 2 yr. In most arthropod samples, total biomass and abundance were substantially higher on the native plants than on the nonnative plants. Native plants produced as much as five times more total arthropod biomass and up to seven times more species per 100 g of dry leaf biomass than nonnative plants. Both herbivores and natural enemies (predators and parasitoids) predominated on native plants when analyzed separately. In addition, species richness was about three times greater on native than on nonnative plants, with 83 species of insects collected exclusively from native plants, and only eight species present only on nonnatives. These results support a growing body of evidence suggesting that nonnative plants support fewer arthropods than native plants, and therefore contribute to reduced food resources for higher trophic levels.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57889,"Biogeographic comparisons of herbivore attack, growth and impact of Japanese knotweed between Japan and France",S202007,R57891,Investigated species,L127593,Plants,"To shed light on the process of how exotic species become invasive, it is necessary to study them both in their native and non‐native ranges. Our intent was to measure differences in herbivory, plant growth and the impact on other species in Fallopia japonica in its native and non‐native ranges. We performed a cross‐range full descriptive, field study in Japan (native range) and France (non‐native range). We assessed DNA ploidy levels, the presence of phytophagous enemies, the amount of leaf damage, several growth parameters and the co‐occurrence of Fallopia japonica with other plant species of herbaceous communities. Invasive Fallopia japonica plants were all octoploid, a ploidy level we did not encounter in the native range, where plants were all tetraploid. Octoploids in France harboured far less phytophagous enemies, suffered much lower levels of herbivory, grew larger and had a much stronger impact on plant communities than tetraploid conspecifics in the native range in Japan. Our data confirm that Fallopia japonica performs better – plant vigour and dominance in the herbaceous community – in its non‐native than its native range. Because we could not find octoploids in the native range, we cannot separate the effects of differences in ploidy from other biogeographic factors. To go further, common garden experiments would now be needed to disentangle the proper role of each factor, taking into account the ploidy levels of plants in their native and non‐native ranges. Synthesis. 
As the process by which invasive plants successfully invade ecosystems in their non‐native range is probably multifactorial in most cases, examining several components – plant growth, herbivory load, impact on recipient systems – of plant invasions through biogeographic comparisons is important. Our study contributes towards filling this gap in the research, and it is hoped that this method will spread in invasion ecology, making such an approach more common.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57900,"The herbivorous arthropods associated with the invasive alien plant, Arundo donax, and the native analogous plant, Phragmites australis, in the Free State Province, South Africa",S202126,R57901,Investigated species,L127692,Plants,"The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014).",TRUE,noun
R24,Ecology and Evolutionary Biology,R57902,Herbivores on native and exotic Senecio plants: is host switching related to plant novelty and insect diet breadth under field conditions?,S202153,R57903,Investigated species,L127715,Plants,"Native herbivores can establish novel interactions with alien plants after invasion. Nevertheless, it is unclear whether these new associations are quantitatively significant compared to the assemblages with native flora under natural conditions. Herbivores associated with two exotic plants, namely Senecio inaequidens and S. pterophorus, and two coexisting natives, namely S. vulgaris and S. lividus, were surveyed in a replicated long‐term field study to ascertain whether the plant–herbivore assemblages in mixed communities are related to plant novelty and insect diet breadth. Native herbivores used exotic Senecio as their host plants. Of the 19 species of Lepidoptera, Diptera, and Hemiptera found in this survey, 14 were associated with the exotic Senecio plants. Most of these species were polyphagous, yet we found a higher number of individuals with a narrow diet breadth, which is contrary to the assumption that host switching mainly occurs in generalist herbivores. The Senecio specialist Sphenella marginata (Diptera: Tephritidae) was the most abundant and widely distributed insect species (ca. 80% of the identified specimens). Sphenella was associated with S. lividus, S. vulgaris and S. inaequidens and was not found on S. pterophorus. The presence of native plant congeners in the invaded community did not ensure an instantaneous ecological fitting between insects and alien plants. We conclude that novel associations between native herbivores and introduced Senecio plants are common under natural conditions. Plant novelty is, however, not the only predictor of herbivore abundance due to the complexity of natural conditions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57907,Little evidence for release from herbivores as a driver of plant invasiveness from a multi-species herbivore-removal experiment,S202274,R57911,Investigated species,L127820,Plants,"Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57920,Invasive plants escape from suppressive soil biota at regional scales,S202402,R57921,Investigated species,L127928,Plants,"A prominent hypothesis for plant invasions is escape from the inhibitory effects of soil biota. Although the strength of these inhibitory effects, measured as soil feedbacks, has been assessed between natives and exotics in non‐native ranges, few studies have compared the strength of plant–soil feedbacks for exotic species in soils from non‐native versus native ranges. We examined whether 6 perennial European forb species that are widespread invaders in North American grasslands (Centaurea stoebe, Euphorbia esula, Hypericum perforatum, Linaria vulgaris, Potentilla recta and Leucanthemum vulgare) experienced different suppressive effects of soil biota collected from 21 sites across both ranges. Four of the six species tested exhibited substantially reduced shoot biomass in ‘live’ versus sterile soil from Europe. In contrast, North American soils produced no significant feedbacks on any of the invasive species tested indicating a broad scale escape from the inhibitory effects of soil biota. Negative feedbacks generated by European soil varied idiosyncratically among sites and species. Since this variation did not correspond with the presence of the target species at field sites, it suggests that negative feedbacks can be generated from soil biota that are widely distributed in native ranges in the absence of density‐dependent effects. Synthesis. Our results show that for some invasives, native soils have strong suppressive potential, whereas this is not the case in soils from across the introduced range. Differences in regional‐scale evolutionary history among plants and soil biota could ultimately help explain why some exotics are able to occur at higher abundance in the introduced versus native range.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57926,Grassland fires may favor native over introduced plants by reducing pathogen loads,S202481,R57927,Investigated species,L127995,Plants,"Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordaceous, Cynosuros echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicale). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. 
Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57943,Comparison of invertebrate herbivores on native and non-native Senecio species: Implications for the enemy release hypothesis,S202754,R57947,Investigated species,L128228,Plants,"The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57950,Phytophagous Insects on Native and Non-Native Host Plants: Combining the Community Approach and the Biogeographical Approach,S202851,R57954,Investigated species,L128311,Plants,"During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57964,Natural selection on plant resistance to herbivores in the native and introduced range,S202987,R57965,Investigated species,L128425,Plants,"Plants introduced into a new range are expected to harbour fewer specialized herbivores and to receive less damage than conspecifics in native ranges. Datura stramonium was introduced in Spain about five centuries ago. Here, we compare damage by herbivores, plant size, and leaf trichomes between plants from non-native and native ranges and perform selection analyses. Non-native plants experienced much less damage, were larger and less pubescent than plants of native populations. While plant size was related to fitness in both ranges, selection to increase resistance was only detected in the native region. We suggest this is a consequence of a release from enemies in this new environment.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57971,"Insect assemblages associated with the exotic riparian shrub Russian olive (Elaeagnaceae), and co-occurring native shrubs in British Columbia, Canada",S203075,R57972,Investigated species,L128499,Plants,"AbstractRussian olive (Elaeagnus angustifolia Linnaeus; Elaeagnaceae) is an exotic shrub/tree that has become invasive in many riparian ecosystems throughout semi-arid, western North America, including southern British Columbia, Canada. Despite its prevalence and the potentially dramatic impacts it can have on riparian and aquatic ecosystems, little is known about the insect communities associated with Russian olive within its invaded range. At six sites throughout the Okanagan valley of southern British Columbia, Canada, we compared the diversity of insects associated with Russian olive plants to that of insects associated with two commonly co-occurring native plant species: Woods’ rose (Rosa woodsii Lindley; Rosaceae) and Saskatoon (Amelanchier alnifolia (Nuttall) Nuttall ex Roemer; Rosaceae). Total abundance did not differ significantly among plant types. Family richness and Shannon diversity differed significantly between Woods’ rose and Saskatoon, but not between either of these plant types and Russian olive. An abundance of Thripidae (Thysanoptera) on Russian olive and Tingidae (Hemiptera) on Saskatoon contributed to significant compositional differences among plant types. The families Chloropidae (Diptera), Heleomyzidae (Diptera), and Gryllidae (Orthoptera) were uniquely associated with Russian olive, albeit in low abundances. Our study provides valuable and novel information about the diversity of insects associated with an emerging plant invader of western Canada.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57973,Evidence for enemy release and increased seed production and size for two invasive Australian acacias,S203100,R57974,Investigated species,L128520,Plants,"Invasive plants are hypothesized to have higher fitness in introduced areas due to their release from pathogens and herbivores and the relocation of resources to reproduction. However, few studies have tested this hypothesis in native and introduced regions. A biogeographical approach is fundamental to understanding the mechanisms involved in plant invasions and to detect rapid evolutionary changes in the introduced area. Reproduction was assessed in native and introduced ranges of two invasive Australian woody legumes, Acacia dealbata and A. longifolia. Seed production, pre‐dispersal seed predation, seed and elaiosome size and seedling size were assessed in 7–10 populations from both ranges, taking into account the effect of differences in climate. There was a significantly higher percentage of fully developed seeds per pod, a lower proportion of aborted seeds and the absence of pre‐dispersal predation in the introduced range for both Acacia species. Acacia longifolia produced more seeds per pod in the invaded range, whereas A. dealbata produced more seeds per tree in the invaded range. Seeds were bigger in the invaded range for both species, and elaiosome: seed ratio was smaller for A. longifolia in the invaded range. Seedlings were also larger in the invaded range, suggesting that the increase in seed size results into greater offspring growth. There were no differences in the climatic conditions of sites occupied by A. longifolia in both regions. Minimum temperature was higher in Portuguese A. dealbata populations, but this difference did not explain the increase in seed production and seed size in the introduced range. It did have, however, a positive effect on the number of pods per tree. Synthesis. Acacia dealbata and A. 
longifolia escape pre‐dispersal predation in the introduced range and display a higher production of fully developed seeds per fruit and bigger seeds. These differences may explain the invasion of both species because they result in an increased seedling growth and the production of abundant soil seedbanks in the introduced area.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57988,Alien and native plant establishment in grassland communities is more strongly affected by disturbance than above- and below-ground enemies,S203294,R57989,Investigated species,L128684,Plants,"Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two‐year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net‐cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. 
Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57994,Can enemy release explain the invasion success of the diploid Leucanthemum vulgare in North America?,S203377,R57995,Investigated species,L128755,Plants,"Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum . We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum . Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) was comparable to that on L. ircutianum (26 and 71 %) but higher than that on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation was higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57996,A Comparison of Herbivore Damage on Three Invasive Plants and Their Native Congeners: Implications for the Enemy Release Hypothesis,S203400,R57997,Investigated species,L128774,Plants,"ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis). We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. perfoliata populations if its growth and reproduction is affected by the increased herbivore damage.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56706,Non-native ecosystem engineer alters estuarine communities,S188927,R56707,Investigated species,L117609,Polychaetes,"Many ecosystems are created by the presence of ecosystem engineers that play an important role in determining species' abundance and species composition. Additionally, a mosaic environment of engineered and non-engineered habitats has been shown to increase biodiversity. Non-native ecosystem engineers can be introduced into environments that do not contain or have lost species that form biogenic habitat, resulting in dramatic impacts upon native communities. Yet, little is known about how non-native ecosystem engineers interact with natives and other non-natives already present in the environment, specifically whether non-native ecosystem engineers facilitate other non-natives, and whether they increase habitat heterogeneity and alter the diversity, abundance, and distribution of benthic species. Through sampling and experimental removal of reefs, we examine the effects of a non-native reef-building tubeworm, Ficopomatus enigmaticus, on community composition in the central Californian estuary, Elkhorn Slough. Tubeworm reefs host significantly greater abundances of many non-native polychaetes and amphipods, particularly the amphipods Monocorophium insidiosum and Melita nitida, compared to nearby mudflats. Infaunal assemblages under F. enigmaticus reefs and around reef's edges show very low abundance and taxonomic diversity. Once reefs are removed, the newly exposed mudflat is colonized by opportunistic non-native species, such as M. insidiosum and the polychaete Streblospio benedicti, making removal of reefs a questionable strategy for control. These results show that provision of habitat by a non-native ecosystem engineer may be a mechanism for invasional meltdown in Elkhorn Slough, and that reefs increase spatial heterogeneity in the abundance and composition of benthic communities.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56545,Ecological traits of the amphipod invader Dikerogammarus villosus on a mesohabitat scale,S187095,R56546,Ecological Level of evidence,L116098,Population,"Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56559,"Exotic species replacement: shifting dominance of dreissenid mussels in the Soulanges Canal, upper St. Lawrence River, Canada",S187255,R56560,Ecological Level of evidence,L116230,Population,"Abstract During the early 1990s, 2 Eurasian macrofouling mollusks, the zebra mussel Dreissena polymorpha and the quagga mussel D. bugensis, colonized the freshwater section of the St. Lawrence River and decimated native mussel populations through competitive interference. For several years, zebra mussels dominated molluscan biomass in the river; however, quagga mussels have increased in abundance and are apparently displacing zebra mussels from the Soulanges Canal, west of the Island of Montreal. The ratio of quagga mussel biomass to zebra mussel biomass on the canal wall is correlated with depth, and quagga mussels constitute >99% of dreissenid biomass on bottom sediments. This dominance shift did not substantially affect the total dreissenid biomass, which has remained at 3 to 5 kg fresh mass /m2 on the canal walls for nearly a decade. The mechanism for this shift is unknown, but may be related to a greater bioenergetic efficiency for quaggas, which attained larger shell sizes than zebra mussels at all depths. Similar events have occurred in the lower Great Lakes where zebra mussels once dominated littoral macroinvertebrate biomass, demonstrating that a well-established and prolific invader can be replaced by another introduced species without prior extinction.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56577,Functional diversity of mammalian predators and extinction in island birds,S187454,R56578,Ecological Level of evidence,L116394,Population,"The probability of a bird species going extinct on oceanic islands in the period since European colonization is predicted by the number of introduced predatory mammal species, but the exact mechanism driving this relationship is unknown. One possibility is that larger exotic predator communities include a wider array of predator functional types. These predator communities may target native bird species with a wider range of behavioral or life history characteristics. We explored the hypothesis that the functional diversity of the exotic predators drives bird species extinctions. We also tested how different combinations of functionally important traits of the predators explain variation in extinction probability. Our results suggest a unique impact of each introduced mammal species on native bird populations, as opposed to a situation where predators exhibit functional redundancy. Further, the impact of each additional predator may be facilitated by those already present, suggesting the possibility of “invasional meltdown.”",TRUE,noun
R24,Ecology and Evolutionary Biology,R56624,Plant resources and colony growth in an invasive ant: the importance of honeydew-producing Hemiptera in carbohydrate transfer across trophic levels,S188006,R56625,Ecological Level of evidence,L116851,Population,"Abstract Studies have suggested that plant-based nutritional resources are important in promoting high densities of omnivorous and invasive ants, but there have been no direct tests of the effects of these resources on colony productivity. We conducted an experiment designed to determine the relative importance of plants and honeydew-producing insects feeding on plants to the growth of colonies of the invasive ant Solenopsis invicta (Buren). We found that colonies of S. invicta grew substantially when they only had access to unlimited insect prey; however, colonies that also had access to plants colonized by honeydew-producing Hemiptera grew significantly and substantially (≈50%) larger. Our experiment also showed that S. invicta was unable to acquire significant nutritional resources directly from the Hemiptera host plant but acquired them indirectly from honeydew. Honeydew alone is unlikely to be sufficient for colony growth, however, and both carbohydrates abundant in plants and proteins abundant in animals are likely to be necessary for optimal growth. Our experiment provides important insight into the effects of a common tritrophic interaction among an invasive mealybug, Antonina graminis (Maskell), an invasive host grass, Cynodon dactylon L. Pers., and S. invicta in the southeastern United States, suggesting that interactions among these species can be important in promoting extremely high population densities of S. invicta.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56678,"Effects of introduced Canada geese (Branta canadensis) on native plant communities of the southern gulf islands, British Columbia",S188622,R56679,Ecological Level of evidence,L117359,Population,"Abstract: Recent experiments suggest that introduced, non-migratory Canada geese (Branta canadensis) may be facilitating the spread of exotic grasses and decline of native plant species abundance on small islets in the Georgia Basin, British Columbia, which otherwise harbour outstanding examples of threatened maritime meadow ecosystems. We examined this idea by testing if the presence of geese predicted the abundance of exotic grasses and native competitors at 2 spatial scales on 39 islands distributed throughout the Southern Gulf and San Juan Islands of Canada and the United States, respectively. At the plot level, we found significant positive relationships between the percent cover of goose feces and exotic annual grasses. However, this trend was absent at the scale of whole islands. Because rapid population expansion of introduced geese in the region only began in the 1980s, our results are consistent with the hypothesis that the deleterious effects of geese on the cover of exotic annual grasses have yet to proceed beyond the local scale, and that a window of opportunity now exists in which to implement management strategies to curtail this emerging threat to native ecosystems. Research is now needed to test if the removal of geese results in the decline of exotic annual grasses.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56815,"Replacement of nonnative rainbow trout by nonnative brown trout in the Chitose River system, Hokkaido, northern Japan",S190148,R56816,Ecological Level of evidence,L118611,Population,"In this study, evidence for interspecific interaction was provided by comparing distribution patterns of nonnative rainbow trout Oncorhynchus mykiss and brown trout Salmo trutta between the past and present in the Chitose River system, Hokkaido, northern Japan. O. mykiss was first introduced in 1920 in the Chitose River system and has since successfully established a population. Subsequently, another nonnative salmonid species, S. trutta have expanded the Chitose River system since the early 1980s. At present, S. trutta have replaced O. mykiss in the majority of the Chitose River, although O. mykiss have persisted in areas above migration barriers that prevent S. trutta expansion. In conclusion, the results of this study highlight the role of interspecific interactions between sympatric nonnative species on the establishment and persistence of populations of nonnative species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56819,Over-invasion by functionally equivalent invasive species,S190192,R56820,Ecological Level of evidence,L118647,Population,"Multiple invasive species have now established at most locations around the world, and the rate of new species invasions and records of new invasive species continue to grow. Multiple invasive species interact in complex and unpredictable ways, altering their invasion success and impacts on biodiversity. Incumbent invasive species can be replaced by functionally similar invading species through competitive processes; however the generalized circumstances leading to such competitive displacement have not been well investigated. The likelihood of competitive displacement is a function of the incumbent advantage of the resident invasive species and the propagule pressure of the colonizing invasive species. We modeled interactions between populations of two functionally similar invasive species and indicated the circumstances under which dominance can be through propagule pressure and incumbent advantage. Under certain circumstances, a normally subordinate species can be incumbent and reject a colonizing dominant species, or successfully colonize in competition with a dominant species during simultaneous invasion. Our theoretical results are supported by empirical studies of the invasion of islands by three invasive Rattus species. Competitive displacement is prominent in invasive rats and explains the replacement of R. exulans on islands subsequently invaded by European populations of R. rattus and R. norvegicus. These competition outcomes between invasive species can be found in a broad range of taxa and biomes, and are likely to become more common. Conservation management must consider that removing an incumbent invasive species may facilitate invasion by another invasive species. Under very restricted circumstances of dominant competitive ability but lesser impact, competitive displacement may provide a novel method of biological control.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56843,Does whirling disease mediate hybridization between a native and nonnative trout?,S190459,R56844,Ecological Level of evidence,L118866,Population,"AbstractThe spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America. Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext...",TRUE,noun
R24,Ecology and Evolutionary Biology,R56849,The effects of mice on stoats in southern beech forests,S190525,R56850,Ecological Level of evidence,L118920,Population,"Introduced stoats (Mustela erminea) are important invasive predators in southern beech (Nothofagus sp.) forests in New Zealand. In these forests, one of their primary prey species – introduced house mice (Mus musculus), fluctuate dramatically between years, driven by the irregular heavy seed-fall (masting) of the beech trees. We examined the effects of mice on stoats in this system by comparing the weights, age structure and population densities of stoats caught on two large islands in Fiordland, New Zealand – one that has mice (Resolution Island) and one that does not (Secretary Island). On Resolution Island, the stoat population showed a history of recruitment spikes and troughs linked to beech masting, whereas the Secretary Island population had more constant recruitment, indicating that rodents are probably the primary cause for the ‘boom and bust’ population cycle of stoats in beech forests. Resolutions Island stoats were 10% heavier on average than Secretary Island stoats, supporting the hypothesis that the availability of larger prey (mice verses wētā) leads to larger stoats. Beech masting years on this island were also correlated with a higher weight for stoats born in the year of the masting event. The detailed demographic information on the stoat populations of these two islands supports previously suggested interactions among mice, stoats and beech masting. These interactions may have important consequences for the endemic species that interact with fluctuating populations of mice and stoats.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56867,Comparisons of isotopic niche widths of some invasive and indigenous fauna in a South African river,S190725,R56868,Ecological Level of evidence,L119084,Population,"Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (˜0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12–0.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24–0.60) and Azolla filiculoides (0.09–0.33) as well as detrital Barringtonia racemosa leaves (0.00–0.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56913,Scaling the consequences of interactions between invaders from the indivdual to the population level,S191236,R56914,Ecological Level of evidence,L119503,Population,"Abstract The impact of human‐induced stressors, such as invasive species, is often measured at the organismal level, but is much less commonly scaled up to the population level. Interactions with invasive species represent an increasingly common source of stressor in many habitats. However, due to the increasing abundance of invasive species around the globe, invasive species now commonly cause stresses not only for native species in invaded areas, but also for other invasive species. I examine the European green crab Carcinus maenas, an invasive species along the northeast coast of North America, which is known to be negatively impacted in this invaded region by interactions with the invasive Asian shore crab Hemigrapsus sanguineus. Asian shore crabs are known to negatively impact green crabs via two mechanisms: by directly preying on green crab juveniles and by indirectly reducing green crab fecundity via interference (and potentially exploitative) competition that alters green crab diets. I used life‐table analyses to scale these two mechanistic stressors up to the population level in order to examine their relative impacts on green crab populations. I demonstrate that lost fecundity has larger impacts on per capita population growth rates, but that both predation and lost fecundity are capable of reducing population growth sufficiently to produce the declines in green crab populations that have been observed in areas where these two species overlap. By scaling up the impacts of one invader on a second invader, I have demonstrated that multiple documented interactions between these species are capable of having population‐level impacts and that both may be contributing to the decline of European green crabs in their invaded range on the east coast of North America.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56915,Positive plant and bird diversity response to experimental deer population reduction after decades of uncontrolled browsing,S191258,R56916,Ecological Level of evidence,L119521,Population,"During the 20th century, deer (family Cervidae), both native and introduced populations, dramatically increased in abundance in many parts of the world and became seen as major threats to biodiversity in forest ecosystems. Here, we evaluated the consequences that restoring top‐down herbivore population control has on plants and birds.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56921,Twelve years of repeated wild hog activity promotes population maintenance of an invasive clonal plant in a coastal dune ecosystem,S191325,R56922,Ecological Level of evidence,L119576,Population,"Abstract Invasive animals can facilitate the success of invasive plant populations through disturbance. We examined the relationship between the repeated foraging disturbance of an invasive animal and the population maintenance of an invasive plant in a coastal dune ecosystem. We hypothesized that feral wild hog (Sus scrofa) populations repeatedly utilized tubers of the clonal perennial, yellow nutsedge (Cyperus esculentus) as a food source and evaluated whether hog activity promoted the long‐term maintenance of yellow nutsedge populations on St. Catherine's Island, Georgia, United States. Using generalized linear mixed models, we tested the effect of wild hog disturbance on permanent sites for yellow nutsedge culm density, tuber density, and percent cover of native plant species over a 12‐year period. We found that disturbance plots had a higher number of culms and tubers and a lower percentage of native live plant cover than undisturbed control plots. Wild hogs redisturbed the disturbed plots approximately every 5 years. Our research provides demographic evidence that repeated foraging disturbances by an invasive animal promote the long‐term population maintenance of an invasive clonal plant. Opportunistic facultative interactions such as we demonstrate in this study are likely to become more commonplace as greater numbers of introduced species are integrated into ecological communities around the world.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56923,Early life stages of exotic gobiids as new hosts for unionid glochidia,S191347,R56924,Ecological Level of evidence,L119594,Population,"Summary Introduction of an exotic species has the potential to alter interactions between fish and bivalves; yet our knowledge in this field is limited, not least by lack of studies involving fish early life stages (ELS). Here, for the first time, we examine glochidial infection of fish ELS by native and exotic bivalves in a system recently colonised by two exotic gobiid species (round goby Neogobius melanostomus, tubenose goby Proterorhinus semilunaris) and the exotic Chinese pond mussel Anodonta woodiana. The ELS of native fish were only rarely infected by native glochidia. By contrast, exotic fish displayed significantly higher native glochidia prevalence and mean intensity of infection than native fish (17 versus 2% and 3.3 versus 1.4 respectively), inferring potential for a parasite spillback/dilution effect. Exotic fish also displayed a higher parasitic load for exotic glochidia, inferring potential for invasional meltdown. Compared to native fish, presence of gobiids increased the total number of glochidia transported downstream on drifting fish by approximately 900%. We show that gobiid ELS are a novel, numerous and ‘attractive’ resource for unionid glochidia. As such, unionids could negatively affect gobiid recruitment through infection-related mortality of gobiid ELS and/or reinforce downstream unionid populations through transport on drifting gobiid ELS. These implications go beyond what is suggested in studies of older life stages, thereby stressing the importance of an holistic ontogenetic approach in ecological studies.",TRUE,noun
R24,Ecology and Evolutionary Biology,R144046,Land Use and Avian Species Diversity Along an Urban Gradient,S576582,R144048,Focal entity,R144052,Populations,"I examined the distribution and abundance of bird species across an urban gradient, and concomitant changes in community structure, by censusing summer resident bird populations at six sites in Santa Clara County, California (all former oak woodlands). These sites represented a gradient of urban land use that ranged from relatively undisturbed to highly developed, and included a biological preserve, recreational area, golf course, residential neighborhood, office park, and business district. The composition of the bird community shifted from predominantly native species in the undisturbed area to invasive and exotic species in the business district. Species richness, Shannon diversity, and bird biomass peaked at moderately disturbed sites. One or more species reached maximal densities in each of the sites, and some species were restricted to a given site. The predevelopment bird species (assumed to be those found at the most undisturbed site) dropped out gradually as the sites became more urban. These patterns were significantly related to shifts in habitat structure that occurred along the gradient, as determined by canonical correspondence analysis (CCA) using the environmental variables of percent land covered by pavement, buildings, lawn, grasslands, and trees or shrubs. I compared each formal site to four additional sites with similar levels of development within a two-county area to verify that the bird communities at the formal study sites were representative of their land use category.",TRUE,noun
R24,Ecology and Evolutionary Biology,R52133,Is phylogenetic relatedness to native species important for the establishment of reptiles introduced to California and Florida?,S163652,R53398,Investigated species,L99035,Reptiles,"Aim Charles Darwin posited that introduced species with close relatives were less likely to succeed because of fiercer competition resulting from their similarity to residents. There is much debate about the generality of this rule, and recent studies on plant and fish introductions have been inconclusive. Information on phylogenetic relatedness is potentially valuable for explaining invasion outcomes and could form part of screening protocols for minimizing future invasions. We provide the first test of this hypothesis for terrestrial vertebrates using two new molecular phylogenies for native and introduced reptiles for two regions with the best data on introduction histories.",TRUE,noun
R24,Ecology and Evolutionary Biology,R53295,Establishment of introduced reptiles increases with the presence and richness of native congeners,S162887,R53296,Investigated species,L98398,Reptiles,"Darwin proposed two contradictory hypotheses to explain the influence of congeners on the outcomes of invasion: the naturalization hypothesis, which predicts a negative relationship between the presence of congeners and invasion success, and the pre-adaptation hypothesis, which predicts a positive relationship between the presence of congeners and invasion success. Studies testing these hypotheses have shown mixed support. We tested these hypotheses using the establishment success of non-native reptiles and congener presence/absence and richness across the globe. Our results demonstrated support for the pre-adaptation hypothesis. We found that globally, both on islands and continents, establishment success was higher in the presence than in the absence of congeners and that establishment success increased with increasing congener richness. At the life form level, establishment success was higher for lizards, marginally higher for snakes, and not different for turtles in the presence of congeners; data were insufficient to test the hypotheses for crocodiles. There was no relationship between establishment success and congener richness for any life form. We suggest that we found support for the pre-adaptation hypothesis because, at the scale of our analysis, native congeners represent environmental conditions appropriate for the species rather than competition for niche space. Our results imply that areas to target for early detection of non-native reptiles are those that host closely related species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R52133,Is phylogenetic relatedness to native species important for the establishment of reptiles introduced to California and Florida?,S162497,R53248,Non-plant species ,L98068,Reptiles,"Aim Charles Darwin posited that introduced species with close relatives were less likely to succeed because of fiercer competition resulting from their similarity to residents. There is much debate about the generality of this rule, and recent studies on plant and fish introductions have been inconclusive. Information on phylogenetic relatedness is potentially valuable for explaining invasion outcomes and could form part of screening protocols for minimizing future invasions. We provide the first test of this hypothesis for terrestrial vertebrates using two new molecular phylogenies for native and introduced reptiles for two regions with the best data on introduction histories.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56537,Widespread association of the invasive ant Solenopsis invicta with an invasive mealybug,S186990,R56538,Outcome of interaction,L116010,Resource,"Factors such as aggressiveness and adaptation to disturbed environments have been suggested as important characteristics of invasive ant species, but diet has rarely been considered. However, because invasive ants reach extraordinary densities at introduced locations, increased feeding efficiency or increased exploitation of new foods should be important in their success. Earlier studies suggest that honeydew produced by Homoptera (e.g., aphids, mealybugs, scale insects) may be important in the diet of the invasive ant species Solenopsis invicta. To determine if this is the case, we studied associations of S. invicta and Homoptera in east Texas and conducted a regional survey for such associations throughout the species' range in the southeast United States. In east Texas, we found that S. invicta tended Homoptera extensively and actively constructed shelters around them. The shelters housed a variety of Homoptera whose frequency differed according to either site location or season, presumably because of differences in host plant availability and temperature. Overall, we estimate that the honeydew produced in Homoptera shelters at study sites in east Texas could supply nearly one-half of the daily energetic requirements of an S. invicta colony. Of that, 70% may come from a single species of invasive Homoptera, the mealybug Antonina graminis. Homoptera shelters were also common at regional survey sites and A. graminis occurred in shelters at nine of 11 survey sites. A comparison of shelter densities at survey sites and in east Texas suggests that our results from east Texas could apply throughout the range of S. invicta in the southeast United States. Antonina graminis may be an exceptionally important nutritional resource for S. invicta in the southeast United States. While it remains largely unstudied, the tending of introduced or invasive Homoptera also appears important to other, and perhaps all, invasive ant species. Exploitative or mutually beneficial associations that occur between these insects may be an important, previously unrecognized factor promoting their success.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56746,Ecology of brushtail possums in New Zealand dryland ecosystem,S189369,R56747,Outcome of interaction,L117971,Resource,"The introduced brushtail possum (Trichosurus vulpecula) is a major environmental and agricultural pest in New Zealand but little information is available on the ecology of possums in drylands, which cover c. 19% of the country. Here, we describe a temporal snapshot of the diet and feeding preferences of possums in a dryland habitat in New Zealand's South Island, as well as movement patterns and survival rates. We also briefly explore spatial patterns in capture rates. We trapped 279 possums at an average capture rate of 9 possums per 100 trap nights. Capture rates on individual trap lines varied from 0 to 38%, decreased with altitude, and were highest in the eastern (drier) parts of the study area. Stomach contents were dominated by forbs and sweet briar (Rosa rubiginosa); both items were consumed preferentially relative to availability. Possums also strongly preferred crack willow (Salix fragilis), which was uncommon in the study area and consumed only occasionally, but in large amounts. Estimated activity areas of 29 possums radio-tracked for up to 12 months varied from 0.2 to 19.5 ha (mean 5.1 ha). Nine possums (4 male, 5 female) undertook dispersal movements (≥1000 m), the longest of which was 4940 m. The most common dens of radio-collared possums were sweet briar shrubs, followed by rock outcrops. Estimated annual survival was 85% for adults and 54% for subadults. Differences between the diets, activity areas and den use of possums in this study and those in forest or farmland most likely reflect differences in availability and distribution of resources. Our results suggest that invasive willow and sweet briar may facilitate the existence of possums by providing abundant food and shelter. In turn, possums may facilitate the spread of weeds by acting as a seed vector. This basic ecological information will be useful in modelling and managing the impacts of possum populations in drylands.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56764,Cane toads on cowpats: commercial livestock production facilitates toad invasion in tropical Australia,S189569,R56765,Outcome of interaction,L118135,Resource,"Habitat disturbance and the spread of invasive organisms are major threats to biodiversity, but the interactions between these two factors remain poorly understood in many systems. Grazing activities may facilitate the spread of invasive cane toads (Rhinella marina) through tropical Australia by providing year-round access to otherwise-seasonal resources. We quantified the cane toad’s use of cowpats (feces piles) in the field, and conducted experimental trials to assess the potential role of cowpats as sources of prey, water, and warmth for toads. Our field surveys show that cane toads are found on or near cowpats more often than expected by chance. Field-enclosure experiments show that cowpats facilitate toad feeding by providing access to dung beetles. Cowpats also offer moist surfaces that can reduce dehydration rates of toads and are warmer than other nearby substrates. Livestock grazing is the primary form of land use over vast areas of Australia, and pastoral activities may have contributed substantially to the cane toad’s successful invasion of that continent.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55099,Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna,S187766,R56605,Outcome of interaction,L116652,Richness,"1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird‐dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy‐fruited species than indigenous species, we mapped the distribution of alien fleshy‐fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy‐fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy‐fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy‐fruited plants were found both beneath host trees and in the open, alien fleshy‐fruited plants were found only beneath trees. 4 Abundance of fleshy‐fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy‐fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy‐fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy‐fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi‐arid African savanna by alien fleshy‐fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem‐level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy‐fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54218,Rapid evolution in response to introduced predators I: rates and patterns of morphological and life-history trait divergence,S167466,R54219,Species name,L102028,Salmonids,"Abstract Background Introduced species can have profound effects on native species, communities, and ecosystems, and have caused extinctions or declines in native species globally. We examined the evolutionary response of native zooplankton populations to the introduction of non-native salmonids in alpine lakes in the Sierra Nevada of California, USA. We compared morphological and life-history traits in populations of Daphnia with a known history of introduced salmonids and populations that have no history of salmonid introductions. Results Our results show that Daphnia populations co-existing with fish have undergone rapid adaptive reductions in body size and in the timing of reproduction. Size-related traits decreased by up to 13 percent in response to introduced fish. Rates of evolutionary change are as high as 4,238 darwins (0.036 haldanes). Conclusion Species introductions into aquatic habitats can dramatically alter the selective environment of native species leading to a rapid evolutionary response. Knowledge of the rates and limits of adaptation is an important component of understanding the long-term effects of alterations in the species composition of communities. We discuss the evolutionary consequences of species introductions and compare the rate of evolution observed in the Sierra Nevada Daphnia to published estimates of evolutionary change in ecological timescales.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54828,Quantifying the impact of an extreme climate event on species diversity in fragmented temperate forests: the effect of the October 1987 storm on British broadleaved woodlands,S174708,R54829,Type of disturbance,L107936,Storm,"We report the impact of an extreme weather event, the October 1987 severe storm, on fragmented woodlands in southern Britain. We analysed ecological changes between 1971 and 2002 in 143 200‐m2 plots in 10 woodland sites exposed to the storm with an ecologically equivalent sample of 150 plots in 16 non‐exposed sites. Comparing both years, understorey plant species‐richness, species composition, soil pH and woody basal area of the tree and shrub canopy were measured. We tested the hypothesis that the storm had deflected sites from the wider national trajectory of an increase in woody basal area and reduced understorey species‐richness associated with ageing canopies and declining woodland management. We also expected storm disturbance to amplify the background trend of increasing soil pH, a UK‐wide response to reduced atmospheric sulphur deposition. Path analysis was used to quantify indirect effects of storm exposure on understorey species richness via changes in woody basal area and soil pH. By 2002, storm exposure was estimated to have increased mean species richness per 200 m2 by 32%. Woody basal area changes were highly variable and did not significantly differ with storm exposure. Increasing soil pH was associated with a 7% increase in richness. There was no evidence that soil pH increased more as a function of storm exposure. Changes in species richness and basal area were negatively correlated: a 3.4% decrease in richness occurred for every 0.1‐m2 increase in woody basal area per plot. Despite all sites substantially exceeding the empirical critical load for nitrogen deposition, there was no evidence that in the 15 years since the storm, disturbance had triggered a eutrophication effect associated with dominance of gaps by nitrophilous species. Synthesis. Although the impacts of the 1987 storm were spatially variable in terms of impacts on woody basal area, the storm had a positive effect on understorey species richness. There was no evidence that disturbance had increased dominance of gaps by invasive species. This could change if recovery from acidification results in a soil pH regime associated with greater macronutrient availability.",TRUE,noun
R24,Ecology and Evolutionary Biology,R56616,Non-native habitat as home for non-native species: comparison of communities associated with invasive tubeworm and native oyster reefs,S187905,R56617,Investigated species,L116767,Tubeworm,"Introduction vectors for marine non-native species, such as oyster culture and boat foul- ing, often select for organisms dependent on hard substrates during some or all life stages. In soft- sediment estuaries, hard substrate is a limited resource, which can increase with the introduction of hard habitat-creating non-native species. Positive interactions between non-native, habitat-creating species and non-native species utilizing such habitats could be a mechanism for enhanced invasion success. Most previous studies on aquatic invasive habitat-creating species have demonstrated posi- tive responses in associated communities, but few have directly addressed responses of other non- native species. We explored the association of native and non-native species with invasive habitat- creating species by comparing communities associated with non-native, reef-building tubeworms Ficopomatus enigmaticus and native oysters Ostrea conchaphila in Elkhorn Slough, a central Califor- nia estuary. Non-native habitat supported greater densities of associated organisms—primarily highly abundant non-native amphipods (e.g. Monocorophium insidiosum, Melita nitida), tanaid (Sinelebus sp.), and tube-dwelling polychaetes (Polydora spp.). Detritivores were the most common trophic group, making up disproportionately more of the community associated with F. enigmaticus than was the case in the O. conchaphila community. Analysis of similarity (ANOSIM) showed that native species' community structure varied significantly among sites, but not between biogenic habi- tats. In contrast, non-natives varied with biogenic habitat type, but not with site. Thus, reefs of the invasive tubeworm F. enigmaticus interact positively with other non-native species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54620,A comparison of the urban flora of different phytoclimatic regions in Italy,S172227,R54621,Type of disturbance,L105871,Urbanization,"This study is a comparison of the spontaneous vascular flora of five Italian cities: Milan, Ancona, Rome, Cagliari and Palermo. The aims of the study are to test the hypothesis that urbanization results in uniformity of urban floras, and to evaluate the role of alien species in the flora of settlements located in different phytoclimatic regions. To obtain comparable data, ten plots of 1 ha, each representing typical urban habitats, were analysed in each city. The results indicate a low floristic similarity between the cities, while the strongest similarity appears within each city and between each city and the seminatural vegetation of the surrounding region. In the Mediterranean settlements, even the most urbanized plots reflect the characters of the surrounding landscape and are rich in native species, while aliens are relatively few. These results differ from the reported uniformity and the high proportion of aliens which generally characterize urban floras elsewhere. To explain this trend the importance of apophytes (indigenous plants expanding into man-made habitats) is highlighted; several Mediterranean species adapted to disturbance (i.e. grazing, trampling, and human activities) are pre-adapted to the urban environment. In addition, consideration is given to the minor role played by the ‘urban heat island’ in the Mediterranean basin, and to the structure and history of several Italian settlements, where ancient walls, ruins and archaeological sites in the periphery as well as in the historical centres act as conservative habitats and provide connection with seed-sources on the outskirts.",TRUE,noun
R24,Ecology and Evolutionary Biology,R54638,Exotic invasive species in urban wetlands: environmental correlates and implications for wetland management,S172447,R54639,Type of disturbance,L106055,Urbanization,"Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications. The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment, inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species.",TRUE,noun
R24,Ecology and Evolutionary Biology,R57988,Alien and native plant establishment in grassland communities is more strongly affected by disturbance than above- and below-ground enemies,S203304,R57989,Research Method,L128693,Experiment,"Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two‐year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net‐cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role.",TRUE,noun
R24,Ecology and Evolutionary Biology,R55061,Determinants of vertebrate invasion success in Europe and North America,S176880,R55063,Measure of invasion success,L109492,Spread,"Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.",TRUE,noun
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717784,R187531,gender,R187512,male,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,noun
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717757,R187531,target population,R178513,academics,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,noun
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717702,R187526,influencing factor,R187528,child,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,noun
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717783,R187531,gender,R187511,female,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,noun
R267,Energy Systems,R110083,Optimal Sizing and Scheduling of Hybrid Energy Systems: The Cases of Morona Santiago and the Galapagos Islands,S502056,R110088,System location,L362963,Galapagos,"Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV–wind–battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV–diesel–battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.",TRUE,noun
R194,Engineering,R139938,Micromachined accelerometer with no proof mass,S558637,R139940,Working fluid,L392589,Air,"This paper describes a revolutionary micromachined accelerometer which is simple, reliable, and inexpensive to make. The operating principle of this accelerometer is based on free-convection heat transfer of a tiny hot air bubble in an enclosed chamber. An experimental device has demonstrated a 0.6 milli-g sensitivity which can theoretically be extended to sub-micro-g level.",TRUE,noun
R194,Engineering,R139290,Engineered Hierarchical CuO Nanoleaves Based Electrochemical Nonenzymatic Biosensor for Glucose Detection,S555197,R139294,Has study area,L390561,Biosensors,"In this study, we synthesized hierarchical CuO nanoleaves in large-quantity via the hydrothermal method. We employed different techniques to characterize the morphological, structural, optical properties of the as-prepared hierarchical CuO nanoleaves sample. An electrochemical based nonenzymatic glucose biosensor was fabricated using engineered hierarchical CuO nanoleaves. The electrochemical behavior of fabricated biosensor towards glucose was analyzed with cyclic voltammetry (CV) and amperometry (i–t) techniques. Owing to the high electroactive surface area, hierarchical CuO nanoleaves based nonenzymatic biosensor electrode shows enhanced electrochemical catalytic behavior for glucose electro-oxidation in 100 mM sodium hydroxide (NaOH) electrolyte. The nonenzymatic biosensor displays a high sensitivity (1467.32 μA/(mM cm²)), linear range (0.005–5.89 mM), and detection limit of 12 nM (S/N = 3). Moreover, biosensor displayed good selectivity, reproducibility, repeatability, and stability at room temperature over three-week storage period. Further, as-fabricated nonenzymatic glucose biosensors were employed for practical applications in human serum sample measurements. The obtained data were compared to the commercial biosensor, which demonstrates the practical usability of nonenzymatic glucose biosensors in real sample analysis.",TRUE,noun
R194,Engineering,R139614,Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine,S557160,R139617,keywords,L391637,Bromine,"Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (≥60%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).",TRUE,noun
R194,Engineering,R141119,High-isolation CPW MEMS shunt switches-part 1: modeling ,S564024,R141121,keywords,L395800,Capacitance,"This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-μm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.",TRUE,noun
R194,Engineering,R141153,Effect of Environmental Humidity on Dielectric Charging Effect in RF MEMS Capacitive Switches Based on C–V Properties,S564269,R141155,keywords,L395995,Capacitance,"A capacitance-voltage (C–V) model is developed for RF microelectromechanical systems (MEMS) switches at upstate and downstate. The transient capacitance response of the RF MEMS switches at different switch states was measured for different humidity levels. By using the C–V model as well as the voltage shift dependent of trapped charges, the transient trapped charges at different switch states and humidity levels are obtained. Charging models at different switch states are explored in detail. It is shown that the injected charges increase linearly with humidity levels and the internal polarization increases with increasing humidity at downstate. The speed of charge injection at 80% relative humidity (RH) is about ten times faster than that at 20% RH. A measurement of pull-in voltage shifts by C–V sweep cycles at 20% and 80% RH gives a reasonable evidence. The present model is useful to understand the pull-in voltage shift of the RF MEMS switch.",TRUE,noun
R194,Engineering,R155352,Flexible Quasi-Vertical In-Ga-Zn-O Thin-Film Transistor With 300-nm Channel Length,S622741,R155354,keywords,L428697,Capacitance,"In this letter, we report a flexible Indium-Gallium-Zinc-Oxide quasi-vertical thin-film transistor (QVTFT) with 300-nm channel length, fabricated on a free-standing polyimide foil, using a low-temperature process <150 °C. A bilayer lift-off process is used to structure a spacing layer with a tilted sidewall and the drain contact on top of the source electrode. The resulting quasi-vertical profile ensures a good coverage of the successive device layers. The fabricated flexible QVTFT exhibits an ON/OFF current ratio of 10⁴, a threshold voltage of 1.5 V, a maximum transconductance of 0.73 μS μm⁻¹, and a total gate capacitance of 76 nF μm⁻¹. From S-parameter measurements, we extracted a transit frequency of 1.5 MHz. Furthermore, the flexible QVTFT is fully operational when bent to a tensile radius of 5 mm.",TRUE,noun
R194,Engineering,R141130,Effects of surface roughness on electromagnetic characteristics of capacitive switches,S564255,R141132, MEMS switch type,L395981,Capacitive,"This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.",TRUE,noun
R194,Engineering,R135551,Flexible Capacitive Pressure Sensor Enhanced by Tilted Micropillar Arrays,S536337,R135554,keywords,R135593,Capacitive,"Sensitivity of the sensor is of great importance in practical applications of wearable electronics or smart robotics. In the present study, a capacitive sensor enhanced by a tilted micropillar array-structured dielectric layer is developed. Because the tilted micropillars undergo bending deformation rather than compression deformation, the distance between the electrodes is easier to change, even discarding the contribution of the air gap at the interface of the structured dielectric layer and the electrode, thus resulting in high pressure sensitivity (0.42 kPa⁻¹) and very small detection limit (1 Pa). In addition, eliminating the presence of uncertain air gap, the dielectric layer is strongly bonded with the electrode, which makes the structure robust and endows the sensor with high stability and reliable capacitance response. These characteristics allow the device to remain in normal use without the need for repair or replacement despite mechanical damage. Moreover, the proposed sensor can be tailored to any size and shape, which is further demonstrated in wearable application. This work provides a new strategy for sensors that are required to be sensitive and reliable in actual applications.",TRUE,noun
R194,Engineering,R139623,Hybrid Perovskite Films by a New Variant of Pulsed Excimer Laser Deposition: A Room-Temperature Dry Process,S557216,R139625,keywords,L391682,Deposition,"A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3–xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.",TRUE,noun
R194,Engineering,R139618,Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes,S557194,R139622,keywords,L391664,Electrodes,"Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiOx/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of ∼11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm⁻², a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.",TRUE,noun
R194,Engineering,R141136,A zipper RF MEMS tunable capacitor with interdigitated RF and actuation electrodes,S564144,R141138,keywords,L395900,Electrodes,"This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70–90 fF, Cmax = 240–270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.",TRUE,noun
R194,Engineering,R151008,Quality Inspection of Textile Artificial Textures Using a Neuro-Symbolic Hybrid System Methodology ,S645610,R151010,Has participants,R157547,expert,"In the industrial sector there are many processes where the visual inspection is essential, the automation of that processes becomes a necessity to guarantee the quality of several objects. In this paper we propose a methodology for textile quality inspection based on the texture cue of an image. To solve this, we use a Neuro-Symbolic Hybrid System (NSHS) that allow us to combine an artificial neural network and the symbolic representation of the expert knowledge. The artificial neural network uses the CasCor learning algorithm and we use production rules to represent the symbolic knowledge. The features used for inspection has the advantage of being tolerant to rotation and scale changes. We compare the results with those obtained from an automatic computer vision task, and we conclude that results obtained using the proposed methodology are better.",TRUE,noun
R194,Engineering,R155344,Flexible Self-Aligned Amorphous InGaZnO Thin-Film Transistors With Submicrometer Channel Length and a Transit Frequency of 135 MHz,S622735,R155346,keywords,L428691,Fabrication,"Flexible large area electronics promise to enable new devices such as rollable displays and electronic skins. Radio frequency (RF) applications demand circuits operating in the megahertz regime, which is hard to achieve for electronics fabricated on amorphous and temperature sensitive plastic substrates. Here, we present self-aligned amorphous indium-gallium-zinc oxide-based thin-film transistors (TFTs) fabricated on free-standing plastic foil using fabrication temperatures . Self-alignment by backside illumination between gate and source/drain electrodes was used to realize flexible transistors with a channel length of 0.5 μm and reduced parasitic capacities. The flexible TFTs exhibit a transit frequency of 135 MHz when operated at 2 V. The device performance is maintained when the TFTs are bent to a tensile radius of 3.5 mm, which makes this technology suitable for flexible RFID tags and AM radios.",TRUE,noun
R194,Engineering,R144780,Solar Blind Photodetectors Enabled by Nanotextured β-Ga2O3 Films Grown via Oxidation of GaAs Substrates,S579610,R144782,substrate,L405311,GaAs,"A simple and inexpensive method for growing Ga2O3 using GaAs wafers is demonstrated. Si-doped GaAs wafers are heated to 1050 °C in a horizontal tube furnace in both argon and air ambients in order to convert their surfaces to β-Ga2O3. The β-Ga2O3 films are characterized using scanning electron micrograph, energy-dispersive X-ray spectroscopy, and X-ray diffraction. They are also used to fabricate solar blind photodetectors. The devices, which had nanotextured surfaces, exhibited a high sensitivity to ultraviolet (UV) illumination due in part to large surface areas. Furthermore, the films have coherent interfaces with the substrate, which leads to a robust device with high resistance to thermo-mechanical stress. The photoconductance of the β-Ga2O3 films is found to increase by more than three orders of magnitude under 270 nm ultraviolet illumination with respect to the dark current. The fabricated device shows a responsivity of ∼292 mA/W at this wavelength.",TRUE,noun
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555143,R139286,keywords,L390517,glucose,"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM⁻¹ cm⁻²), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,noun
R194,Engineering,R141640,Photoluminescence based H2 and O2 gas sensing by ZnO nanowires,S567826,R141643,Target gas,L398637,Hydrogen,"Gas sensing properties of ZnO nanowires prepared via thermal chemical vapor deposition method were investigated by analyzing change in their photoluminescence (PL) spectra. The as-synthesized nanowires show two different PL peaks positioned at 380 nm and 520 nm. The 380 nm emission is ascribed to near band edge emission, and the green peak (520 nm) appears due to the oxygen vacancy defects. The intensity of the green PL signal enhances upon hydrogen gas exposure, whereas it gets quenched upon oxygen gas loading. The ZnO nanowires' sensing response values were observed as about 54% for H2 gas and 9% for O2 gas at room temperature for 50 sccm H2/O2 gas flow rate. The sensor response was also analyzed as a function of sample temperature ranging from 300 K to 400 K. A conclusion was derived from the observations that the H2/O2 gases affect the adsorbed oxygen species on the surface of ZnO nanowires. The adsorbed species result in the band bending and hence changes the depletion region which causes variation i...",TRUE,noun
R194,Engineering,R148160,Sensitivity amplification of fiber-optic in-line Mach–Zehnder Interferometer sensors with modified Vernier-effect,S594550,R148162,keywords,L413316,Interferometer,"In this paper, a novel sensitivity amplification method for fiber-optic in-line Mach-Zehnder interferometer (MZI) sensors has been proposed and demonstrated. The sensitivity magnification is achieved through a modified Vernier-effect. Two cascaded in-line MZIs based on offset splicing of single mode fiber (SMF) have been used to verify the effect of sensitivity amplification. Vernier-effect is generated due to the small free spectral range (FSR) difference between the cascaded in-line MZIs. Frequency component corresponding to the envelope of the superimposed spectrum is extracted to take Inverse Fast Fourier Transform (IFFT). Thus we can obtain the envelope precisely from the messy superimposed spectrum. Experimental results show that a maximum sensitivity amplification factor of nearly 9 is realized. The proposed sensitivity amplification method is universal for the vast majority of in-line MZIs.",TRUE,noun
R194,Engineering,R155294,Vernier effect of two cascaded in-fiber Mach–Zehnder interferometers based on a spherical-shaped structure,S621790,R155296,keywords,L428065,interferometers,"The Vernier effect of two cascaded in-fiber Mach-Zehnder interferometers (MZIs) based on a spherical-shaped structure has been investigated. The envelope based on the Vernier effect is actually formed by a frequency component of the superimposed spectrum, and the frequency value is determined by the subtraction between the optical path differences of two cascaded MZIs. A method based on band-pass filtering is put forward to extract the envelope efficiently; strain and curvature measurements are carried out to verify the validity of the method. The results show that the strain and curvature sensitivities are enhanced to -8.47 pm/με and -33.70 nm/m⁻¹ with magnification factors of 5.4 and -5.4, respectively. The detection limit of the sensors with the Vernier effect is also discussed.",TRUE,noun
R194,Engineering,R155301,High-Sensitivity Fiber-Optic Strain Sensor Based on the Vernier Effect and Separated Fabry–Perot Interferometers,S621797,R155306,keywords,L428072,interferometers,"A high-sensitivity fiber-optic strain sensor, based on the Vernier effect and separated Fabry–Perot interferometers (FPIs), is proposed and experimentally demonstrated. One air-cavity FPI is used as a sensing FPI (SFPI) and another is used as a matched FPI (MFPI) to generate the Vernier effect. The two FPIs are connected by a fiber link but separated by a long section of single-mode fiber (SMF). The SFPI is fabricated by splicing a section of microfiber between two SMFs with a large lateral offset, and the MFPI is formed by a section of hollow-core fiber sandwiched between two SMFs. By using the Vernier effect, the strain sensitivity of the proposed sensor reaches 1.15 nm/με, which is the highest strain sensitivity of an FPI-based sensor reported so far. Owing to the separated structure of the proposed sensor, the MFPI can be isolated from the SFPI and the detection environment. Therefore, the MFPI is not affected by external physical quantities (such as strain and temperature) and thus has a very low temperature cross-sensitivity. The experimental results show that a low-temperature cross-sensitivity of 0.056 με/°C can be obtained with the proposed sensor. With its advantages of simple fabrication, high strain sensitivity, and low-temperature cross-sensitivity, the proposed sensor has great application prospects in several fields.",TRUE,noun
R194,Engineering,R155367,Highly Robust Flexible Vertical-Channel Thin-Film Transistors Using Atomic-Layer-Deposited Oxide Channels and Zeocoat Spacers on Ultrathin Polyimide Substrates,S622746,R155371,keywords,L428702,Layers,Mechanically flexible vertical-channel-structured thin-film transistors (VTFTs) with a channel length of 200 nm were fabricated on 1.2 μm thick colorless polyimide (CPI) substrates. All layers comp...,TRUE,noun
R194,Engineering,R141656,Natural Biowaste-Cocoon-Derived Granular Activated Carbon-Coated ZnO Nanorods: A Simple Route To Synthesizing a Core–Shell Structure and Its Highly Enhanced UV and Hydrogen Sensing Properties,S567953,R141660,ZnO form,L398732,Nanorods,"Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs) and is exposed by systematic material analysis. The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W⁻¹, which is superior to that of as-grown ZNRs (0.6 A W⁻¹). Besides this, switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the finest distribution of GAC on ZNRs results in rapid electron transportation between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also well-retained excellent sensitivity, repeatability, and long-term stability. Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive.",TRUE,noun
R194,Engineering,R141621,Probing the highly efficient room temperature ammonia gas sensing properties of a luminescent ZnO nanowire array prepared via an AAO-assisted template route,S567676,R141623,ZnO form,L398527,Nanowires,"Here, we report the facile synthesis of a highly ordered luminescent ZnO nanowire array using a low temperature anodic aluminium oxide (AAO) template route which can be economically produced in large scale quantity. The as-synthesized nanowires have diameters ranging from 60 to 70 nm and length ∼11 μm. The photoluminescence spectrum reveals that the AAO/ZnO assembly has a strong green emission peak at 490 nm upon excitation at a wavelength of 406 nm. Furthermore, the ZnO nanowire array-based gas sensor has been fabricated by a simple micromechanical technique and its NH3 gas sensing properties have been explored thoroughly. The fabricated gas sensor exhibits excellent sensitivity and fast response to NH3 gas at room temperature. Moreover, for 50 ppm NH3 concentration, the observed value of sensitivity is around 68%, while the response and recovery times are 28 and 29 seconds, respectively. The present synthesis technique to produce a highly ordered ZnO nanowire array and a fabricated gas sensor has great potential to push the low cost gas sensing nanotechnology.",TRUE,noun
R194,Engineering,R141640,Photoluminescence based H2 and O2 gas sensing by ZnO nanowires,S567825,R141643,ZnO form,L398636,Nanowires,"Gas sensing properties of ZnO nanowires prepared via thermal chemical vapor deposition method were investigated by analyzing change in their photoluminescence (PL) spectra. The as-synthesized nanowires show two different PL peaks positioned at 380 nm and 520 nm. The 380 nm emission is ascribed to near band edge emission, and the green peak (520 nm) appears due to the oxygen vacancy defects. The intensity of the green PL signal enhances upon hydrogen gas exposure, whereas it gets quenched upon oxygen gas loading. The ZnO nanowires' sensing response values were observed as about 54% for H2 gas and 9% for O2 gas at room temperature for 50 sccm H2/O2 gas flow rate. The sensor response was also analyzed as a function of sample temperature ranging from 300 K to 400 K. A conclusion was derived from the observations that the H2/O2 gases affect the adsorbed oxygen species on the surface of ZnO nanowires. The adsorbed species result in the band bending and hence changes the depletion region which causes variation i...",TRUE,noun
R194,Engineering,R139273,A Highly Sensitive Nonenzymatic Glucose Biosensor Based on the Regulatory Effect of Glucose on Electrochemical Behaviors of Colloidal Silver Nanoparticles on MoS2,S555077,R139276,keywords,L390464,nonenzymatic,"A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. Additionally, the excellent sensitivity (9044.6 μA·mM⁻¹·cm⁻²), low detection limit (0.03 μM), appropriate linear range of 0.1–1000 μM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.",TRUE,noun
R194,Engineering,R145538,Role of ALD Al2O3 Surface Passivation on the Performance of p-Type Cu2O Thin Film Transistors,S582817,R145548,keywords,L407057,Passivation,"High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 °C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.",TRUE,noun
R194,Engineering,R139602,Organometal Halide Perovskites as Visible-Light Sensitizers for Photovoltaic Cells,S557068,R139603,keywords,L391562,Perovskites,"Two organolead halide perovskite nanocrystals, CH3NH3PbBr3 and CH3NH3PbI3, were found to efficiently sensitize TiO2 for visible-light conversion in photoelectrochemical cells. When self-assembled on mesoporous TiO2 films, the nanocrystalline perovskites exhibit strong band-gap absorptions as semiconductors. The CH3NH3PbI3-based photocell with spectral sensitivity of up to 800 nm yielded a solar energy conversion efficiency of 3.8%. The CH3NH3PbBr3-based cell showed a high photovoltage of 0.96 V with an external quantum conversion efficiency of 65%.",TRUE,noun
R194,Engineering,R139614,Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine,S557158,R139617,keywords,L391635,Perovskites,"Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (≥60%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).",TRUE,noun
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557300,R139633,keywords,L391754,Perovskites,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86 (26.82) mA/cm², and a fill factor of 70.6 (70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,noun
R194,Engineering,R144780,Solar Blind Photodetectors Enabled by Nanotextured β-Ga2O3 Films Grown via Oxidation of GaAs Substrates,S579619,R144782,keywords,L405315,photodetector,"A simple and inexpensive method for growing Ga2O3 using GaAs wafers is demonstrated. Si-doped GaAs wafers are heated to 1050 °C in a horizontal tube furnace in both argon and air ambients in order to convert their surfaces to β-Ga2O3. The β-Ga2O3 films are characterized using scanning electron micrograph, energy-dispersive X-ray spectroscopy, and X-ray diffraction. They are also used to fabricate solar blind photodetectors. The devices, which had nanotextured surfaces, exhibited a high sensitivity to ultraviolet (UV) illumination due in part to large surface areas. Furthermore, the films have coherent interfaces with the substrate, which leads to a robust device with high resistance to thermo-mechanical stress. The photoconductance of the β-Ga2O3 films is found to increase by more than three orders of magnitude under 270 nm ultraviolet illumination with respect to the dark current. The fabricated device shows a responsivity of ∼292 mA/W at this wavelength.",TRUE,noun
R194,Engineering,R144792,Thermal annealing effect on β-Ga2O3 thin film solar blind photodetector heteroepitaxially grown on sapphire substrate,S580079,R144794,keywords,L405540,photodetector,"This paper presents the effect of thermal annealing on β‐Ga2O3 thin film solar‐blind (SB) photodetector (PD) synthesized on c‐plane sapphire substrates by a low pressure chemical vapor deposition (LPCVD). The thin films were synthesized using high purity gallium (Ga) and oxygen (O2) as source precursors. The annealing was performed ex situ the under the oxygen atmosphere, which helped to reduce oxygen or oxygen‐related vacancies in the thin film. Metal/semiconductor/metal (MSM) type photodetectors were fabricated using both the as‐grown and annealed films. The PDs fabricated on the annealed films had lower dark current, higher photoresponse and improved rejection ratio (R250/R370 and R250/R405) compared to the ones fabricated on the as‐grown films. These improved PD performances are due to the significant reduction of the photo‐generated carriers trapped by oxygen or oxygen‐related vacancies.",TRUE,noun
R194,Engineering,R144807,Solar blind deep ultraviolet β-Ga2O3 photodetectors grown on sapphire by the Mist-CVD method,S580086,R144810,keywords,L405547,Photodetectors,"In this report, we demonstrate high spectral responsivity (SR) solar blind deep ultraviolet (UV) β-Ga2O3 metal-semiconductor-metal (MSM) photodetectors grown by the mist chemical-vapor deposition (Mist-CVD) method. The β-Ga2O3 thin film was grown on c-plane sapphire substrates, and the fabricated MSM PDs with Al contacts in an interdigitated geometry were found to exhibit peak SR > 150 A/W for the incident light wavelength of 254 nm at a bias of 20 V. The devices exhibited very low dark current, about 14 pA at 20 V, and showed sharp transients with a photo-to-dark current ratio > 10⁵. The corresponding external quantum efficiency is over 7 × 10⁴%. The excellent deep UV β-Ga2O3 photodetectors will enable significant advancements for the next-generation photodetection applications.",TRUE,noun
R194,Engineering,R139618,Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes,S557195,R139622,keywords,L391665,Polyethylenimine,"Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiOx/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of ∼11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm⁻², a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.",TRUE,noun
R194,Engineering,R141127,RF MEMS Switches With Enhanced Power-Handling Capabilities,S564074,R141129,keywords,L395842,Self-actuation,"This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.",TRUE,noun
R194,Engineering,R148163,Sensitivity-enhanced temperature sensor by hybrid cascaded configuration of a Sagnac loop and a F-P cavity,S594555,R148165,keywords,L413321,Sensitivity,"A hybrid cascaded configuration consisting of a fiber Sagnac interferometer (FSI) and a Fabry-Perot interferometer (FPI) was proposed and experimentally demonstrated to enhance the temperature intensity by the Vernier-effect. The FSI, which consists of a certain length of Panda fiber, is for temperature sensing, while the FPI acts as a filter due to its temperature insensitivity. The two interferometers have almost the same free spectral range, with the spectral envelope of the cascaded sensor shifting much more than the single FSI. Experimental results show that the temperature sensitivity is enhanced from −1.4 nm/°C (single FSI) to −29.0 nm/°C (cascaded configuration). The enhancement factor is 20.7, which is basically consistent with theoretical analysis (19.9).",TRUE,noun
R194,Engineering,R155287,Ultra-Sensitive Strain Sensor Based on Femtosecond Laser Inscribed In-Fiber Reflection Mirrors and Vernier Effect,S621543,R155290,keywords,L427930,Sensitivity,"One of the efficient techniques to enhance the sensitivity of optical fiber sensor is to utilize Vernier effect. However, the complex system structure, precisely controlled device fabrication, or expensive materials required for implementing the technique creates the difficulties for practical applications. Here, we propose a highly sensitive optical fiber strain sensor based on two cascaded Fabry–Perot interferometers and Vernier effect. Of the two interferometers, one is for sensing and the other for referencing, and they are formed by two pairs of in-fiber reflection mirrors fabricated by femtosecond laser pulse illumination to induce refractive-index-modified area in the fiber core. A relatively large distance between the two Fabry–Perot interferometers needs to be used to ensure the independent operation of the two interferometers. The fabrication of the device is simple, and the cavity's length can be precisely controlled by a computer-controlled three-dimensional micromachining platform. Moreover, as the device is based on the inner structure inside the optical fiber, good robustness of the device can be guaranteed. The experimental results obtained show that the strain sensitivity of the device is ∼28.11 pm/μϵ, while the temperature sensitivity achieved is ∼278.48 pm/°C.",TRUE,noun
R194,Engineering,R139614,Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine,S557162,R139617,keywords,L391639,Sn-rich,"Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (≥60%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).",TRUE,noun
R194,Engineering,R139618,Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes,S557196,R139622,keywords,L391666,Stability,"Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of ∼11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.",TRUE,noun
R194,Engineering,R141119,High-isolation CPW MEMS shunt switches-part 1: modeling ,S564023,R141121,keywords,L395799,Switches,"This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-/spl mu/m-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that dramatic increase in the down-state isolation (20/sup +/ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.",TRUE,noun
R194,Engineering,R141133,A High-Power Temperature-Stable Electrostatic RF MEMS Capacitive Switch Based on a Thermal Buckle-Beam Design,S564118,R141135,keywords,L395878,Switches,"This paper presents the design, fabrication and measurements of a novel vertical electrostatic RF MEMS switch which utilizes the lateral thermal buckle-beam actuator design in order to reduce the switch sensitivity to thermal stresses. The effect of biaxial and stress gradients are taken into consideration, and the buckle-beam designs show minimal sensitivity to these stresses. Several switches with 4,8, and 12 suspension beams are presented. All the switches demonstrate a low sensitivity to temperature, and the variation in the pull-in voltage is ~ -50 mV/°C from 25-125°C. The change in the up-state capacitance for the same temperature range is <; ± 3%. The switches also exhibit excellent RF and mechanical performances, and a capacitance ratio of ~ 20-23 (Cυ. = 85-115 fF, Cd = 1.7-2.6 pF) with Q > 150 at 10 GHz in the up-state position is reported. The mechanical resonant frequencies and quality factors are fο = 60-160 kHz and Qm = 2.3-4.5, respectively. The measured switching and release times are ~ 2-5 μs and ~ 5-6.5 μs, respectively. Power handling measurements show good stability with ~ 4 W of incident power at 10 GHz.",TRUE,noun
R194,Engineering,R141139,Fabrication of low pull-in voltage RF MEMS switches on glass substrate in recessed CPW configuration for V-band application,S564179,R141141,keywords,L395925,Switches,"A new technique for the fabrication of radio frequency (RF) microelectromechanical systems (MEMS) shunt switches in recessed coplaner waveguide (CPW) configuration on glass substrates is presented. Membranes with low spring constant are used for reducing the pull-in voltage. A layer of silicon dioxide is deposited on glass wafer and is used to form the recess, which partially defines the gap between the membrane and signal line. Positive photoresist S1813 is used as a sacrificial layer and gold as the membrane material. The membranes are released with the help of Pirhana solution and finally rinsed in low surface tension liquid to avoid stiction during release. Switches with 500 µm long two-meander membranes show very high isolation of greater than 40 dB at their resonant frequency of 61 GHz and pull-in voltage less than 15 V, while switches with 700 µm long six-strip membranes show isolation greater than 30 dB at the frequency of 65 GHz and pull-in voltage less than 10 V. Both types of switches show insertion loss less than 0.65 dB up to 65 GHz.",TRUE,noun
R194,Engineering,R145512,"Highly Stable, Solution‐Processed Ga‐Doped IZTO Thin Film Transistor by Ar/O2 Plasma Treatment",S582670,R145515,keywords,L406955,Transistors,"The effects of gallium doping into indium–zinc–tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a‐IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a‐IZTO TFT results in a saturation mobility (µsat) of 11.80 cm2 V−1 s−1, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec−1, and on/off current ratio (Ion/Ioff) of 1.21 × 107. Additionally, the performance of 10% Ga‐doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as µsat of 30.60 cm2 V−1 s−1, Vth of 0.12 V, SS of 92 mV dec−1, and Ion/Ioff ratio of 7.90 × 107. The bias‐stability of 10% Ga‐doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and OH concentrations.",TRUE,noun
R194,Engineering,R145538,Role of ALD Al2O3 Surface Passivation on the Performance of p-Type Cu2O Thin Film Transistors,S582821,R145548,keywords,L407061,Transistors,"High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 °C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.",TRUE,noun
R194,Engineering,R139969,A Reliable Liquid-Based CMOS MEMS Micro Thermal Convective Accelerometer With Enhanced Sensitivity and Limit of Detection,S558879,R139971,Working fluid,L392787,Water,"In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the $0.35\mu $ m CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) ( $61.9~\mu \text{g}$ ) compared with the air-based MTCA. [2021-0092]",TRUE,noun
R194,Engineering,R155358,Flexible InGaZnO TFTs With fmax Above 300 MHz,S622755,R155366,keywords,L428711,Gain,"In this letter, the AC performance and influence of bending on flexible IGZO thin-film transistors, exhibiting a maximum oscillation frequency (maximum power gain frequency) ${f}_{\textsf {max}}$ beyond 300 MHz, are presented. Self-alignment was used to realize TFTs with channel length down to 0.5 $\mu \text{m}$ . The layout of these TFTs was optimized for good AC performance. Besides the channel dimensions, this includes ground-signal-ground contact pads. The AC performance of these short channel devices was evaluated by measuring their two port scattering parameters. These measurements were used to extract the unity gain power frequency from the maximum stable gain and the unilateral gain. The two complimentary definitions result in ${f}_{\textsf {max}}$ values of (304 ± 12) and (398 ± 53) MHz, respectively. Furthermore, the transistor performance is not significantly altered by mechanical strain. Here, ${f}_{\textsf {max}}$ reduces by 3.6% when a TFT is bent to a tensile radius of 3.5 mm.",TRUE,noun
R194,Engineering,R155276,Experimental Characterization of a Vernier Strain Sensor Using Cascaded Fiber Rings,S621481,R155278,keywords,L427881,Strain,"A highly sensitive strain sensor consisting of two cascaded fiber ring resonators based on the Vernier effect is proposed. Each fiber ring resonator, composed of an input optical coupler, an output optical coupler, and a polarization controller, has a comb-like transmission spectrum with peaks at its resonance wavelengths. As a result, the Vernier effect will be generated, due to the displacement of the two transmission spectra. Using this technique, strain measurements can be achieved by measuring the free spectral range of the cascaded fiber ring resonators. The experimental results show that the sensing setup can operate in large strain range with a sensitivity of 0.0129 nm-1/με. The new generation of Vernier strain sensor can also be useful for micro-displacement measurement.",TRUE,noun
R194,Engineering,R155287,Ultra-Sensitive Strain Sensor Based on Femtosecond Laser Inscribed In-Fiber Reflection Mirrors and Vernier Effect,S621540,R155290,keywords,L427927,Strain,"One of the efficient techniques to enhance the sensitivity of optical fiber sensor is to utilize Vernier effect. However, the complex system structure, precisely controlled device fabrication, or expensive materials required for implementing the technique creates the difficulties for practical applications. Here, we propose a highly sensitive optical fiber strain sensor based on two cascaded Fabry–Perot interferometers and Vernier effect. Of the two interferometers, one is for sensing and the other for referencing, and they are formed by two pairs of in-fiber reflection mirrors fabricated by femtosecond laser pulse illumination to induce refractive-index-modified area in the fiber core. A relatively large distance between the two Fabry–Perot interferometers needs to be used to ensure the independent operation of the two interferometers. The fabrication of the device is simple, and the cavity's length can be precisely controlled by a computer-controlled three-dimensional micromachining platform. Moreover, as the device is based on the inner structure inside the optical fiber, good robustness of the device can be guaranteed. The experimental results obtained show that the strain sensitivity of the device is ∼28.11 pm/μϵ, while the temperature sensitivity achieved is ∼278.48 pm/°C.",TRUE,noun
R194,Engineering,R155291,Ultrasensitive strain sensor based on Vernier- effect improved parallel structured fiber-optic Fabry-Perot interferometer,S621786,R155293,keywords,L428061,Strain,"A novel parallel structured fiber-optic Fabry-Perot interferometer (FPI) based on Vernier-effect is theoretically proposed and experimentally demonstrated for ultrasensitive strain measurement. This proposed sensor consists of open-cavity and closed-cavity fiber-optic FPI, both of which are connected in parallel via a 3 dB coupler. The open-cavity is implemented for sensing, while the closed-cavity for reference. Experimental results show that the proposed parallel structured fiber-optic FPI can provide an ultra-high strain sensitivity of -43.2 pm/με, which is 4.6 times higher than that of a single open-cavity FPI. Furthermore, the sensor is simple in fabrication, robust in structure, and stable in measurement. Finally, the parallel structured fiber-optic FPI scheme proposed in this paper can also be applied to other sensing field, and provide a new perspective idea for high sensitivity sensing.",TRUE,noun
R145,Environmental Sciences,R9221,"The ACCESS coupled model: description, control climate and evaluation",S14693,R9228,Earth System Model,R9273,Atmosphere,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,noun
R145,Environmental Sciences,R23260,The NCEP Climate Forecast System Reanalysis,S72085,R23261,Earth System Model,R23262,Atmosphere,"The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere–ocean–land surface–sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25° at the equator, extending to a global 0.5° beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...",TRUE,noun
R145,Environmental Sciences,R23273,"The ACCESS coupled model: description, control climate and evaluation",S72161,R23274,Earth System Model,R23276,Atmosphere,"4OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,noun
R145,Environmental Sciences,R23287,A Modified Dynamic Framework for the Atmospheric Spectral Model and Its Application,S72201,R23288,Earth System Model,R23289,Atmosphere,"This paper describes a dynamic framework for an atmospheric general circulation spectral model in which a reference stratified atmospheric temperature and a reference surface pressure are introduced into the governing equations so as to improve the calculation of the pressure gradient force and gradients of surface pressure and temperature. The vertical profile of the reference atmospheric temperature approximately corresponds to that of the U.S. midlatitude standard atmosphere within the troposphere and stratosphere, and the reference surface pressure is a function of surface terrain geopotential and is close to the observed mean surface pressure. Prognostic variables for the temperature and surface pressure are replaced by their perturbations from the prescribed references. The numerical algorithms of the explicit time difference scheme for vorticity and the semi-implicit time difference scheme for divergence, perturbation temperature, and perturbation surface pressure equation are given in detail. The modified numerical framework is implemented in the Community Atmosphere Model version 3 (CAM3) developed at the National Center for Atmospheric Research (NCAR) to test its validation and impact on simulated climate. Both the original and the modified models are run with the same spectral resolution (T42), the same physical parameterizations, and the same boundary conditions corresponding to the observed monthly mean sea surface temperature and sea ice concentration from 1971 to 2000. This permits one to evaluate the performance of the new dynamic framework compared to the commonly used one. Results show that there is a general improvement for the simulated climate at regional and global scales, especially for temperature and wind.",TRUE,noun
R145,Environmental Sciences,R23326,GFDL’s ESM2 Global Coupled Climate–Carbon Earth System Models. Part I: Physical Formulation and Baseline Simulation Characteristics,S72408,R23327,Earth System Model,R23328,Atmosphere,"AbstractThe authors describe carbon system formulation and simulation characteristics of two new global coupled carbon–climate Earth System Models (ESM), ESM2M and ESM2G. These models demonstrate good climate fidelity as described in part I of this study while incorporating explicit and consistent carbon dynamics. The two models differ almost exclusively in the physical ocean component; ESM2M uses the Modular Ocean Model version 4.1 with vertical pressure layers, whereas ESM2G uses generalized ocean layer dynamics with a bulk mixed layer and interior isopycnal layers. On land, both ESMs include a revised land model to simulate competitive vegetation distributions and functioning, including carbon cycling among vegetation, soil, and atmosphere. In the ocean, both models include new biogeochemical algorithms including phytoplankton functional group dynamics with flexible stoichiometry. Preindustrial simulations are spun up to give stable, realistic carbon cycle means and variability. Significant differences...",TRUE,noun
R145,Environmental Sciences,R23338,"Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data",S72490,R23339,Earth System Model,R23341,Atmosphere,"Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...",TRUE,noun
R145,Environmental Sciences,R23353,"Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data",S72576,R23354,Earth System Model,R23356,Atmosphere,"Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...",TRUE,noun
R145,Environmental Sciences,R23368,"Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data",S72662,R23369,Earth System Model,R23371,Atmosphere,"Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...",TRUE,noun
R145,Environmental Sciences,R23383,"Present-Day Atmospheric Simulations Using GISS ModelE: Comparison to In Situ, Satellite, and Reanalysis Data",S72748,R23384,Earth System Model,R23386,Atmosphere,"Abstract A full description of the ModelE version of the Goddard Institute for Space Studies (GISS) atmospheric general circulation model (GCM) and results are presented for present-day climate simulations (ca. 1979). This version is a complete rewrite of previous models incorporating numerous improvements in basic physics, the stratospheric circulation, and forcing fields. Notable changes include the following: the model top is now above the stratopause, the number of vertical layers has increased, a new cloud microphysical scheme is used, vegetation biophysics now incorporates a sensitivity to humidity, atmospheric turbulence is calculated over the whole column, and new land snow and lake schemes are introduced. The performance of the model using three configurations with different horizontal and vertical resolutions is compared to quality-controlled in situ data, remotely sensed and reanalysis products. Overall, significant improvements over previous models are seen, particularly in upper-atmosphere te...",TRUE,noun
R145,Environmental Sciences,R23443,"The Norwegian Earth System Model, NorESM1-M – Part 1: Description and basic evaluation of the physical climate",S73043,R23444,Earth System Model,R23446,Atmosphere,"Abstract. The core version of the Norwegian Climate Center's Earth System Model, named NorESM1-M, is presented. The NorESM family of models are based on the Community Climate System Model version 4 (CCSM4) of the University Corporation for Atmospheric Research, but differs from the latter by, in particular, an isopycnic coordinate ocean model and advanced chemistry–aerosol–cloud–radiation interaction schemes. NorESM1-M has a horizontal resolution of approximately 2° for the atmosphere and land components and 1° for the ocean and ice components. NorESM is also available in a lower resolution version (NorESM1-L) and a version that includes prognostic biogeochemical cycling (NorESM1-ME). The latter two model configurations are not part of this paper. Here, a first-order assessment of the model stability, the mean model state and the internal variability based on the model experiments made available to CMIP5 are presented. Further analysis of the model performance is provided in an accompanying paper (Iversen et al., 2013), presenting the corresponding climate response and scenario projections made with NorESM1-M.",TRUE,noun
R145,Environmental Sciences,R23471,INGV-CMCC Carbon (ICC): A Carbon Cycle Earth System Model,S73169,R23472,Earth System Model,R23473,Atmosphere,"This document describes the CMCC Earth System Model (ESM) for the representation of the carbon cycle in the atmosphere, land, and ocean system. The structure of the report follows the software architecture of the full system. It is intended to give a technical description of the numerical models at the base of the ESM, and how they are coupled with each other.",TRUE,noun
R145,Environmental Sciences,R110584,Magnesium and calcium concentrations in the surface water and bottom deposits of a river-lake system. ,S503766,R110586,Major cations,R71807,Calcium,"River-lake systems comprise chains of lakes connected by rivers and streams that flow into and out of them. The contact zone between a lake and a river can act as a barrier, where inflowing matter is accumulated and transformed. Magnesium and calcium are natural components of surface water, and their concentrations can be shaped by various factors, mostly the geological structure of a catchment area, soil class and type, plant cover, weather conditions (precipitation-evaporation, seasonal variations), land relief, type and intensity of water supply (surface runoffs and groundwater inflows), etc. The aim of this study was to analyze the influence of a river-lake system on magnesium and calcium concentrations in surface water (inflows, lake, outflow) and their accumulation in bottom deposits. The study was performed between March 2011 and May 2014 in a river-lake system comprising Lake Symsar with inflows, lying in the Olsztyn Lakeland region. The study revealed that calcium and magnesium were retained in the water column and the bottom deposits of the lake at 12.75 t Mg year-1 and 1.97 t Ca year-1. On average, 12.7±1.2 g of calcium and 1.77±0.9 g of magnesium accumulated in 1 kg of bottom deposits in Lake Symsar. The river-lake system, which received pollutants from an agricultural catchment, influenced the Ca2+ and Mg2+ concentrations in the water and the bottom deposits of Lake Symsar. The Tolknicka Struga drainage canal, to which incompletely treated municipal wastewater was discharged, also affected Ca2+ and Mg2+ levels, thus indicating the significant influence of anthropogenic factors.",TRUE,noun
R33,Epidemiology,R187041,Drugs4Covid: Drug-driven Knowledge Exploitation based on Scientific Publications,S715326,R187042,Dataset name,L482204,Drugs4Covid,"In the absence of sufficient medication for COVID patients due to the increased demand, disused drugs have been employed or the doses of those available were modified by hospital pharmacists. Some evidences for the use of alternative drugs can be found in the existing scientific literature that could assist in such decisions. However, exploiting large corpus of documents in an efficient manner is not easy, since drugs may not appear explicitly related in the texts and could be mentioned under different brand names. Drugs4Covid combines word embedding techniques and semantic web technologies to enable a drug-oriented exploration of large medical literature. Drugs and diseases are identified according to the ATC classification and MeSH categories respectively. More than 60K articles and 2M paragraphs have been processed from the CORD-19 corpus with information of COVID-19, SARS, and other related coronaviruses. An open catalogue of drugs has been created and results are publicly available through a drug browser, a keyword-guided text explorer, and a knowledge graph.",TRUE,noun
R33,Epidemiology,R142068,"Diseases and Health Outcomes Registry Systems in I.R. Iran: Successful Initiative to Improve Public Health Programs, Quality of Care, and Biomedical Research",S570841,R142071,Location ,L400707,Iran,"Registration systems for diseases and other health outcomes provide important resource for biomedical research, as well as tools for public health surveillance and improvement of quality of care. The Ministry of Health and Medical Education (MOHME) of Iran launched a national program to establish registration systems for different diseases and health outcomes. Based on the national program, we organized several workshops and training programs and disseminated the concepts and knowledge of the registration systems. Following a call for proposals, we received 100 applications and after thorough evaluation and corrections by the principal investigators, we approved and granted about 80 registries for three years. Having strong steering committee, committed executive and scientific group, establishing national and international collaboration, stating clear objectives, applying feasible software, and considering stable financing were key components for a successful registry and were considered in the evaluation processes. We paid particulate attention to non-communicable diseases, which constitute an emerging public health problem. We prioritized establishment of regional population-based cancer registries (PBCRs) in 10 provinces in collaboration with the International Agency for Research on Cancer. This initiative was successful and registry programs became popular among researchers and research centers and created several national and international collaborations in different areas to answer important public health and clinical questions. In this paper, we report the details of the program and list of registries that were granted in the first round.",TRUE,noun
R33,Epidemiology,R142089,Iranian Registry of Crohn’s and Colitis: study profile of first nation-wide inflammatory bowel disease registry in Middle East,S570878,R142093,Location ,L400738,Iran,"Background/Aims A recent study revealed increasing incidence and prevalence of inflammatory bowel disease (IBD) in Iran. The Iranian Registry of Crohn’s and Colitis (IRCC) was designed recently to answer the needs. We reported the design, methods of data collection, and aims of IRCC in this paper. Methods IRCC is a multicenter prospective registry, which is established with collaboration of more than 100 gastroenterologists from different provinces of Iran. Minimum data set for IRCC was defined according to an international consensus on standard set of outcomes for IBD. A pilot feasibility study was performed on 553 IBD patients with a web-based questionnaire. The reliability of questionnaire evaluated by Cronbach’s α. Results All sections of questionnaire had Cronbach’s α of more than 0.6. In pilot study, 312 of participants (56.4%) were male and mean age was 38 years (standard deviation=12.8) and 378 patients (68.35%) had ulcerative colitis, 303 subjects (54,7%) had college education and 358 patients (64.74%) were of Fars ethnicity. We found that 68 (12.3%), 44 (7.9%), and 13 (2.3%) of participants were smokers, hookah and opium users, respectively. History of appendectomy was reported in 58 of patients (10.48%). The most common medication was 5-aminosalicylate (94.39%). Conclusions To the best of our knowledge, IRCC is the first national IBD registry in the Middle East and could become a reliable infrastructure for national and international research on IBD. IRCC will improve the quality of care of IBD patients and provide national information for policy makers to better plan for controlling IBD in Iran.",TRUE,noun
R33,Epidemiology,R187043,COVID-19 Surveillance in a Primary Care Sentinel Network: In-Pandemic Development of an Application Ontology,S715337,R187045,Has project,L482214,Ping,"Background Creating an ontology for COVID-19 surveillance should help ensure transparency and consistency. Ontologies formalize conceptualizations at either the domain or application level. Application ontologies cross domains and are specified through testable use cases. Our use case was an extension of the role of the Oxford Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC) to monitor the current pandemic and become an in-pandemic research platform. Objective This study aimed to develop an application ontology for COVID-19 that can be deployed across the various use-case domains of the RCGP RSC research and surveillance activities. Methods We described our domain-specific use case. The actor was the RCGP RSC sentinel network, the system was the course of the COVID-19 pandemic, and the outcomes were the spread and effect of mitigation measures. We used our established 3-step method to develop the ontology, separating ontological concept development from code mapping and data extract validation. We developed a coding system–independent COVID-19 case identification algorithm. As there were no gold-standard pandemic surveillance ontologies, we conducted a rapid Delphi consensus exercise through the International Medical Informatics Association Primary Health Care Informatics working group and extended networks. Results Our use-case domains included primary care, public health, virology, clinical research, and clinical informatics. Our ontology supported (1) case identification, microbiological sampling, and health outcomes at an individual practice and at the national level; (2) feedback through a dashboard; (3) a national observatory; (4) regular updates for Public Health England; and (5) transformation of a sentinel network into a trial platform. We have identified a total of 19,115 people with a definite COVID-19 status, 5226 probable cases, and 74,293 people with possible COVID-19, within the RCGP RSC network (N=5,370,225). Conclusions The underpinning structure of our ontological approach has coped with multiple clinical coding challenges. At a time when there is uncertainty about international comparisons, clarity about the basis on which case definitions and outcomes are made from routine data is essential.",TRUE,noun
R456,Ethics,R140317,Costume in the dance archive: Towards a records-centred ethics of care,S560084,R140320,dance group,R140323,Rubicon,"Focusing on the archival records of the production and performance of Dance in Trees and Church by the Swedish independent dance group Rubicon, this article conceptualizes a records-oriented costume ethics. Theorizations of costume as a co-creative agent of performance are brought into the dance archive to highlight the productivity of paying attention to costume in the making of performance history. Addressing recent developments within archival studies, a feminist ethics of care and radical empathy is employed, which is the capability to empathically engage with others, even if it can be difficult, as a means of exploring how a records-centred costume ethics can be conceptualized for the dance archive. The exploration resulted in two ethical stances useful for better attending to costume-bodies in the dance archive: (1) caring for costume-body relations in the dance archive means that a conventional, so-called static understanding of records as neutral carriers of facts is replaced by a more inclusive, expanding and infinite process. By moving across time and space, and with a caring attitude finding and exploring fragments from various, sometimes contradictory production processes, one can help scattered and poorly represented dance and costume histories to emerge and contribute to the formation of identity and memory. (2) The use of bodily empathy with records can respectfully bring together the understanding of costume in performance as inseparable from the performer’s body with dance as an art form that explicitly uses the dancing costume-body as an expressive tool. It is argued that bodily empathy with records in the dance archive helps one access bodily holisms that create possibilities for exploring the potential of art to critically expose and render strange ideological systems and normativities.",TRUE,noun
R356,"Family, Life Course, and Society",R76542,Up and About: Older Adults’ Well-being During the COVID-19 Pandemic in a Swedish Longitudinal Study,S352673,R76545,Control variables,R77200,Age,"Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015–2020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65–71) from a larger survey of Swedish older adults. The 2020 wave, collected March 26–April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed.",TRUE,noun
R83,Food Processing,R75019,The Impact of Pulsed Electric Field on the Extraction of Bioactive Compounds from Beetroot,S344293,R75023,Vegetable source,R75043,Beetroot,"Beetroot is a root vegetable rich in different bioactive components, such as vitamins, minerals, phenolics, carotenoids, nitrate, ascorbic acids, and betalains, that can have a positive effect on human health. The aim of this work was to study the influence of the pulsed electric field (PEF) at different electric field strengths (4.38 and 6.25 kV/cm), pulse number 10–30, and energy input 0–12.5 kJ/kg as a pretreatment method on the extraction of betalains from beetroot. The obtained results showed that the application of PEF pre-treatment significantly (p < 0.05) influenced the efficiency of extraction of bioactive compounds from beetroot. The highest increase in the content of betalain compounds in the red beet’s extract (betanin by 329%, vulgaxanthin by 244%, compared to the control sample), was noted for 20 pulses of electric field at 4.38 kV/cm of strength. Treatment of the plant material with a PEF also resulted in an increase in the electrical conductivity compared to the non-treated sample due to the increase in cell membrane permeability, which was associated with leakage of substances able to conduct electricity, including mineral salts, into the intercellular space.",TRUE,noun
R83,Food Processing,R75019,The Impact of Pulsed Electric Field on the Extraction of Bioactive Compounds from Beetroot,S344288,R75023,Compound of interest,R75040,Betanin,"Beetroot is a root vegetable rich in different bioactive components, such as vitamins, minerals, phenolics, carotenoids, nitrate, ascorbic acids, and betalains, that can have a positive effect on human health. The aim of this work was to study the influence of the pulsed electric field (PEF) at different electric field strengths (4.38 and 6.25 kV/cm), pulse number 10–30, and energy input 0–12.5 kJ/kg as a pretreatment method on the extraction of betalains from beetroot. The obtained results showed that the application of PEF pre-treatment significantly (p < 0.05) influenced the efficiency of extraction of bioactive compounds from beetroot. The highest increase in the content of betalain compounds in the red beet’s extract (betanin by 329%, vulgaxanthin by 244%, compared to the control sample), was noted for 20 pulses of electric field at 4.38 kV/cm of strength. Treatment of the plant material with a PEF also resulted in an increase in the electrical conductivity compared to the non-treated sample due to the increase in cell membrane permeability, which was associated with leakage of substances able to conduct electricity, including mineral salts, into the intercellular space.",TRUE,noun
R317,Geographic Information Sciences,R78256,How to Assess Visual Communication of Uncertainty? A Systematic Review of Geospatial Uncertainty Visualisation User Studies,S353988,R78258,Reviews,R74259,uncertainty,"Abstract For decades, uncertainty visualisation has attracted attention in disciplines such as cartography and geographic visualisation, scientific visualisation and information visualisation. Most of this research deals with the development of new approaches to depict uncertainty visually; only a small part is concerned with empirical evaluation of such techniques. This systematic review aims to summarize past user studies and describe their characteristics and findings, focusing on the field of geographic visualisation and cartography and thus on displays containing geospatial uncertainty. From a discussion of the main findings, we derive lessons learned and recommendations for future evaluation in the field of uncertainty visualisation. We highlight the importance of user tasks for successful solutions and recommend moving towards task-centered typologies to support systematic evaluation in the field of uncertainty visualisation.",TRUE,noun
R317,Geographic Information Sciences,R78263,"Evaluating the effect of visually represented geodata uncertainty on decision-making: systematic review, lessons learned, and recommendations",S354015,R78265,Reviews,R74259,uncertainty,"ABSTRACT For many years, uncertainty visualization has been a topic of research in several disparate fields, particularly in geographical visualization (geovisualization), information visualization, and scientific visualization. Multiple techniques have been proposed and implemented to visually depict uncertainty, but their evaluation has received less attention by the research community. In order to understand how uncertainty visualization influences reasoning and decision-making using spatial information in visual displays, this paper presents a comprehensive review of uncertainty visualization assessments from geovisualization and related fields. We systematically analyze characteristics of the studies under review, i.e., number of participants, tasks, evaluation metrics, etc. An extensive summary of findings with respect to the effects measured or the impact of different visualization techniques helps to identify commonalities and differences in the outcome. Based on this summary, we derive “lessons learned” and provide recommendations for carrying out evaluation of uncertainty visualizations. As a basis for systematic evaluation, we present a categorization of research foci related to evaluating the effects of uncertainty visualization on decision-making. By assigning the studies to categories, we identify gaps in the literature and suggest key research questions for the future. This paper is the second of two reviews on uncertainty visualization. It follows the first that covers the communication of uncertainty, to investigate the effects of uncertainty visualization on reasoning and decision-making.",TRUE,noun
R317,Geographic Information Sciences,R78266,Evaluating the impact of visualization of risk upon emergency route-planning,S354032,R78271,Has evaluation,R74875,Experiment,"ABSTRACT This paper reports on a controlled experiment evaluating how different cartographic representations of risk affect participants’ performance on a complex spatial decision task: route planning. The specific experimental scenario used is oriented towards emergency route-planning during flood response. The experiment compared six common abstract and metaphorical graphical symbolizations of risk. The results indicate a pattern of less-preferred graphical symbolizations associated with slower responses and lower-risk route choices. One mechanism that might explain these observed relationships would be that more complex and effortful maps promote closer attention paid by participants and lower levels of risk taking. Such user considerations have important implications for the design of maps and mapping interfaces for emergency planning and response. The data also highlights the importance of the ‘right decision, wrong outcome problem’ inherent in decision-making under uncertainty: in individual instances, more risky decisions do not always lead to worse outcomes.",TRUE,noun
R146,Geology,R108144,Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques,S492634,R108145,Minerals Mapped/ Identified,L357135,Clay,"Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords—Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.",TRUE,noun
R146,Geology,R137118,"Spaceborne visible and thermal infrared lithologic mapping of impact-exposed subsurface lithologies at the Haughton impact structure, Devon Island, Canadian High Arctic: Applications to Mars",S541937,R137119,Minerals Identified (Terrestrial samples),L381637,gypsum,"Abstract— This study serves as a proof‐of‐concept for the technique of using visible‐near infrared (VNIR), short‐wavelength infrared (SWIR), and thermal infrared (TIR) spectroscopic observations to map impact‐exposed subsurface lithologies and stratigraphy on Earth or Mars. The topmost layer, three subsurface layers and undisturbed outcrops of the target sequence exposed just 10 km to the northeast of the 23 km diameter Haughton impact structure (Devon Island, Nunavut, Canada) were mapped as distinct spectral units using Landsat 7 ETM+ (VNIR/SWIR) and ASTER (VNIR/SWIR/TIR) multispectral images. Spectral mapping was accomplished by using standard image contrast‐stretching algorithms. Both spectral matching and deconvolution algorithms were applied to image‐derived ASTER TIR emissivity spectra using spectra from a library of laboratory‐measured spectra of minerals (Arizona State University) and whole‐rocks (Ward's). These identifications were made without the use of a priori knowledge from the field (i.e., a “blind” analysis). The results from this analysis suggest a sequence of dolomitic rock (in the crater rim), limestone (wall), gypsum‐rich carbonate (floor), and limestone again (central uplift). These matched compositions agree with the lithologic units and the pre‐impact stratigraphic sequence as mapped during recent field studies of the Haughton impact structure by Osinski et al. (2005a). Further conformation of the identity of image‐derived spectra was confirmed by matching these spectra with laboratory‐measured spectra of samples collected from Haughton. The results from the “blind” remote sensing methods used here suggest that these techniques can also be used to understand subsurface lithologies on Mars, where ground truth knowledge may not be generally available.",TRUE,noun
R146,Geology,R137130,"Spectral and chemical characterization of gypsum-phyllosilicate association in Tiruchirapalli, South India, and its implications",S542038,R137131,Minerals Identified (Terrestrial samples),L381714,gypsum,"Here, we present the detailed chemical and spectral characteristics of gypsum‐phyllosilicate association of Karai Shale Formation in Tiruchirapalli region of the Cauvery Basin in South India. The Karai Shale Formation comprises Odiyam sandy clay and gypsiferous clay, well exposed in Karai village of Tiruchirapalli area, Tamil Nadu in South India. Gypsum is fibrous to crystalline and translucent/transparent type with fluid inclusions preserved in it. Along some cleavage planes, alteration features have been observed. Visible and near infrared (VNIR), Raman, and Fourier transform infrared techniques were used to obtain the excitation/vibration bands of mineral phases. VNIR spectroscopic analysis of the gypsum samples has shown absorption features at 560, 650, 900, 1,000, 1,200, 1,445, 1,750, 1,900, 2,200, and 2,280 nm in the electrical and vibrational range of electromagnetic radiation. VNIR results of phyllosilicate samples have shown absorption features at 1,400, 1,900, and 2,200 nm. Further, we have identified the prominent Raman bands at 417.11, 496.06, 619.85, 673.46, 1,006.75, 1,009.75, ∼1,137.44, ∼3,403, and 3,494.38 cm−1 for gypsum due to sulphate and hydroxyl ion vibrations. We propose that gypsum veins in Karai may have precipitated in the fractures formed due to pressure/forces generated by crystal growth. The combined results of chemical and spectral studies have shown that these techniques have significant potential to identify the pure/mineral associates/similar chemical compositions elsewhere. Our results definitely provide the database from a range of spectroscopic techniques to better identify similar minerals and/or mineral‐associations in an extraterrestrial scenario. This study has significant implications in understanding various geological processes such as fluid‐rock interactions and alteration processes involving water on the planets such as Mars.",TRUE,noun
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492585,R108130,Minerals Mapped/ Identified,L357086,Buddingtonite,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492579,R108130,Minerals Mapped/ Identified,L357080,Calcite,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492584,R108130,Minerals Mapped/ Identified,L357085,Carbonate,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun
R146,Geology,R108126,"The Performance of the Satellite-borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia",S492576,R108127,Minerals Mapped/ Identified,L357077,Chlorite,"Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.",TRUE,noun
R146,Geology,R137118,"Spaceborne visible and thermal infrared lithologic mapping of impact-exposed subsurface lithologies at the Haughton impact structure, Devon Island, Canadian High Arctic: Applications to Mars",S541946,R137119,Preprocessing required,L381646,Deconvolution,"Abstract— This study serves as a proof‐of‐concept for the technique of using visible‐near infrared (VNIR), short‐wavelength infrared (SWIR), and thermal infrared (TIR) spectroscopic observations to map impact‐exposed subsurface lithologies and stratigraphy on Earth or Mars. The topmost layer, three subsurface layers and undisturbed outcrops of the target sequence exposed just 10 km to the northeast of the 23 km diameter Haughton impact structure (Devon Island, Nunavut, Canada) were mapped as distinct spectral units using Landsat 7 ETM+ (VNIR/SWIR) and ASTER (VNIR/SWIR/TIR) multispectral images. Spectral mapping was accomplished by using standard image contrast‐stretching algorithms. Both spectral matching and deconvolution algorithms were applied to image‐derived ASTER TIR emissivity spectra using spectra from a library of laboratory‐measured spectra of minerals (Arizona State University) and whole‐rocks (Ward's). These identifications were made without the use of a priori knowledge from the field (i.e., a “blind” analysis). The results from this analysis suggest a sequence of dolomitic rock (in the crater rim), limestone (wall), gypsum‐rich carbonate (floor), and limestone again (central uplift). These matched compositions agree with the lithologic units and the pre‐impact stratigraphic sequence as mapped during recent field studies of the Haughton impact structure by Osinski et al. (2005a). Further conformation of the identity of image‐derived spectra was confirmed by matching these spectra with laboratory‐measured spectra of samples collected from Haughton. The results from the “blind” remote sensing methods used here suggest that these techniques can also be used to understand subsurface lithologies on Mars, where ground truth knowledge may not be generally available.",TRUE,noun
R146,Geology,R108126,"The Performance of the Satellite-borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia",S492574,R108127,Minerals Mapped/ Identified,L357075,Dolomite,"Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.",TRUE,noun
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492580,R108130,Minerals Mapped/ Identified,L357081,Dolomite,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. 
Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun
R146,Geology,R108144,Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques,S492633,R108145,Minerals Mapped/ Identified,L357134,Gossan,"Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords—Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.",TRUE,noun
R146,Geology,R109219,"Comparison of three principal component analysis techniques to porphyry copper alteration mapping: A case study, meiduk area, kerman, iran)",S498395,R109220,Minerals Mapped/ Identified,L360735,Hydroxyl,"RÉSUMÉ La méthode d'analyse en composantes principales est utilisée couramment pour la cartographie des altérations dans les provinces métallogéniques. Trois techniques d'analyse en composantes principales sont utilisées pour la cartographie des altérations autour d'intrusions porphyriques dans la zone de Meiduk: la méthode sélective, la méthode sélective développée ou Crosta et la méthode standard. Dans cette étude, on compare les résultats de l'application de ces trois techniques différentes sur les bandes Landsat TM. La comparaison est basée principalement sur l'analyse visuelle et les observations des résultats sur le terrain. L'analyse en composantes principales sélective utilisant les bandes 5 et 7 de TM est la plus adéquate pour la cartographie des altérations. Toutefois, les zones végétales sont aussi mises en évidence dans l'image PC2. L'application de la méthode sélective développée sur les bandes 1, 4, 5 et 7 de TM pour la cartographie des hydroxyles met en évidence des halos d'altération autour des intrusions, mais son application à la cartographie des oxydes de fer permet de distinguer une zone étendue avec lithologie sédimentaire. La méthode d'analyse en composantes principales standard par contre permet de rehausser les altérations dans l'image PC5, mais elle est laborieuse en termes de temps machine. La cartographie des hydroxyles utilisant la technique sélective développée ou Crosta est plus appropriée pour la cartographie des altérations autour des masses intrusives porphyriques.",TRUE,noun
R146,Geology,R109216,Principal Component Analysis for Alteration Mapping,S498316,R109217,Minerals/ Feature Mapped,L360671,Hydroxyl,"Reducing the number of image bands input for principal component analysis (PCA) ensures that certain materials will not be mapped and increases the likelihood that others will be unequivocally mapped into only one of the principal component images. In arid terrain, PCA of four TM bands will avoid iron-oxide and thus more reliably detect hydroxyl-bearing minerals if only one input band is from the visible spectrum. PCA for iron-oxide mapping will avoid hydroxyls if only one of the SWIR bands is used. A simple principal component color composite image can then be created in which anomalous concentrations of hydroxyl, hydroxyl plus iron-oxide, and iron-oxide are displayed brightly in red-green-blue (RGB) color space. This composite allows qualitative inferences on alteration type and intensity to be made which can be widely applied.",TRUE,noun
R146,Geology,R137115,Integration of Absorption Feature Information from Visible to Longwave Infrared Spectral Ranges for Mineral Mapping,S541786,R137116,has dataset,L381506,HyMap,"Merging hyperspectral data from optical and thermal ranges allows a wider variety of minerals to be mapped and thus allows lithology to be mapped in a more complex way. In contrast, in most of the studies that have taken advantage of the data from the visible (VIS), near-infrared (NIR), shortwave infrared (SWIR) and longwave infrared (LWIR) spectral ranges, these different spectral ranges were analysed and interpreted separately. This limits the complexity of the final interpretation. In this study a presentation is made of how multiple absorption features, which are directly linked to the mineral composition and are present throughout the VIS, NIR, SWIR and LWIR ranges, can be automatically derived and, moreover, how these new datasets can be successfully used for mineral/lithology mapping. The biggest advantage of this approach is that it overcomes the issue of prior definition of endmembers, which is a requested routine employed in all widely used spectral mapping techniques. In this study, two different airborne image datasets were analysed, HyMap (VIS/NIR/SWIR image data) and Airborne Hyperspectral Scanner (AHS, LWIR image data). Both datasets were acquired over the Sokolov lignite open-cast mines in the Czech Republic. It is further demonstrated that even in this case, when the absorption feature information derived from multispectral LWIR data is integrated with the absorption feature information derived from hyperspectral VIS/NIR/SWIR data, an important improvement in terms of more complex mineral mapping is achieved.",TRUE,noun
R146,Geology,R137045,"Integration of Raman, emission, and reflectance spectroscopy for earth and lunar mineralogy",S541412,R137047,Minerals Identified (Terrestrial samples),L381255,Iron,"Abstract. Spectroscopy plays a vital role in the identification and characterization of minerals on terrestrial and planetary surfaces. We review the three different spectroscopic techniques for characterizing minerals on the Earth and lunar surfaces separately. Seven sedimentary and metamorphic terrestrial rock samples were analyzed with three field-based spectrometers, i.e., Raman, Fourier transform infrared (FTIR), and visible to near infrared and shortwave infrared (Vis–NIR–SWIR) spectrometers. Similarly, a review of work done by previous researchers on lunar rock samples was also carried out for their Raman, Vis–NIR–SWIR, and thermal (mid-infrared) spectral responses. It has been found in both the cases that the spectral information such as Si-O-Si stretching (polymorphs) in Raman spectra, identification of impurities, Christiansen and Restrahlen band center variation in mid-infrared spectra, location of elemental substitution, the content of iron, and shifting of the band center of diagnostic absorption features at 1 and 2 μm in reflectance spectra are contributing to the characterization and identification of terrestrial and lunar minerals. We show that quartz can be better characterized by considering silica polymorphs from Raman spectra, emission features in the range of 8 to 14 μm in FTIR spectra, and reflectance absorption features from Vis–NIR–SWIR spectra. KREEP materials from Apollo 12 and 14 samples are also better characterized using integrated spectroscopic studies. Integrated spectral responses felicitate comprehensive characterization and better identification of minerals. We suggest that Raman spectroscopy and visible and NIR-thermal spectroscopy are the best techniques to explore the Earth’s and lunar mineralogy.",TRUE,noun
R146,Geology,R137115,Integration of Absorption Feature Information from Visible to Longwave Infrared Spectral Ranges for Mineral Mapping,S541930,R137116,Minerals Identified (Terrestrial samples),L381630,lignite,"Merging hyperspectral data from optical and thermal ranges allows a wider variety of minerals to be mapped and thus allows lithology to be mapped in a more complex way. In contrast, in most of the studies that have taken advantage of the data from the visible (VIS), near-infrared (NIR), shortwave infrared (SWIR) and longwave infrared (LWIR) spectral ranges, these different spectral ranges were analysed and interpreted separately. This limits the complexity of the final interpretation. In this study a presentation is made of how multiple absorption features, which are directly linked to the mineral composition and are present throughout the VIS, NIR, SWIR and LWIR ranges, can be automatically derived and, moreover, how these new datasets can be successfully used for mineral/lithology mapping. The biggest advantage of this approach is that it overcomes the issue of prior definition of endmembers, which is a requested routine employed in all widely used spectral mapping techniques. In this study, two different airborne image datasets were analysed, HyMap (VIS/NIR/SWIR image data) and Airborne Hyperspectral Scanner (AHS, LWIR image data). Both datasets were acquired over the Sokolov lignite open-cast mines in the Czech Republic. It is further demonstrated that even in this case, when the absorption feature information derived from multispectral LWIR data is integrated with the absorption feature information derived from hyperspectral VIS/NIR/SWIR data, an important improvement in terms of more complex mineral mapping is achieved.",TRUE,noun
R146,Geology,R108126,"The Performance of the Satellite-borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia",S492577,R108127,Minerals Mapped/ Identified,L357078,Mica,"Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.",TRUE,noun
R146,Geology,R108129,Comparison of Airborne Hyperspectral Data and EO-1 Hyperion for Mineral Mapping,S492582,R108130,Minerals Mapped/ Identified,L357083,Silica,"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-/spl mu/m range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. 
Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun
R146,Geology,R108126,"The Performance of the Satellite-borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia",S492575,R108127,Minerals Mapped/ Identified,L357076,Talc,"Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.",TRUE,noun
R146,Geology,R108126,"The Performance of the Satellite-borne Hyperion Hyperspectral VNIR-SWIR Imaging System for Mineral Mapping at Mount Fitton, South Australia",S492578,R108127,Minerals Mapped/ Identified,L357079,Tremolite,"Satellite-based hyperspectral imaging became a reality in November 2000 with the successful launch and operation of the Hyperion system on board the EO-1 platform. Hyperion is a pushbroom imager with 220 spectral bands in the 400-2500 nm wavelength range, a 30 meter pixel size and a 7.5 km swath. Pre-launch characterization of Hyperion measured low signal to noise (SNR<40:1) for the geologically significant shortwave infrared (SWIR) wavelength region (2000-2500 nm). The impact of this low SNR on Hyperion's capacity to resolve spectral detail was evaluated for the Mount Fitton test site in South Australia, which comprises a diverse range of minerals with narrow, diagnostic absorption bands in the SWIR. Following radiative transfer correction of the Hyperion radiance at sensor data to surface radiance (apparent reflectance), diagnostic spectral signatures were clearly apparent, including: green vegetation; talc; dolomite; chlorite; white mica and possibly tremolite. Even though the derived surface composition maps generated from these image endmembers were noisy (both random and column), they were nonetheless spatially coherent and correlated well with the known geology. In addition, the Hyperion data were used to measure and map spectral shifts of <10 nm in the SWIR related to white mica chemical variations.",TRUE,noun
R136,Graphics,R8301,The Document Components Ontology (DoCO),S12701,R8302,Ontology,R8303,DoCO,"The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",TRUE,noun
R136,Graphics,R9512,The Document Components Ontology (DoCO),S15165,R9513,Ontology,R9514,DoCO,"The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",TRUE,noun
R136,Graphics,R9545,The Document Components Ontology (DoCO),S15398,R9546,Ontology,R9547,DoCO,"The availability in machine-readable form of descriptions of the structure of documents, as well as of the document discourse (e.g. the scientific discourse within scholarly articles), is crucial for facilitating semantic publishing and the overall comprehension of documents by both users and machines. In this paper we introduce DoCO, the Document Components Ontology, an OWL 2 DL ontology that provides a general-purpose structured vocabulary of document elements to describe both structural and rhetorical document components in RDF. In addition to describing the formal description of the ontology, this paper showcases its utility in practice in a variety of our own applications and other activities of the Semantic Publishing community that rely on DoCO to annotate and retrieve document components of scholarly articles.",TRUE,noun
R136,Graphics,R8345,"Decentralised Authoring, Annotations and Notifications for a Read-Write Web with dokieli",S12942,R8346,Semantic representation,R8347,Dokie.li,"Abstract While the Web was designed as a decentralised environment, individual authors still lack the ability to conveniently author and publish documents, and to engage in social interactions with documents of others in a truly decentralised fashion. We present dokieli, a fully decentralised, browser-based authoring and annotation platform with built-in support for social interactions, through which people retain ownership of and sovereignty over their data. The resulting “living” documents are interoperable and independent of dokieli since they follow standards and best practices, such as HTML+RDFa for a fine-grained semantic structure, Linked Data Platform for personal data storage, and Linked Data Notifications for updates. This article describes dokieli’s architecture and implementation, demonstrating advanced document authoring and interaction without a single point of control. Such an environment provides the right technological conditions for independent publication of scientific articles, news, and other works that benefit from diverse voices and open interactions. To experience the described features please open this document in your Web browser under its canonical URI: http://csarven.ca/dokieli-rww.",TRUE,noun
R136,Graphics,R6421,Browsing Linked Data with Fenfire,S7656,R6422,implementation,R6423,Fenfire,"A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers.",TRUE,noun
R136,Graphics,R6421,Browsing Linked Data with Fenfire,S77875,R25705,System,L48739,Fenfire,"A wealth of information has recently become available as browsable RDF data on the Web, but the selection of client applications to interact with this Linked Data remains limited. We show how to browse Linked Data with Fenfire, a Free and Open Source Software RDF browser and editor that employs a graph view and focuses on an engaging and interactive browsing experience. This sets Fenfire apart from previous table- and outline-based Linked Data browsers.",TRUE,noun
R136,Graphics,R6425,An Open Source Software for Exploring and Manipulating Networks,S7672,R6426,implementation,R6427,Gephi,"Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.",TRUE,noun
R136,Graphics,R6461,"LodLive, exploring the web of data",S7825,R6462,implementation,R6463,Lodlive,"LodLive project, http://en.lodlive.it/, provides a demonstration of the use of Linked Data standard (RDF, SPARQL) to browse RDF resources. The application aims to spread linked data principles with a simple and friendly interface and reusable techniques. In this report we present an overview of the potential of LodLive, mentioning tools and methodologies that were used to create it.",TRUE,noun
R136,Graphics,R6461,"LodLive, exploring the web of data",S78016,R25718,System,L48837,Lodlive ,"LodLive project, http://en.lodlive.it/, provides a demonstration of the use of Linked Data standard (RDF, SPARQL) to browse RDF resources. The application aims to spread linked data principles with a simple and friendly interface and reusable techniques. In this report we present an overview of the potential of LodLive, mentioning tools and methodologies that were used to create it.",TRUE,noun
R136,Graphics,R8356,The anatomy of a nanopublication,S13008,R8357,Semantic representation,R8358,Nanopublications,"As the amount of scholarly communication increases, it is increasingly difficult for specific core scientific statements to be found, connected and curated. Additionally, the redundancy of these statements in multiple fora makes it difficult to determine attribution, quality and provenance. To tackle these challenges, the Concept Web Alliance has promoted the notion of nanopublications (core scientific statements with associated context). In this document, we present a model of nanopublications along with a Named Graph/RDF serialization of the model. Importantly, the serialization is defined completely using already existing community-developed technologies. Finally, we discuss the importance of aggregating nanopublications and the role that the Concept Wiki plays in facilitating it.",TRUE,noun
R136,Graphics,R6413,NodeTrix: a Hybrid Visualization of Social Networks,S7624,R6414,implementation,R6415,NodeTrix,"The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.",TRUE,noun
R136,Graphics,R6413,NodeTrix: a Hybrid Visualization of Social Networks,S77851,R25703,System,L48723,NodeTrix,"The need to visualize large social networks is growing as hardware capabilities make analyzing large networks feasible and many new data sets become available. Unfortunately, the visualizations in existing systems do not satisfactorily resolve the basic dilemma of being readable both for the global structure of the network and also for detailed analysis of local communities. To address this problem, we present NodeTrix, a hybrid representation for networks that combines the advantages of two traditional representations: node-link diagrams are used to show the global structure of a network, while arbitrary portions of the network can be shown as adjacency matrices to better support the analysis of communities. A key contribution is a set of interaction techniques. These allow analysts to create a NodeTrix visualization by dragging selections to and from node-link and matrix forms, and to flexibly manipulate the NodeTrix representation to explore the dataset and create meaningful summary visualizations of their findings. Finally, we present a case study applying NodeTrix to the analysis of the InfoVis 2004 coauthorship dataset to illustrate the capabilities of NodeTrix as both an exploration tool and an effective means of communicating results.",TRUE,noun
R136,Graphics,R6457,Using Hierarchical Edge Bundles to visualize complex ontologies in GLOW,S78012,R25717,Domain,R25700,ontology,"In the past decade, much effort has been put into the visual representation of ontologies. However, present visualization strategies are not equipped to handle complex ontologies with many relations, leading to visual clutter and inefficient use of space. In this paper, we propose GLOW, a method for ontology visualization based on Hierarchical Edge Bundles. Hierarchical Edge Bundles is a new visually attractive technique for displaying relations in hierarchical data, such as concept structures formed by 'subclass-of' and 'type-of' relations. We have developed a visualization library based on OWL API, as well as a plug-in for Protégé, a well-known ontology editor. The displayed adjacency relations can be selected from an ontology using a set of common configurations, allowing for intuitive discovery of information. Our evaluation demonstrates that the GLOW visualization provides better visual clarity, and displays relations and complex ontologies better than the existing Protégé visualization plug-in Jambalaya.",TRUE,noun
R136,Graphics,R6465,Visualizing Populated Ontologies with OntoTrix,S78036,R25719,Domain,R25700,ontology,"Research on visualizing Semantic Web data has yielded many tools that rely on information visualization techniques to better support the user in understanding and editing these data. Most tools structure the visualization according to the concept definitions and interrelations that constitute the ontology's vocabulary. Instances are often treated as somewhat peripheral information, when considered at all. These instances, that populate ontologies, represent an essential part of any knowledge base. Understanding instance-level data might be easier for users because of their higher concreteness, but instances will often be orders of magnitude more numerous than the concept definitions that give them machine-processable meaning. As such, the visualization of instance-level data poses different but real challenges. The authors present a visualization technique designed to enable users to visualize large instance sets and the relations that connect them. This visualization uses both node-link and adjacency matrix representations of graphs to visualize different parts of the data depending on their semantic and local structural properties. The technique was originally devised for simple social network visualization. The authors extend it to handle the richer and more complex graph structures of populated ontologies, exploiting ontological knowledge to drive the layout of, and navigation in, the representation embedded in a smooth zoomable environment.",TRUE,noun
R136,Graphics,R6499,Facets and Pivoting for Flexible and Usable Linked Data Exploration,S7909,R6500,implementation,R6501,Rhizomer,"The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.",TRUE,noun
R136,Graphics,R6499,Facets and Pivoting for Flexible and Usable Linked Data Exploration,S77607,R25668,System,L48566,Rhizomer,"The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.",TRUE,noun
R136,Graphics,R6511,SemLens: visual analysis of semantic data with scatter plots and semantic lenses,S7986,R6512,implementation,R6513,SemLens,"Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration.",TRUE,noun
R136,Graphics,R6429,Using Clusters in RDF Visualization,S7687,R6430,implementation,R6431,Trisolda,"Clustered graph visualization techniques are an easy to understand way of hiding complex parts of a visualized graph when they are not needed by the user. When visualizing RDF, there are several situations where such clusters are defined in a very natural way. Using this techniques, we can give the user optional access to some detailed information without unnecessarily occupying space in the basic view of the data. This paper describes algorithms for clustered visualization used in the Trisolda RDF visualizer. Most notable is the newly added clustered navigation technique.",TRUE,noun
R136,Graphics,R6429,Using Clusters in RDF Visualization,S77904,R25708,System,L48759,Trisolda,"Clustered graph visualization techniques are an easy to understand way of hiding complex parts of a visualized graph when they are not needed by the user. When visualizing RDF, there are several situations where such clusters are defined in a very natural way. Using this techniques, we can give the user optional access to some detailed information without unnecessarily occupying space in the basic view of the data. This paper describes algorithms for clustered visualization used in the Trisolda RDF visualizer. Most notable is the newly added clustered navigation technique.",TRUE,noun
R136,Graphics,R6499,Facets and Pivoting for Flexible and Usable Linked Data Exploration,S77619,R25668,App. Type,R6538,Web,"The success of Open Data initiatives has increased the amount of data available on the Web. Unfortunately, most of this data is only available in raw tabular form, what makes analysis and reuse quite difficult for non-experts. Linked Data principles allow for a more sophisticated approach by making explicit both the structure and semantics of the data. However, from the end-user viewpoint, they continue to be monolithic files completely opaque or difficult to explore by making tedious semantic queries. Our objective is to facilitate the user to grasp what kind of entities are in the dataset, how they are interrelated, which are their main properties and values, etc. Rhizomer is a tool for data publishing whose interface provides a set of components borrowed from Information Architecture (IA) that facilitate awareness of the dataset at hand. It automatically generates navigation menus and facets based on the kinds of things in the dataset and how they are described through metadata properties and values. Moreover, motivated by recent tests with end-users, it also provides the possibility to pivot among the faceted views created for each class of resources in the dataset.",TRUE,noun
R136,Graphics,R6511,SemLens: visual analysis of semantic data with scatter plots and semantic lenses,S77671,R25676,App. Type,R6538,Web,"Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration.",TRUE,noun
R136,Graphics,R6515,Formal Linked Data Visualization Model,S77686,R25679,App. Type,R6538,Web,"Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyzes on Linked Data.",TRUE,noun
R136,Graphics,R6531,Using Semantics for Interactive Visual Analysis of Linked Open Data,S77751,R25690,App. Type,R6538,Web,"Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the ""Vis Wizard"", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods.",TRUE,noun
R136,Graphics,R6445,ZoomRDF: semantic fisheye zooming on RDF data.,S7756,R6446,implementation,R6447,ZoomRDF,"With the development of Semantic Web in recent years, an increasing amount of semantic data has been created in form of Resource Description Framework (RDF). Current visualization techniques help users quickly understand the underlying RDF data by displaying its structure in an overview. However, detailed information can only be accessed by further navigation. An alternative approach is to display the global context as well as the local details simultaneously in a unified view. This view supports the visualization and navigation on RDF data in an integrated way. In this demonstration, we present ZoomRDF, a framework that: i) adapts a space-optimized visualization algorithm for RDF, which allows more resources to be displayed, thus maximizes the utilization of display space, ii) combines the visualization with a fisheye zooming concept, which assigns more space to some individual nodes while still preserving the overview structure of the data, iii) considers both the importance of resources and the user interaction on them, which offers more display space to those elements the user may be interested in. We implement the framework based on the Gene Ontology and demonstrate that it facilitates tasks like RDF data exploration and editing.",TRUE,noun
R136,Graphics,R6409,A tool for visualization and editing of OWL ontologies,S7608,R6410,implementation,R6411,GrOWL,"In an effort to optimize visualization and editing of OWL ontologies we have developed GrOWL: a browser and visual editor for OWL that accurately visualizes the underlying DL semantics of OWL ontologies while avoiding the difficulties of the verbose OWL syntax. In this paper, we discuss GrOWL visualization model and the essential visualization techniques implemented in GrOWL.",TRUE,noun
R136,Graphics,R6511,SemLens: visual analysis of semantic data with scatter plots and semantic lenses,S77662,R25676,Vis. Types,R6494,Scatter,"Querying the Semantic Web and analyzing the query results are often complex tasks that can be greatly facilitated by visual interfaces. A major challenge in the design of these interfaces is to provide intuitive and efficient interaction support without limiting too much the analytical degrees of freedom. This paper introduces SemLens, a visual tool that combines scatter plots and semantic lenses to overcome this challenge and to allow for a simple yet powerful analysis of RDF data. The scatter plots provide a global overview on an object collection and support the visual discovery of correlations and patterns in the data. The semantic lenses add dimensions for local analysis of subsets of the objects. A demo accessing DBpedia data is used for illustration.",TRUE,noun
R40,Immunology and Infectious Disease,R142246,Safety and Immunogenicity of Two RNA-Based Covid-19 Vaccine Candidates,S571592,R142248,Organisations,R142251,BioNTech,"Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections and the resulting disease, coronavirus disease 2019 (Covid-19), have spread to millions of persons worldwide. Multiple vaccine candidates are under development, but no vaccine is currently available. Interim safety and immunogenicity data about the vaccine candidate BNT162b1 in younger adults have been reported previously from trials in Germany and the United States. Methods In an ongoing, placebo-controlled, observer-blinded, dose-escalation, phase 1 trial conducted in the United States, we randomly assigned healthy adults 18 to 55 years of age and those 65 to 85 years of age to receive either placebo or one of two lipid nanoparticle–formulated, nucleoside-modified RNA vaccine candidates: BNT162b1, which encodes a secreted trimerized SARS-CoV-2 receptor–binding domain; or BNT162b2, which encodes a membrane-anchored SARS-CoV-2 full-length spike, stabilized in the prefusion conformation. The primary outcome was safety (e.g., local and systemic reactions and adverse events); immunogenicity was a secondary outcome. Trial groups were defined according to vaccine candidate, age of the participants, and vaccine dose level (10 μg, 20 μg, 30 μg, and 100 μg). In all groups but one, participants received two doses, with a 21-day interval between doses; in one group (100 μg of BNT162b1), participants received one dose. Results A total of 195 participants underwent randomization. In each of 13 groups of 15 participants, 12 participants received vaccine and 3 received placebo. BNT162b2 was associated with a lower incidence and severity of systemic reactions than BNT162b1, particularly in older adults. In both younger and older adults, the two vaccine candidates elicited similar dose-dependent SARS-CoV-2–neutralizing geometric mean titers, which were similar to or higher than the geometric mean titer of a panel of SARS-CoV-2 convalescent serum samples. Conclusions The safety and immunogenicity data from this U.S. phase 1 trial of two vaccine candidates in younger and older adults, added to earlier interim safety and immunogenicity data regarding BNT162b1 in younger adults from trials in Germany and the United States, support the selection of BNT162b2 for advancement to a pivotal phase 2–3 safety and efficacy evaluation. (Funded by BioNTech and Pfizer; ClinicalTrials.gov number, NCT04368728.)",TRUE,noun
R40,Immunology and Infectious Disease,R142246,Safety and Immunogenicity of Two RNA-Based Covid-19 Vaccine Candidates,S571593,R142248,Organisations,R142252,Pfizer,"Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections and the resulting disease, coronavirus disease 2019 (Covid-19), have spread to millions of persons worldwide. Multiple vaccine candidates are under development, but no vaccine is currently available. Interim safety and immunogenicity data about the vaccine candidate BNT162b1 in younger adults have been reported previously from trials in Germany and the United States. Methods In an ongoing, placebo-controlled, observer-blinded, dose-escalation, phase 1 trial conducted in the United States, we randomly assigned healthy adults 18 to 55 years of age and those 65 to 85 years of age to receive either placebo or one of two lipid nanoparticle–formulated, nucleoside-modified RNA vaccine candidates: BNT162b1, which encodes a secreted trimerized SARS-CoV-2 receptor–binding domain; or BNT162b2, which encodes a membrane-anchored SARS-CoV-2 full-length spike, stabilized in the prefusion conformation. The primary outcome was safety (e.g., local and systemic reactions and adverse events); immunogenicity was a secondary outcome. Trial groups were defined according to vaccine candidate, age of the participants, and vaccine dose level (10 μg, 20 μg, 30 μg, and 100 μg). In all groups but one, participants received two doses, with a 21-day interval between doses; in one group (100 μg of BNT162b1), participants received one dose. Results A total of 195 participants underwent randomization. In each of 13 groups of 15 participants, 12 participants received vaccine and 3 received placebo. BNT162b2 was associated with a lower incidence and severity of systemic reactions than BNT162b1, particularly in older adults. In both younger and older adults, the two vaccine candidates elicited similar dose-dependent SARS-CoV-2–neutralizing geometric mean titers, which were similar to or higher than the geometric mean titer of a panel of SARS-CoV-2 convalescent serum samples. Conclusions The safety and immunogenicity data from this U.S. phase 1 trial of two vaccine candidates in younger and older adults, added to earlier interim safety and immunogenicity data regarding BNT162b1 in younger adults from trials in Germany and the United States, support the selection of BNT162b2 for advancement to a pivotal phase 2–3 safety and efficacy evaluation. (Funded by BioNTech and Pfizer; ClinicalTrials.gov number, NCT04368728.)",TRUE,noun
R43,Immunopathology,R12252,A new coronavirus associated with human respiratory disease in China,S74046,R12273,patient characteristics,R25008,patient,"Abstract Emerging infectious diseases, such as severe acute respiratory syndrome (SARS) and Zika virus disease, present a major threat to public health 1–3 . Despite intense research efforts, how, when and where new diseases appear are still a source of considerable uncertainty. A severe respiratory disease was recently reported in Wuhan, Hubei province, China. As of 25 January 2020, at least 1,975 cases had been reported since the first patient was hospitalized on 12 December 2019. Epidemiological investigations have suggested that the outbreak was associated with a seafood market in Wuhan. Here we study a single patient who was a worker at the market and who was admitted to the Central Hospital of Wuhan on 26 December 2019 while experiencing a severe respiratory syndrome that included fever, dizziness and a cough. Metagenomic RNA sequencing 4 of a sample of bronchoalveolar lavage fluid from the patient identified a new RNA virus strain from the family Coronaviridae , which is designated here ‘WH-Human 1’ coronavirus (and has also been referred to as ‘2019-nCoV’). Phylogenetic analysis of the complete viral genome (29,903 nucleotides) revealed that the virus was most closely related (89.1% nucleotide similarity) to a group of SARS-like coronaviruses (genus Betacoronavirus, subgenus Sarbecovirus) that had previously been found in bats in China 5 . This outbreak highlights the ongoing ability of viral spill-over from animals to cause severe disease in humans.",TRUE,noun
R43,Immunopathology,R12252,A new coronavirus associated with human respiratory disease in China,S18803,R12273,Has method,R12283,sequencing,"Abstract Emerging infectious diseases, such as severe acute respiratory syndrome (SARS) and Zika virus disease, present a major threat to public health 1–3 . Despite intense research efforts, how, when and where new diseases appear are still a source of considerable uncertainty. A severe respiratory disease was recently reported in Wuhan, Hubei province, China. As of 25 January 2020, at least 1,975 cases had been reported since the first patient was hospitalized on 12 December 2019. Epidemiological investigations have suggested that the outbreak was associated with a seafood market in Wuhan. Here we study a single patient who was a worker at the market and who was admitted to the Central Hospital of Wuhan on 26 December 2019 while experiencing a severe respiratory syndrome that included fever, dizziness and a cough. Metagenomic RNA sequencing 4 of a sample of bronchoalveolar lavage fluid from the patient identified a new RNA virus strain from the family Coronaviridae , which is designated here ‘WH-Human 1’ coronavirus (and has also been referred to as ‘2019-nCoV’). Phylogenetic analysis of the complete viral genome (29,903 nucleotides) revealed that the virus was most closely related (89.1% nucleotide similarity) to a group of SARS-like coronaviruses (genus Betacoronavirus, subgenus Sarbecovirus) that had previously been found in bats in China 5 . This outbreak highlights the ongoing ability of viral spill-over from animals to cause severe disease in humans.",TRUE,noun
R351,Industrial and Organizational Psychology,R76567,Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic.,S352689,R76571,Control variables,R77200,Age,"The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved).",TRUE,noun
R358,Inequality and Stratification,R75946,Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland,S351993,R77086,Examined (sub-)group,R77092,men,"ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.",TRUE,noun
R358,Inequality and Stratification,R75946,Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland,S351992,R77084,Examined (sub-)group,R77090,women,"ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351871,R76440,Metadata,L250567,date,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S139128,R38899,consists of,R45042,Agent,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,noun
R278,Information Science,R46576,NERA: Named Entity Recognition for Arabic,S142396,R46577,Language/domain,L87544,Arabic,"Name identification has been worked on quite intensively for the past few years, and has been incorporated into several products revolving around natural language processing tasks. Many researchers have attacked the name identification problem in a variety of languages, but only a few limited research efforts have focused on named entity recognition for Arabic script. This is due to the lack of resources for Arabic named entities and the limited amount of progress made in Arabic natural language processing in general. In this article, we present the results of our attempt at the recognition and extraction of the 10 most important categories of named entities in Arabic script: the person name, location, company, date, time, price, measurement, phone number, ISBN, and file name. We developed the system Named Entity Recognition for Arabic (NERA) using a rulebased approach. The resources created are: a Whitelist representing a dictionary of names, and a grammar, in the form of regular expressions, which are responsible for recognizing the named entities. A filtration mechanism is used that serves two different purposes: (a) revision of the results from a named entity extractor by using metadata, in terms of a Blacklist or rejecter, about ill-formed named entities and (b) disambiguation of identical or overlapping textual matches returned by different name entity extractors to get the correct choice. In NERA, we addressed major challenges posed by NER in the Arabic language arising due to the complexity of the language, peculiarities in the Arabic orthographic system, nonstandardization of the written text, ambiguity, and lack of resources. NERA has been effectively evaluated using our own tagged corpus; it achieved satisfactory results in terms of precision, recall, and F-measure.",TRUE,noun
R278,Information Science,R46586,RENAR: A rule-based arabic named entity recognition system,S142466,R46587,Language/domain,L87594,Arabic,"Named entity recognition has served many natural language processing tasks such as information retrieval, machine translation, and question answering systems. Many researchers have addressed the name identification issue in a variety of languages and recently some research efforts have started to focus on named entity recognition for the Arabic language. We present a working Arabic information extraction (IE) system that is used to analyze large volumes of news texts every day to extract the named entity (NE) types person, organization, location, date, and number, as well as quotations (direct reported speech) by and about people. The named entity recognition (NER) system was not developed for Arabic, but instead a multilingual NER system was adapted to also cover Arabic. The Semitic language Arabic substantially differs from the Indo-European and Finno-Ugric languages currently covered. This article thus describes what Arabic language-specific resources had to be developed and what changes needed to be made to the rule set in order to be applicable to the Arabic language. The achieved evaluation results are generally satisfactory, but could be improved for certain entity types.",TRUE,noun
R278,Information Science,R46595,Arabic Named Entity Recognition: A Feature-Driven Study ,S142522,R46596,Language/domain,L87634,Arabic,"The named entity recognition task aims at identifying and classifying named entities within an open-domain text. This task has been garnering significant attention recently as it has been shown to help improve the performance of many natural language processing applications. In this paper, we investigate the impact of using different sets of features in three discriminative machine learning frameworks, namely, support vector machines, maximum entropy and conditional random fields for the task of named entity recognition. Our language of interest is Arabic. We explore lexical, contextual and morphological features and nine data-sets of different genres and annotations. We measure the impact of the different features in isolation and incrementally combine them in order to evaluate the robustness to noise of each approach. We achieve the highest performance using a combination of 15 features in conditional random fields using broadcast news data (Fβ=1 = 83.34).",TRUE,noun
R278,Information Science,R175046,"Open access, readership, citations: a randomized controlled trial of scientific journal publishing",S693230,R175048,open_access_medium,L466105,articles,"Does free access to journal articles result in greater diffusion of scientific knowledge? Using a randomized controlled trial of open access publishing, involving 36 participating journals in the sciences, social sciences, and humanities, we report on the effects of free access on article downloads and citations. Articles placed in the open access condition (n=712) received significantly more downloads and reached a broader audience within the first year, yet were cited no more frequently, nor earlier, than subscription‐access control articles (n=2533) within 3 yr. These results may be explained by social stratification, a process that concentrates scientific authors at a small number of elite research universities with excellent access to the scientific literature. The real beneficiaries of open access publishing may not be the research community but communities of practice that consume, but rarely contribute to, the corpus of literature.—Davis, P. M. Open access, readership, citations: a randomized controlled trial of scientific journal publishing. FASEB J. 25, 2129‐2134 (2011). www.fasebj.org",TRUE,noun
R278,Information Science,R175056,Attracting new users or business as usual? A case study of converting academic subscription-based journals to open access,S693348,R175064,open_access_medium,L466194,articles,"Abstract This paper studies a selection of 11 Norwegian journals in the humanities and social sciences and their conversion from subscription to open access, a move heavily incentivized by governmental mandates and open access policies. By investigating the journals’ visiting logs in the period 2014–2019, the study finds that a conversion to open access induces higher visiting numbers; all journals in the study had a significant increase, which can be attributed to the conversion. Converting a journal had no spillover in terms of increased visits to previously published articles still behind the paywall in the same journals. Visits from previously subscribing Norwegian higher education institutions did not account for the increase in visits, indicating that the increase must be accounted for by visitors from other sectors. The results could be relevant for policymakers concerning the effects of strict policies targeting economically vulnerable national journals, and could further inform journal owners and editors on the effects of converting to open access.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351867,R76440,Metadata,L250563,author,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R41295,Spanbert: Improving pre-training by representing and predicting spans,S130982,R41297,Has,R41341,Baselines,"We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE.",TRUE,noun
R278,Information Science,R70866,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S337124,R70867,Domain,L243372,Biodiversity,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun
R278,Information Science,R182018,AUPress: A Comparison of an Open Access University Press with Traditional Presses,S704101,R182019,open_access_medium,L475076,books,"This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way ANOVA analyses show that there is no significant difference in the ranking of printed books sold by AUPress in comparison with traditional university presses. However, AUPress, can demonstrate a significantly larger readership for its books as evidenced by the number of downloads of the open electronic versions.",TRUE,noun
R278,Information Science,R182020,The profits of free books: an experiment to measure the impact of open access publishing,S704117,R182022,open_access_medium,L475088,books,"This article describes an experiment to measure the impact of open access (OA) publishing of academic books. During a period of nine months, three sets of 100 books were disseminated through an institutional repository, the Google Book Search program, or both channels. A fourth set of 100 books was used as control group. OA publishing enhances discovery and online consultation. Within the context of the experiment, no relation could be found between OA publishing and citation rates. Contrary to expectations, OA publishing does not stimulate or diminish sales figures. The Google Book Search program is superior to the repository.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351872,R76440,Metadata,L250568,century,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711755,R186169,Material,R186178,cluster,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327334,R68933,Material,R68934,community,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.",TRUE,noun
R278,Information Science,R136009,Ontology-Based Personalized Course Recommendation Framework,S538523,R136012,Personalisation features,R136017,courses,"Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items’ and the users’ profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.",TRUE,noun
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S496112,R108654,Material,R108921,data,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun
R278,Information Science,R108690,Open Science meets Food Modelling: Introducing the Food Modelling Journal (FMJ),S496107,R108916,Material,R108921,data,"This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.",TRUE,noun
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711749,R186169,Material,R186172,data,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603084,R150377,Material,R150389,domain,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun
R278,Information Science,R135998,A Hybrid Knowlegde-Based Approach for Recommending Massive Learning Activities,S538495,R136000,keywords,R136005,e-learning,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system. This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,noun
R278,Information Science,R145318,"Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE): Overview, Components, and Public Health Applications",S581714,R145327,Epidemiological surveillance users,R144737,Epidemiologists,"Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement–driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351870,R76440,Metadata,L250566,era,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351826,R76425,Corpus genres,R77050,fiction,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,noun
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603088,R150377,Material,R150393,framework,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351869,R76440,Metadata,L250565,genre,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S126983,R38899,Uses metric,L76672,Inputs,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,noun
R278,Information Science,R73196,Persistent Identification of Instruments,S338901,R73205,Entity type,R73206,Instruments,"Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin für Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351873,R76440,Languages,R76407,Latin,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351875,R76440,Dataset name,L250570,LatinISE,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2nd century B.C. to the 21st century A.D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R70872,Data Curation in the OpenAIRE Scholarly Communication Infrastructure,S337181,R70873,Content,L243419,Metadata,"OpenAIRE is the European Union initiative for an Open Access Infrastructure for Research in support of open scholarly communication and access to the research output of European funded projects and open access content from a network of institutional and disciplinary repositories. This article outlines the curation activities conducted in the OpenAIRE infrastructure, which employs a multi-level, multi-targeted approach: the publication and implementation of interoperability guidelines to assist in the local data curation processes, the data curation due to the integration of heterogeneous sources supporting different types of data, the inference of links to accomplish the publication research contextualization and data enrichment, and the end-user metadata curation that allows users to edit the attributes and provide links among the entities.",TRUE,noun
R278,Information Science,R145065,Description and validation of a new automated surveillance system for Clostridium difficile in Denmark,S580431,R145068,Epidemiological surveillance software,R145070,MiBa,"SUMMARY The surveillance of Clostridium difficile (CD) in Denmark consists of laboratory based data from Departments of Clinical Microbiology (DCMs) sent to the National Registry of Enteric Pathogens (NREP). We validated a new surveillance system for CD based on the Danish Microbiology Database (MiBa). MiBa automatically collects microbiological test results from all Danish DCMs. We built an algorithm to identify positive test results for CD recorded in MiBa. A CD case was defined as a person with a positive culture for CD or PCR detection of toxin A and/or B and/or binary toxin. We compared CD cases identified through the MiBa-based surveillance with those reported to NREP and locally in five DCMs representing different Danish regions. During 2010–2014, NREP reported 13 896 CD cases, and the MiBa-based surveillance 21 252 CD cases. There was a 99·9% concordance between the local datasets and the MiBa-based surveillance. Surveillance based on MiBa was superior to the current surveillance system, and the findings show that the number of CD cases in Denmark hitherto has been under-reported. There were only minor differences between local data and the MiBa-based surveillance, showing the completeness and validity of CD data in MiBa. This nationwide electronic system can greatly strengthen surveillance and research in various applications.",TRUE,noun
R278,Information Science,R146256,Improving national surveillance of Lyme neuroborreliosis in Denmark through electronic reporting of specific antibody index testing from 2010 to 2012,S585645,R146258,Epidemiological surveillance software,R145070,MiBa,"Our aim was to evaluate the results of automated surveillance of Lyme neuroborreliosis (LNB) in Denmark using the national microbiology database (MiBa), and to describe the epidemiology of laboratory-confirmed LNB at a national level. MiBa-based surveillance includes electronic transfer of laboratory results, in contrast to the statutory surveillance based on manually processed notifications. Antibody index (AI) testing is the recommend laboratory test to support the diagnosis of LNB in Denmark. In the period from 2010 to 2012, 217 clinical cases of LNB were notified to the statutory surveillance system, while 533 cases were reported AI positive by the MiBa system. Thirty-five unconfirmed cases (29 AI-negative and 6 not tested) were notified, but not captured by MiBa. Using MiBa, the number of reported cases was increased almost 2.5 times. Furthermore, the reporting was timelier (median lag time: 6 vs 58 days). Average annual incidence of AI-confirmed LNB in Denmark was 3.2/100,000 population and incidences stratified by municipality ranged from none to above 10/100,000. This is the first study reporting nationwide incidence of LNB using objective laboratory criteria. Laboratory-based surveillance with electronic data-transfer was more accurate, complete and timely compared to the surveillance based on manually processed notifications. We propose using AI test results for LNB surveillance instead of clinical reporting.",TRUE,noun
R278,Information Science,R44157,BioBERT: a pre-trained biomedical language representation model for biomedical text mining,S154288,R50466,contains,R50465,Model,"Abstract Motivation Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. 
Availability and implementation We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.",TRUE,noun
R278,Information Science,R44210,Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing,S134552,R44211,Has,R44212,Model,"Classifying semantic relations between entity pairs in sentences is an important task in natural language processing (NLP). Most previous models applied to relation classification rely on high-level lexical and syntactic features obtained by NLP tools such as WordNet, the dependency parser, part-of-speech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information related to the entity, which may be the most crucial feature for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model that incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only effectively utilizes entities and their latent types as features, but also builds word representations by applying self-attention based on symmetrical similarity of a sentence itself. Moreover, the model is interpretable by visualizing applied attention mechanisms. Experimental results obtained with the SemEval-2010 Task 8 dataset, which is one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features.",TRUE,noun
R278,Information Science,R44287,Graph Convolution over Pruned Dependency Trees Improves Relation Extraction,S134667,R44288,Has,R44289,Model,"Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",TRUE,noun
R278,Information Science,R46617,Named entity recognition for Nepali text using support vector machines,S142676,R46618,Language/domain,L87744,Nepali,"Named Entity Recognition aims to identify and to classify rigid designators in text such as proper names, biological species, and temporal expressions into some predefined categories. There has been growing interest in this field of research since the early 1990s. Named Entity Recognition has a vital role in different fields of natural language processing such as Machine Translation, Information Extraction, Question Answering System and various other fields. In this paper, Named Entity Recognition for Nepali text, based on the Support Vector Machine (SVM) is presented which is one of machine learning approaches for the classification task. A set of features are extracted from training data set. Accuracy and efficiency of SVM classifier are analyzed in three different sizes of training data set. Recognition systems are tested with ten datasets for Nepali text. The strength of this work is the efficient feature extraction and the comprehensive recognition techniques. The Support Vector Machine based Named Entity Recognition is limited to use a certain set of features and it uses a small dictionary which affects its performance. The learning performance of recognition system is observed. It is found that system can learn well from the small set of training data and increase the rate of learning on the increment of training size.",TRUE,noun
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351828,R76425,Corpus genres,R77052,newspapers,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,noun
R278,Information Science,R109143,Towards Improving the Quality of Knowledge Graphs with Data-driven Ontology Patterns and SHACL,S498220,R109145,input,L360626,ontology,"As Linked Data available on the Web continue to grow, understanding their structure and assessing their quality remains a challenging task making such the bottleneck for their reuse. ABSTAT is an online semantic profiling tool which helps data consumers in better understanding of the data by extracting data-driven ontology patterns and statistics about the data. The SHACL Shapes Constraint Language helps users capturing quality issues in the data by means of constraints. In this paper we propose a methodology to improve the quality of different versions of the data by means of SHACL constraints learned from the semantic profiles produced by ABSTAT.",TRUE,noun
R278,Information Science,R136009,Ontology-Based Personalized Course Recommendation Framework,S538521,R136012,keywords,R135501,ontology,"Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items’ and the users’ profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. 
The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.",TRUE,noun
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538549,R136021,keywords,R135501,ontology,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,noun
R278,Information Science,R70872,Data Curation in the OpenAIRE Scholarly Communication Infrastructure,S337179,R70873,Database,L243418,OpenAIRE,"OpenAIRE is the European Union initiative for an Open Access Infrastructure for Research in support of open scholarly communication and access to the research output of European funded projects and open access content from a network of institutional and disciplinary repositories. This article outlines the curation activities conducted in the OpenAIRE infrastructure, which employs a multi-level, multi-targeted approach: the publication and implementation of interoperability guidelines to assist in the local data curation processes, the data curation due to the integration of heterogeneous sources supporting different types of data, the inference of links to accomplish the publication research contextualization and data enrichment, and the end-user metadata curation that allows users to edit the attributes and provide links among the entities.",TRUE,noun
R278,Information Science,R70866,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S337127,R70867,Database,L243375,OpenBiodiv,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun
R278,Information Science,R145085,"Developing open source, self-contained disease surveillance software applications for use in resource-limited settings",S580528,R145087,Epidemiological surveillance software,R145098,OpenESSENCE,"Abstract Background Emerging public health threats often originate in resource-limited countries. In recognition of this fact, the World Health Organization issued revised International Health Regulations in 2005, which call for significantly increased reporting and response capabilities for all signatory nations. Electronic biosurveillance systems can improve the timeliness of public health data collection, aid in the early detection of and response to disease outbreaks, and enhance situational awareness. Methods As components of its Suite for Automated Global bioSurveillance (SAGES) program, The Johns Hopkins University Applied Physics Laboratory developed two open-source, electronic biosurveillance systems for use in resource-limited settings. OpenESSENCE provides web-based data entry, analysis, and reporting. ESSENCE Desktop Edition provides similar capabilities for settings without internet access. Both systems may be configured to collect data using locally available cell phone technologies. Results ESSENCE Desktop Edition has been deployed for two years in the Republic of the Philippines. Local health clinics have rapidly adopted the new technology to provide daily reporting, thus eliminating the two-to-three week data lag of the previous paper-based system. Conclusions OpenESSENCE and ESSENCE Desktop Edition are two open-source software products with the capability of significantly improving disease surveillance in a wide range of resource-limited settings. These products, and other emerging surveillance technologies, can assist resource-limited countries compliance with the revised International Health Regulations.",TRUE,noun
R278,Information Science,R175051,"The transition of ARVO journals to open access",S693339,R175063,research_field_investigated,R136170,Ophthalmology,"In January 2016, the three journals of the Association for Research in Vision and Ophthalmology (ARVO) transitioned to gold open access. Increased author charges were introduced to partially offset the loss of subscription revenue. Submissions to the two established journals initially dropped by almost 15% but have now stabilized. The transition has not impacted acceptance rates and impact factors, and article pageviews and downloads may have increased as a result of open access.",TRUE,noun
R278,Information Science,R135998,A Hybrid Knowledge-Based Approach for Recommending Massive Learning Activities,S538494,R136000,keywords,R136004,personalization,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,noun
R278,Information Science,R150170,Epidemiologic Surveillance in Developing Countries,S602214,R150172,Epidemiological surveillance users,R145386,Physician,"developed countries in many ways. Most people are poorer, less educated, more likely to die at a young age, and less knowledgeable about factors that cause, prevent, or cure disease. Biological and physical hazards are more common, which results in greater incidence, disability, and death. Although disease is common, both the people and government have much fewer resources for prevention or medical care. Many efficacious drugs are too expensive and not readily available for those in greatest need. Salaries are so low that government physicians or nurses must work after-hours in private clinics to feed, clothe, and educate their families. The establishment and maintenance of an epidemiological surveillance system in such an environment requires a different orientation from that found in wealthier nations. The scarcity of resources is a dominant concern. Salaried time spent gathering data is lost to service activities, such as treating gastrointestinal problems or preventing childhood diseases. As a result, components in a surveillance system must be justified, as are purchases of examination tables or radiographic equipment. A costly, extensive surveillance system may cause more harm than good. In this article I will define epidemiologic surveillance. I also will describe the various components of a surveillance program, show how microcomputers and existing software can be used to increase effectiveness, and illustrate how",TRUE,noun
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603080,R150377,Data,R150385,properties,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun
R278,Information Science,R46580,Named entity recognition for Punjabi language text summarization,S142424,R46581,Language/domain,L87564,Punjabi,"Named Entity Recognition (NER) is used to locate and classify atomic elements in text into predetermined classes such as the names of persons, organizations, locations, concepts etc. NER is used in many applications like text summarization, text classification, question answering and machine translation systems etc. For English a lot of work has already done in field of NER, where capitalization is a major clue for rules, whereas Indian Languages do not have such feature. This makes the task difficult for Indian languages. This paper explains the Named Entity Recognition System for Punjabi language text summarization. A Condition based approach has been used for developing NER system for Punjabi language. Various rules have been developed like prefix rule, suffix rule, propername rule, middlename rule and lastname rule. For implementing NER, various resources in Punjabi, have been developed like a list of prefix names, a list of suffix names, a list of proper names, middle names and last names. The Precision, Recall and F-Score for condition based NER approach are 89.32%, 83.4% and 86.25% respectively.",TRUE,noun
R278,Information Science,R46639,Named entity recognition in query,S142830,R46640,Language/domain,L87854,Queries,"This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods.",TRUE,noun
R278,Information Science,R38544,Estimating relative depth in single images via rankboost,S126408,R38546,Has approach,R38547,RankBoost,"In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.",TRUE,noun
R278,Information Science,R73156,ORCID: a system to uniquely identify researchers,S338717,R73158,Entity type,R44056,Researchers,"The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets – clinical trials, publications, patents, datasets – such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. We make recommendations for storing and displaying ORCID identifiers in publication metadata to include ORCID identifiers, with CrossRef integration as a specific example. Finally, we provide an overview of ORCID membership and integration tools and resources.",TRUE,noun
R278,Information Science,R44157,BioBERT: a pre-trained biomedical language representation model for biomedical text mining,S154287,R50466,contains,R50464,Results,"Abstract Motivation Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. 
Availability and implementation We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.",TRUE,noun
R278,Information Science,R41374,Attention Guided Graph Convolutional Networks for Relation Extraction,S131067,R41376,Has,R41416,results,"Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.",TRUE,noun
R278,Information Science,R41526,Enriching pre-trained language model with entity information for relation classification,S131291,R41528,Has,R41530,Results,"Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.",TRUE,noun
R278,Information Science,R46373,Sentence similarity learning by lexical decomposition and composition,S141512,R46375,Has,R46376,Results,"Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a two-channel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.",TRUE,noun
R278,Information Science,R46427,Machine comprehension using match-lstm and answer pointer,S141588,R46429,Has,R46430,Results,"Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.",TRUE,noun
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S139169,R39116,Challenges,R45074,Security,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,noun
R278,Information Science,R108690,Open Science meets Food Modelling: Introducing the Food Modelling Journal (FMJ),S496110,R108916,Material,R108923,software,"This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.",TRUE,noun
R278,Information Science,R41295,Spanbert: Improving pre-training by representing and predicting spans,S131005,R41297,model,R41360,SpanBERT,"We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT large , our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE. 1",TRUE,noun
R278,Information Science,R109143,Towards Improving the Quality of Knowledge Graphs with Data-driven Ontology Patterns and SHACL,S498224,R109145,output,L360629,statistics,"As Linked Data available on the Web continue to grow, understanding their structure and assessing their quality remains a challenging task making such the bottleneck for their reuse. ABSTAT is an online semantic profiling tool which helps data consumers in better understanding of the data by extracting data-driven ontology patterns and statistics about the data. The SHACL Shapes Constraint Language helps users capturing quality issues in the data by means of constraints. In this paper we propose a methodology to improve the quality of different versions of the data by means of SHACL constraints learned from the semantic profiles produced by ABSTAT.",TRUE,noun
R278,Information Science,R36010,"Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction",S328845,R69260,Concept types,L239584,Task,"We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.",TRUE,noun
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351868,R76440,Metadata,L250564,title,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun
R278,Information Science,R46578,Rule-Based named entity recognition in Urdu,S142410,R46579,Language/domain,L87554,Urdu,"Named Entity Recognition or Extraction (NER) is an important task for automated text processing for industries and academia engaged in the field of language processing, intelligence gathering and Bioinformatics. In this paper we discuss the general problem of Named Entity Recognition, more specifically the challenges in NER in languages that do not have language resources e.g. large annotated corpora. We specifically address the challenges for Urdu NER and differentiate it from other South Asian (Indic) languages. We discuss the differences between Hindi and Urdu and conclude that the NER computational models for Hindi cannot be applied to Urdu. A rule-based Urdu NER algorithm is presented that outperforms the models that use statistical learning.",TRUE,noun
R278,Information Science,R46582,Named entity recognition system for Urdu,S142438,R46583,Language/domain,L87574,Urdu,"Named Entity Recognition (NER) is a task which helps in finding out Persons name, Location names, Brand names, Abbreviations, Date, Time etc and classifies the m into predefined different categories. NER plays a major role in various Natural Language Processing (NLP) fields like Information Extraction, Machine Translations and Question Answering. This paper describes the problems of NER in the context of Urdu Language and provides relevant solutions. The system is developed to tag thirteen different Named Entities (NE), twelve NE proposed by IJCNLP-08 and Izaafats. We have used the Rule Based approach and developed the various rules to extract the Named Entities in the given Urdu text.",TRUE,noun
R278,Information Science,R49502,Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles,S147795,R49504,Dataset used,R35008,Wikipedia,"Many digital libraries recommend literature to their users considering the similarity between a query document and their repository. However, they often fail to distinguish what is the relationship that makes two documents alike. In this paper, we model the problem of finding the relationship between two documents as a pairwise document classification task. To find the semantic relation between documents, we apply a series of techniques, such as GloVe, Paragraph Vectors, BERT, and XLNet under different configurations (e.g., sequence length, vector concatenation scheme), including a Siamese architecture for the Transformer-based systems. We perform our experiments on a newly proposed dataset of 32,168 Wikipedia article pairs and Wikidata properties that define the semantic document relations. Our results show vanilla BERT as the best performing system with an F1-score of 0.93, which we manually examine to better understand its applicability to other domains. Our findings suggest that classifying semantic relations between documents is a solvable task and motivates the development of a recommender system based on the evaluated techniques. The discussions in this paper serve as first steps in the exploration of documents through SPARQL-like queries such that one could find documents that are similar in one aspect but dissimilar in another.",TRUE,noun
R278,Information Science,R182020,The profits of free books: an experiment to measure the impact of open access publishing,S704116,R182022,Method,L475087,experiment,"This article describes an experiment to measure the impact of open access (OA) publishing of academic books. During a period of nine months, three sets of 100 books were disseminated through an institutional repository, the Google Book Search program, or both channels. A fourth set of 100 books was used as control group. OA publishing enhances discovery and online consultation. Within the context of the experiment, no relation could be found between OA publishing and citation rates. Contrary to expectations, OA publishing does not stimulate or diminish sales figures. The Google Book Search program is superior to the repository.",TRUE,noun
R278,Information Science,R73156,ORCID: a system to uniquely identify researchers,S338749,R73158,provided services,L243977,Search,"The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets – clinical trials, publications, patents, datasets – such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. We make recommendations for storing and displaying ORCID identifiers in publication metadata to include ORCID identifiers, with CrossRef integration as a specific example. Finally, we provide an overview of ORCID membership and integration tools and resources.",TRUE,noun
R278,Information Science,R73196,Persistent Identification of Instruments,S338929,R73205,provided services,L244024,Search,"Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin fur Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.",TRUE,noun
R278,Information Science,R109854,A systematic metadata harvesting workflow for analysing scientific networks,S501173,R109858,Bibliographic data source,R109859,Crossref,"One of the disciplines behind the science of science is the study of scientific networks. This work focuses on scientific networks as a social network having different nodes and connections. Nodes can be represented by authors, articles or journals while connections by citation, co-citation or co-authorship. One of the challenges in creating scientific networks is the lack of publicly available comprehensive data set. It limits the variety of analyses on the same set of nodes of different scientific networks. To supplement such analyses we have worked on publicly available citation metadata from Crossref and OpenCitatons. Using this data a workflow is developed to create scientific networks. Analysis of these networks gives insights into academic research and scholarship. Different techniques of social network analysis have been applied in the literature to study these networks. It includes centrality analysis, community detection, and clustering coefficient. We have used metadata of Scientometrics journal, as a case study, to present our workflow. We did a sample run of the proposed workflow to identify prominent authors using centrality analysis. This work is not a bibliometric study of any field rather it presents replicable Python scripts to perform network analysis. With an increase in the popularity of open access and open metadata, we hypothesise that this workflow shall provide an avenue for understanding scientific scholarship in multiple dimensions.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S663033,R166456,Software entity types,R166459,Application,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R166504,bioNerDS: exploring bioinformatics’ database and software use through literature mining,S663227,R166505,Data domains,R38522,Biology,"Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics’s emphasis on new tools and Genome Biology’s greater emphasis on data analysis. The data also illustrates some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. 
Conclusions We demonstrate the feasibility of automatically identifying resource names on a large-scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R163875,The role of software in science: a knowledge graph-based analysis of software mentions in PubMed Central,S663258,R166530,Relation types,R166470,Citation,"Science across all disciplines has become increasingly data-driven, leading to additional needs with respect to software for collecting, processing and analysing data. Thus, transparency about software used as part of the scientific process is crucial to understand provenance of individual research data and insights, is a prerequisite for reproducibility and can enable macro-analysis of the evolution of scientific methods over time. However, missing rigor in software citation practices renders the automated detection and disambiguation of software mentions a challenging problem. In this work, we provide a large-scale analysis of software usage and citation practices facilitated through an unprecedented knowledge graph of software mentions and affiliated metadata generated through supervised information extraction models trained on a unique gold standard corpus and applied to more than 3 million scientific articles. Our information extraction approach distinguishes different types of software and mentions, disambiguates mentions and outperforms the state-of-the-art significantly, leading to the most comprehensive corpus of 11.8 M software mentions that are described through a knowledge graph consisting of more than 300 M triples. Our analysis provides insights into the evolution of software usage and citation patterns across various fields, ranks of journals, and impact of publications. Whereas, to the best of our knowledge, this is the most comprehensive analysis of software use and citation at the time, all data and models are shared publicly to facilitate further research into scientific use and citation of software.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662994,R166456,Relation types,R166470,Citation,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662991,R166456,Relation types,R166467,Developer,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R166497,Softcite dataset: A dataset of software mentions in biomedical and economic research publications,S663202,R166503,Data domains,R302,Economics,"Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold‐standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R140043,Unleashing innovation through internal hackathons,S559027,R140045,Has method,R140048,Hackathon,"Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559057,R140061,has subject domain,R140066,Hackathons,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.
",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559074,R140072,has subject domain,R140077,Hackathons,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559060,R140061,Has result,R140069,model,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.
",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R178149,A Similarity-Inclusive Link Prediction Based Recommender System Approach,S698645,R178151,Material,R175116,MovieLens,"Despite being a challenging research field with many unresolved problems, recommender systems are getting more popular in recent years. These systems rely on the personal preferences of users on items given in the form of ratings and return the preferable items based on choices of like-minded users. In this study, a graph-based recommender system using link prediction techniques incorporating similarity metrics is proposed. A graph-based recommender system that has ratings of users on items can be represented as a bipartite graph, where vertices correspond to users and items and edges to ratings. Recommendation generation in a bipartite graph is a link prediction problem. In current literature, modified link prediction approaches are used to distinguish between fundamental relational dualities of like vs. dislike and similar vs. dissimilar. However, the similarity relationship between users/items is mostly disregarded in the complex domain. The proposed model utilizes user-user and item-item cosine similarity value with the relational dualities in order to improve coverage and hits rate of the system by carefully incorporating similarities. On the standard MovieLens Hetrec and MovieLens datasets, the proposed similarity-inclusive link prediction method performed empirically well compared to other methods operating in the complex domain. The experimental results show that the proposed recommender system can be a plausible alternative to overcome the deficiencies in recommender systems.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S663035,R166456,Software entity types,R166460,Plugin,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R156129,DIGITAL MANUFACTURING: REQUIREMENTS AND CHALLENGES FOR IMPLEMENTING DIGITAL SURROGATES,S627006,R156131,has viewpoint,R156134,Process,"A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a ""digital twin"" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S655163,R164005,Concept types,L445080,Software,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R166497,Softcite dataset: A dataset of software mentions in biomedical and economic research publications,S663165,R166503,Entity types,R166495,Software,"Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold‐standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662959,R166456,Dataset name,R166457,SoMeSci,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R156129,DIGITAL MANUFACTURING: REQUIREMENTS AND CHALLENGES FOR IMPLEMENTING DIGITAL SURROGATES,S627005,R156131,has viewpoint,R156133,System,"A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a ""digital twin"" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R166497,Softcite dataset: A dataset of software mentions in biomedical and economic research publications,S663214,R166503,Entity types,R166518,Version,"Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold‐standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662992,R166456,Relation types,R166468,Version,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,noun
R137681,"Information Systems, Process and Knowledge Management",R166497,Softcite dataset: A dataset of software mentions in biomedical and economic research publications,S663162,R166503,Relation types,R166468,Version,"Software contributions to academic research are relatively invisible, especially to the formalized scholarly reputation system based on bibliometrics. In this article, we introduce a gold‐standard dataset of software mentions from the manual annotation of 4,971 academic PDFs in biomedicine and economics. The dataset is intended to be used for automatic extraction of software mentions from PDF format research publications by supervised learning at scale. We provide a description of the dataset and an extended discussion of its creation process, including improved text conversion of academic PDFs. Finally, we reflect on our challenges and lessons learned during the dataset creation, in hope of encouraging more discussion about creating datasets for machine learning use.",TRUE,noun
R128,Inorganic Chemistry,R110993,"Anilido-oxazoline-ligated rare-earth metal complexes: synthesis, characterization and highly cis-1,4-selective polymerization of isoprene",S505430,R110998,Ligand,L364937,Anilido-oxazoline,"Anilido-oxazoline-ligated rare-earth metal complexes show strong fluorescence emissions and good catalytic performance on isoprene polymerization with high cis-1,4-selectivity.
",TRUE,noun
R128,Inorganic Chemistry,R160606,Etching Silicon with Aqueous Acidic Ozone Solutions: Reactivity Studies and Surface Investigations,S640747,R160639,substrate,R160640,Silicon,"Aqueous acidic ozone (O3)-containing solutions are increasingly used for silicon treatment in photovoltaic and semiconductor industries. We studied the behavior of aqueous hydrofluoric acid (HF)-containing solutions (i.e., HF–O3, HF–H2SO4–O3, and HF–HCl–O3 mixtures) toward boron-doped solar-grade (100) silicon wafers. The solubility of O3 and etching rates at 20 °C were investigated. The mixtures were analyzed for the potential oxidizing species by UV–vis and Raman spectroscopy. Concentrations of O3 (aq), O3 (g), and Cl2 (aq) were determined by titrimetric volumetric analysis. F–, Cl–, and SO42– ion contents were determined by ion chromatography. Model experiments were performed to investigate the oxidation of H-terminated silicon surfaces by H2O–O2, H2O–O3, H2O–H2SO4–O3, and H2O–HCl–O3 mixtures. The oxidation was monitored by diffuse reflection infrared Fourier transformation (DRIFT) spectroscopy. The resulting surfaces were examined by scanning electron microscopy (SEM) and X-ray photoelectron spectrosc...",TRUE,noun
R310,Labor Economics,R77070,The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US,S352592,R77073,Control variables,R46728,Gender,"The coronavirus outbreak has caused significant disruptions to people’s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women’s mental health cannot be explained by an increase in financial worries or childcare responsibilities.",TRUE,noun
R310,Labor Economics,R77070,The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US,S351984,R77073,Examined (sub-)group,R77092,men,"The coronavirus outbreak has caused significant disruptions to people’s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women’s mental health cannot be explained by an increase in financial worries or childcare responsibilities.",TRUE,noun
R310,Labor Economics,R77070,The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US,S351983,R77072,Examined (sub-)group,R77090,women,"The coronavirus outbreak has caused significant disruptions to people’s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women’s mental health cannot be explained by an increase in financial worries or childcare responsibilities.",TRUE,noun
R112125,Machine Learning,R140132,DeepWalk: online learning of social representations,S559547,R140134,Uses dataset,R128386,BlogCatalog,"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",TRUE,noun
R112125,Machine Learning,R140132,DeepWalk: online learning of social representations,S559548,R140134,Uses dataset,R128294,Flickr,"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",TRUE,noun
R112125,Machine Learning,R140153,Clinical Concept Embeddings Learned from Massive Sources of Medical Data,S559967,R140155,Has method ,R140295,GloVe,"Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Leaning on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest ever set of embeddings for 108,477 medical concepts using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state of the art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.",TRUE,noun
R112125,Machine Learning,R140177,Embedding logical queries on knowledge graphs,S560153,R140179,Type of data,R6490,Graph,"Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict ""em what drugs are likely to target proteins involved with both diseases X and Y?"" -- a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries -- a flexible but tractable subset of first-order logic -- on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.",TRUE,noun
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635024,R159430,keywords,R159438,Hyperband,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,noun
R112125,Machine Learning,R157417,Autoformer: Searching transformers for visual recognition,S631122,R157419,keywords,R157437,ImageNet,"Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.",TRUE,noun
R112125,Machine Learning,R147894,Active Learning Yields Better Training Data for Scientific Named Entity Recognition,S593417,R147896,Concept types,R147968,Polymers,"Despite significant progress in natural language processing, machine learning models require substantial expertannotated training data to perform well in tasks such as named entity recognition (NER) and entity relations extraction. Furthermore, NER is often more complicated when working with scientific text. For example, in polymer science, chemical structure may be encoded using nonstandard naming conventions, the same concept can be expressed using many different terms (synonymy), and authors may refer to polymers with ad-hoc labels. These challenges, which are not unique to polymer science, make it difficult to generate training data, as specialized skills are needed to label text correctly. We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance. Our approach requires just five hours of expert time to achieve discrimination capacity comparable to that of a state-of-the-art chemical NER toolkit.",TRUE,noun
R112125,Machine Learning,R144816,NLTK: The Natural Language Toolkit,S579781,R144818,is a,R144821,Toolkit,"NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.",TRUE,noun
R112125,Machine Learning,R144816,NLTK: The Natural Language Toolkit,S579836,R144818,Documentation,R144859,Tutorials,"NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.",TRUE,noun
R112125,Machine Learning,R140153,Clinical Concept Embeddings Learned from Massive Sources of Medical Data,S559968,R140155,Has method ,R4649,word2vec,"Word embeddings have emerged as a popular approach to unsupervised learning of word relationships in machine learning and natural language processing. In this article, we benchmark two of the most popular algorithms, GloVe and word2vec, to assess their suitability for capturing medical relationships in large sources of biomedical data. Leaning on recent theoretical insights, we provide a unified view of these algorithms and demonstrate how different sources of data can be combined to construct the largest ever set of embeddings for 108,477 medical concepts using an insurance claims database of 60 million members, 20 million clinical notes, and 1.7 million full text biomedical journal articles. We evaluate our approach, called cui2vec, on a set of clinically relevant benchmarks and in many instances demonstrate state of the art performance relative to previous results. Finally, we provide a downloadable set of pre-trained embeddings for other researchers to use, as well as an online tool for interactive exploration of the cui2vec embeddings.",TRUE,noun
R112125,Machine Learning,R140164,OPA2Vec: combining formal and informal content of biomedical ontologies to improve similarity-based prediction,S559996,R140167,Has method,R4649,word2vec,"MOTIVATION Ontologies are widely used in biology for data annotation, integration and analysis. In addition to formally structured axioms, ontologies contain meta-data in the form of annotation axioms which provide valuable pieces of information that characterize ontology classes. Annotation axioms commonly used in ontologies include class labels, descriptions or synonyms. Despite being a rich source of semantic information, the ontology meta-data are generally unexploited by ontology-based analysis methods such as semantic similarity measures. RESULTS We propose a novel method, OPA2Vec, to generate vector representations of biological entities in ontologies by combining formal ontology axioms and annotation axioms from the ontology meta-data. We apply a Word2Vec model that has been pre-trained on either a corpus or abstracts or full-text articles to produce feature vectors from our collected data. We validate our method in two different ways: first, we use the obtained vector representations of proteins in a similarity measure to predict protein-protein interaction on two different datasets. Second, we evaluate our method on predicting gene-disease associations based on phenotype similarity by generating vector representations of genes and diseases using a phenotype ontology, and applying the obtained vectors to predict gene-disease associations using mouse model phenotypes. We demonstrate that OPA2Vec significantly outperforms existing methods for predicting gene-disease associations. Using evidence from mouse models, we apply OPA2Vec to identify candidate genes for several thousand rare and orphan diseases. OPA2Vec can be used to produce vector representations of any biomedical entity given any type of biomedical ontology. AVAILABILITY AND IMPLEMENTATION https://github.com/bio-ontology-research-group/opa2vec. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.",TRUE,noun
R112125,Machine Learning,R140132,DeepWalk: online learning of social representations,S559549,R140134,Uses dataset,R140219,Youtube,"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",TRUE,noun
R126,Materials Chemistry,R41144,Up-scalable and controllable electrolytic production of photo-responsive nanostructured silicon,S130582,R41145,(pseudo)reference electrode,L79357,Ag/AgCl,"The electrochemical reduction of solid silica has been investigated in molten CaCl2 at 900 °C for the one-step, up-scalable, controllable and affordable production of nanostructured silicon with promising photo-responsive properties. Cyclic voltammetry of the metallic cavity electrode loaded with fine silica powder was performed to elaborate the electrochemical reduction mechanism. Potentiostatic electrolysis of porous and dense silica pellets was carried out at different potentials, focusing on the influences of the electrolysis potential and the microstructure of the precursory silica on the product purity and microstructure. The findings suggest a potential range between −0.60 and −0.95 V (vs. Ag/AgCl) for the production of nanostructured silicon with high purity (>99 wt%). According to the elucidated mechanism on the electro-growth of the silicon nanostructures, optimal process parameters for the controllable preparation of high-purity silicon nanoparticles and nanowires were identified. Scaling-up the optimal electrolysis was successful at the gram-scale for the preparation of high-purity silicon nanowires which exhibited promising photo-responsive properties.",TRUE,noun
R126,Materials Chemistry,R146888,High-performance fullerene-free polymer solar cells with 6.31% efficiency,S593173,R146891,Mobility,R147887,Electron,"A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-b:5,6-b′]dithiophene and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile was designed and synthesized, and fullerene-free polymer solar cells based on the IEIC acceptor showed power conversion efficiencies of up to 6.31%.",TRUE,noun
R126,Materials Chemistry,R146907,Non-fullerene polymer solar cells based on a selenophene-containing fused-ring acceptor with photovoltaic performance of 8.6%,S593148,R146909,Mobility,R147874,Electron,"In this work, we present a non-fullerene electron acceptor bearing a fused five-heterocyclic ring containing selenium atoms, denoted as IDSe-T-IC, for fullerene-free polymer solar cells (PSCs).",TRUE,noun
R126,Materials Chemistry,R146918,Design and Synthesis of a Low Bandgap Small Molecule Acceptor for Efficient Polymer Solar Cells,S593123,R146920,Mobility,R147863,Electron,"A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.",TRUE,noun
R126,Materials Chemistry,R146985,Exploiting Noncovalently Conformational Locking as a Design Strategy for High Performance Fused-Ring Electron Acceptor Used in Polymer Solar Cells,S593066,R146988,Mobility,R147841,Electron,"We have developed a kind of novel fused-ring small molecular acceptor, whose planar conformation can be locked by intramolecular noncovalent interaction. The formation of planar supramolecular fused-ring structure by conformation locking can effectively broaden its absorption spectrum, enhance the electron mobility, and reduce the nonradiative energy loss. Polymer solar cells (PSCs) based on this acceptor afforded a power conversion efficiency (PCE) of 9.6%. In contrast, PSCs based on similar acceptor, which cannot form a flat conformation, only gave a PCE of 2.3%. Such design strategy, which can make the synthesis of small molecular acceptor much easier, will be promising in developing a new acceptor for high efficiency polymer solar cells.",TRUE,noun
R126,Materials Chemistry,R147898,Side-Chain Isomerization on an n-type Organic Semiconductor ITIC Acceptor Makes 11.77% High Efficiency Polymer Solar Cells,S593241,R147899,Mobility,R147904,Electron,"Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.",TRUE,noun
R126,Materials Chemistry,R147918,High-Performance Electron Acceptor with Thienyl Side Chains for Organic Photovoltaics,S593332,R147931,Mobility,R147942,Electron,"We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the σ-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 × 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 × 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.",TRUE,noun
R126,Materials Chemistry,R147944,A near-infrared non-fullerene electron acceptor for high performance polymer solar cells,S593370,R147951,Mobility,R147956,Electron,"Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)–HOMO offset (ΔEH) in the blends was as low as 0.10 eV, suggesting that such a small ΔEH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.",TRUE,noun
R126,Materials Chemistry,R148204,Halogenated conjugated molecules for ambipolar field-effect transistors and non-fullerene organic solar cells,S594287,R148219,Mobility,R148229,Electron,"A series of halogenated conjugated molecules, containing F, Cl, Br and I, were easily prepared via Knoevenagel condensation and applied in field-effect transistors and organic solar cells. Halogenated conjugated materials were found to possess deep frontier energy levels and high crystallinity compared to their non-halogenated analogues, which is due to the strong electronegativity and heavy atom effect of halogens. As a result, halogenated semiconductors provide high electron mobilities up to 1.3 cm2 V−1 s−1 in transistors and high efficiencies over 9% in non-fullerene solar cells.",TRUE,noun
R126,Materials Chemistry,R148232,Enhancing Performance of Nonfullerene Acceptors via Side‐Chain Conjugation Strategy,S594320,R148234,Mobility,R148240,Electron,"A side‐chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side‐chain‐conjugated acceptor (ITIC2) based on a 4,8‐bis(5‐(2‐ethylhexyl)thiophen‐2‐yl)benzo[1,2‐b:4,5‐b′]di(cyclopenta‐dithiophene) electron‐donating core and 1,1‐dicyanomethylene‐3‐indanone electron‐withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 × 105m−1 cm−1, higher than that of ITIC1 (1.5 × 105m−1 cm−1). ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (−5.43 eV) and lowest unoccupied molecular orbital (LUMO) (−3.80 eV) energy levels relative to ITIC1 (HOMO: −5.48 eV; LUMO: −3.84 eV), and higher electron mobility (1.3 × 10−3 cm2 V−1 s−1) than that of ITIC1 (9.6 × 10−4 cm2 V−1 s−1). The power conversion efficiency of ITIC2‐based organic solar cells is 11.0%, much higher than that of ITIC1‐based control devices (8.54%). Our results demonstrate that side‐chain conjugation can tune energy levels, enhance absorption, and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.",TRUE,noun
R126,Materials Chemistry,R148246,"Design, synthesis, and structural characterization of the first dithienocyclopentacarbazole-based n-type organic semiconductor and its application in non-fullerene polymer solar cells",S594359,R148250,Mobility,R148256,Electron,"Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended π-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC–IC) with a well-defined A–D–A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC–IC has strong light-capturing ability in the range of 500–720 nm and exhibits an impressively high molar absorption coefficient of 2.24 × 105 M−1 cm−1 at 669 nm owing to effective intramolecular charge transfer and a strong D–A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC–IC are −5.50 and −3.87 eV, respectively. More importantly, a high electron mobility of 2.17 × 10−3 cm2 V−1 s−1 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (μe ∼ 10−3 cm2 V−1 s−1). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC–IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.",TRUE,noun
R126,Materials Chemistry,R148537,"A Twisted Thieno[3,4-b]thiophene-Based Electron Acceptor Featuring a 14-π-Electron Indenoindene Core for High-Performance Organic Photovoltaics",S595559,R148539,Mobility,R148544,Electron,"With an indenoindene core, a new thieno[3,4‐b]thiophene‐based small‐molecule electron acceptor, 2,2′‐((2Z,2′Z)‐((6,6′‐(5,5,10,10‐tetrakis(2‐ethylhexyl)‐5,10‐dihydroindeno[2,1‐a]indene‐2,7‐diyl)bis(2‐octylthieno[3,4‐b]thiophene‐6,4‐diyl))bis(methanylylidene))bis(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐indene‐2,1‐diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12‐π‐electron fluorene, a carbon‐bridged biphenylene with an axial symmetry, indenoindene, a carbon‐bridged E‐stilbene with a centrosymmetry, shows elongated π‐conjugation with 14 π‐electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 × 105m−1 cm−1 in solution. By matching NITI with a large‐bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene‐free organic photovoltaics and is inspiring for the design of new electron acceptors.",TRUE,noun
R126,Materials Chemistry,R148606,Fused Hexacyclic Nonfullerene Acceptor with Strong Near‐Infrared Absorption for Semitransparent Organic Solar Cells with 9.77% Efficiency,S595762,R148607,Mobility,R148611,Electron,"A fused hexacyclic electron acceptor, IHIC, based on strong electron‐donating group dithienocyclopentathieno[3,2‐b]thiophene flanked by strong electron‐withdrawing group 1,1‐dicyanomethylene‐3‐indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST‐OSCs). IHIC exhibits strong near‐infrared absorption with extinction coefficients of up to 1.6 × 105m−1 cm−1, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 × 10−3 cm2 V−1 s−1. The ST‐OSCs based on blends of a narrow‐bandgap polymer donor PTB7‐Th and narrow‐bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single‐junction and tandem ST‐OSCs reported in the literature.",TRUE,noun
R126,Materials Chemistry,R148630,Naphthodithiophene‐Based Nonfullerene Acceptor for High‐Performance Organic Photovoltaics: Effect of Extended Conjugation,S595859,R148632,Mobility,R148637,Electron,"Naphtho[1,2‐b:5,6‐b′]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron‐withdrawing 2‐(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐inden‐1‐ylidene)malononitrile to yield a fused‐ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene‐based IHIC2, naphthodithiophene‐based IOIC2 with a larger π‐conjugation and a stronger electron‐donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: −3.78 eV vs IHIC2: −3.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 × 10−3 cm2 V−1 s−1 vs IHIC2: 5.0 × 10−4 cm2 V−1 s−1). Thus, IOIC2‐based OSCs show higher values in open‐circuit voltage, short‐circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2‐based counterpart. In particular, as‐cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8‐diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2‐based devices, higher than that of the FTAZ:IHIC2‐based devices (7.31%). These results indicate that incorporating extended conjugation into the electron‐donating fused‐ring units in nonfullerene acceptors is a promising strategy for designing high‐performance electron acceptors.",TRUE,noun
R126,Materials Chemistry,R148652,"Dithieno[3,2-b:2′,3′-d]pyrrol Fused Nonfullerene Acceptors Enabling Over 13% Efficiency for Organic Solar Cells",S595934,R148654,Mobility,R148658,Electron,"A new electron‐rich central building block, 5,5,12,12‐tetrakis(4‐hexylphenyl)‐indacenobis‐(dithieno[3,2‐b:2′,3′‐d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC‐4F) are designed and synthesized. The two molecules reveal broad (600–900 nm) and strong absorption due to the satisfactory electron‐donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC‐4F exhibits a stronger near‐infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down‐shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC‐4F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron‐donating building block for constructing high‐performance nonfullerene acceptors for OSCs.",TRUE,noun
R126,Materials Chemistry,R148663,Dithienopicenocarbazole-Based Acceptors for Efficient Organic Solar Cells with Optoelectronic Response Over 1000 nm and an Extremely Low Energy Loss,S595975,R148666,Mobility,R148672,Electron,"Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular π-π stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.",TRUE,noun
R126,Materials Chemistry,R146779,A Solution-Processable Electron Acceptor Based on Dibenzosilole and Diketopyrrolopyrrole for Organic Solar Cells,S587770,R146781,Mobility type,R146802,Electron,"Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and fl exibility advantages. [ 1–7 ] Much attention has been focused on the development of OSCs which have seen a dramatic rise in effi ciency over the last decade, and the encouraging power conversion effi ciency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [ 8 ] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C 61 butyric acid methyl ester (PC 61 BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affi nity and isotropy of charge transport. [ 9 ] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC 61 BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages ( V OC ). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. Recently, non-fullerene small molecule acceptors have been developed. [ 10 , 11 ] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [ 12–19 ] and only one paper reported PCEs over 2%. [ 16 ]",TRUE,noun
R126,Materials Chemistry,R146794,A Rhodanine Flanked Nonfullerene Acceptor for Solution-Processed Organic Photovoltaics,S587772,R146795,Mobility type,R146803,Electron,"A novel small molecule, FBR, bearing 3-ethylrhodanine flanking groups was synthesized as a nonfullerene electron acceptor for solution-processed bulk heterojunction organic photovoltaics (OPV). A straightforward synthesis route was employed, offering the potential for large scale preparation of this material. Inverted OPV devices employing poly(3-hexylthiophene) (P3HT) as the donor polymer and FBR as the acceptor gave power conversion efficiencies (PCE) up to 4.1%. Transient and steady state optical spectroscopies indicated efficient, ultrafast charge generation and efficient photocurrent generation from both donor and acceptor. Ultrafast transient absorption spectroscopy was used to investigate polaron generation efficiency as well as recombination dynamics. It was determined that the P3HT:FBR blend is highly intermixed, leading to increased charge generation relative to comparative devices with P3HT:PC60BM, but also faster recombination due to a nonideal morphology in which, in contrast to P3HT:PC60BM devices, the acceptor does not aggregate enough to create appropriate percolation pathways that prevent fast nongeminate recombination. Despite this nonoptimal morphology the P3HT:FBR devices exhibit better performance than P3HT:PC60BM devices, used as control, demonstrating that this acceptor shows great promise for further optimization.",TRUE,noun
R126,Materials Chemistry,R146812,"π-Bridge-Independent 2-(Benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile-Substituted Nonfullerene Acceptors for Efficient Bulk Heterojunction Solar Cells",S587876,R146826,Mobility type,R146822,Electron,"Molecular acceptors are promising alternatives to fullerenes (e.g., PC61/71BM) in the fabrication of high-efficiency bulk-heterojunction (BHJ) solar cells. While solution-processed polymer–fullerene BHJ devices have recently met the 10% efficiency threshold, molecular acceptors have yet to prove comparably efficient with polymer donors. At this point in time, it is important to forge a better understanding of the design parameters that directly impact small-molecule (SM) acceptor performance in BHJ solar cells. In this report, we show that 2-(benzo[c][1,2,5]thiadiazol-4-ylmethylene)malononitrile (BM)-terminated SM acceptors can achieve efficiencies as high as 5.3% in BHJ solar cells with the polymer donor PCE10. Through systematic device optimization and characterization studies, we find that the nonfullerene analogues (FBM, CBM, and CDTBM) all perform comparably well, independent of the molecular structure and electronics of the π-bridge that links the two electron-deficient BM end groups. With estimated...",TRUE,noun
R126,Materials Chemistry,R146842,Push–Pull Type Non-Fullerene Acceptors for Polymer Solar Cells: Effect of the Donor Core,S587921,R146845,Mobility type,R146823,Electron,"There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs). After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical condition (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.",TRUE,noun
R126,Materials Chemistry,R146924,Nonfullerene Polymer Solar Cells Based on a Main-Chain Twisted Low-Bandgap Acceptor with Power Conversion Efficiency of 13.2%,S593086,R146928,Acceptor,R147845,i-IEICO-4F,"A new acceptor–donor–acceptor-structured nonfullerene acceptor, 2,2′-((2Z,2′Z)-(((4,4,9,9-tetrakis(4-hexylphenyl)-4,9-dihydro-s-indaceno[1,2-b:5,6-b′]dithiophene-2,7-diyl)bis(4-((2-ethylhexyl)oxy)thiophene-4,3-diyl))bis(methanylylidene))bis(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene))dimalononitrile (i-IEICO-4F), is designed and synthesized via main-chain substituting position modification of 2-(5,6-difluoro-3-oxo-2,3-dihydro-1H-indene-2,1-diylidene)dimalononitrile. Unlike its planar analogue IEICO-4F with strong absorption in the near-infrared region, i-IEICO-4F exhibits a twisted main-chain configuration, resulting in 164 nm blue shifts and leading to complementary absorption with the wide-bandgap polymer (J52). A high solution molar extinction coefficient of 2.41 × 105 M–1 cm–1, and sufficiently high energy of charge-transfer excitons of 1.15 eV in a J52:i-IEICO-4F blend were observed, in comparison with those of 2.26 × 105 M–1 cm–1 and 1.08 eV for IEICO-4F. A power conversion efficiency of...",TRUE,noun
R126,Materials Chemistry,R147898,Side-Chain Isomerization on an n-type Organic Semiconductor ITIC Acceptor Makes 11.77% High Efficiency Polymer Solar Cells,S593248,R147899,Acceptor,R147907,m-ITIC,"Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.",TRUE,noun
R126,Materials Chemistry,R142138,Mapping Intracellular Temperature Using Green Fluorescent Protein,S571055,R142140,Material,R142141,Protein,"Heat is of fundamental importance in many cellular processes such as cell metabolism, cell division and gene expression. (1-3) Accurate and noninvasive monitoring of temperature changes in individual cells could thus help clarify intricate cellular processes and develop new applications in biology and medicine. Here we report the use of green fluorescent proteins (GFP) as thermal nanoprobes suited for intracellular temperature mapping. Temperature probing is achieved by monitoring the fluorescence polarization anisotropy of GFP. The method is tested on GFP-transfected HeLa and U-87 MG cancer cell lines where we monitored the heat delivery by photothermal heating of gold nanorods surrounding the cells. A spatial resolution of 300 nm and a temperature accuracy of about 0.4 °C are achieved. Benefiting from its full compatibility with widely used GFP-transfected cells, this approach provides a noninvasive tool for fundamental and applied research in areas ranging from molecular biology to therapeutic and diagnostic studies.",TRUE,noun
R126,Materials Chemistry,R141708,"N,S co-doped carbon dots as a stable bio-imaging probe for detection of intracellular temperature and tetracycline",S568296,R141713,Method of nanomaterial synthesis,R141666,Reflux,"Stable bioimaging with nanomaterials in living cells has been a great challenge and of great importance for understanding intracellular events and elucidating various biological phenomena. Herein, we demonstrate that N,S co-doped carbon dots (N,S-CDs) produced by one-pot reflux treatment of C3N3S3 with ethane diamine at a relatively low temperature (80 °C) exhibit a high fluorescence quantum yield of about 30.4%, favorable biocompatibility, low-toxicity, strong resistance to photobleaching and good stability. The N,S-CDs as an effective temperature indicator exhibit good temperature-dependent fluorescence with a sensational linear response from 20 to 80 °C. In addition, the obtained N,S-CDs facilitate high selectivity detection of tetracycline (TC) with a detection limit as low as 3 × 10-10 M and a wide linear range from 1.39 × 10-5 to 1.39 × 10-9 M. More importantly, the N,S-CDs display an unambiguous bioimaging ability in the detection of intracellular temperature and TC with satisfactory results.",TRUE,noun
R137654,Mechanical Process Engineering,R145720,Investigations on Tailored Forming of AISI 52100 as Rolling Bearing Raceway,S633430,R145728,has material,R157129,material,"Hybrid cylindrical roller thrust bearing washers of type 81212 were manufactured by tailored forming. An AISI 1022M base material, featuring a sufficient strength for structural loads, was cladded with the bearing steel AISI 52100 by plasma transferred arc welding (PTA). Though AISI 52100 is generally regarded as non-weldable, it could be applied as a cladding material by adjusting PTA parameters. The cladded parts were investigated after each individual process step and subsequently tested under rolling contact load. Welding defects that could not be completely eliminated by the subsequent hot forming were characterized by means of scanning acoustic microscopy and micrographs. Below the surface, pores with a typical size of ten µm were found to a depth of about 0.45 mm. In the material transition zone and between individual weld seams, larger voids were observed. Grinding of the surface after heat treatment caused compressive residual stresses near the surface with a relatively small depth. Fatigue tests were carried out on an FE8 test rig. Eighty-two percent of the calculated rating life for conventional bearings was achieved. A high failure slope of the Weibull regression was determined. A relationship between the weld defects and the fatigue behavior is likely.",TRUE,noun
R137654,Mechanical Process Engineering,R145729,Manufacturing and Evaluation of Multi-Material Axial-Bearing Washers by Tailored Forming,S633486,R145731,has material,R157118,material,"Components subject to rolling contact fatigue, such as gears and rolling bearings, are among the fundamental machine elements in mechanical and vehicle engineering. Rolling bearings are generally not designed to be fatigue-resistant, as the necessary oversizing is not technically and economically marketable. In order to improve the load-bearing capacity, resource efficiency and application possibilities of rolling bearings and other possible multi-material solid components, a new process chain was developed at Leibniz University Hannover as a part of the Collaborative Research Centre 1153 “Tailored Forming”. Semi-finished products, already joined before the forming process, are used here to allow a further optimisation of joint quality by forming and finishing. In this paper, a plasma-powder-deposition welding process is presented, which enables precise material deposition and control of the welding depth. For this study, bearing washers (serving as rolling bearing raceways) of a cylindrical roller thrust bearing, similar to type 81212 with a multi-layer structure, were manufactured. A previously non-weldable high-performance material, steel AISI 5140, was used as the cladding layer. Depending on the degree of forming, grain-refinement within the welded material was achieved by thermo-mechanical treatment of the joining zone during the forming process. This grain-refinements lead to an improvement of the mechanical properties and thus, to a higher lifetime for washers of an axial cylindrical roller bearing, which were examined as an exemplary component on a fatigue test bench. To evaluate the bearing washers, the results of the bearing tests were compared with industrial bearings and deposition welded axial-bearing washers without subsequent forming. In addition, the bearing washers were analysed micro-tribologically and by scanning acoustic microscopy both after welding and after the forming process. Nano-scratch tests were carried out on the bearing washers to analyse the layer properties. Together with the results of additional microscopic images of the surface and cross-sections, the causes of failure due to fatigue and wear were identified.",TRUE,noun
R137654,Mechanical Process Engineering,R145732,Tribological Study on Tailored-Formed Axial Bearing Washers,S689844,R145734,has material,R172915,material,"To enhance tribological contacts under cyclic load, high performance materials are required. Utilizing the same high-strength material for the whole machine element is not resource-efficient. In order to manufacture machine elements with extended functionality and specific properties, a combination of different materials can be used in a single component for a more efficient material utilization. By combining different joining techniques with subsequent forming, multi-material or tailored components can be manufactured. To reduce material costs and energy consumption during the component service life, a less expensive lightweight material should be used for regions remote from the highly stressed zones. The scope is not only to obtain the desired shape and dimensions for the finishing process, but also to improve properties like the bond strength between different materials and the microscopic structure of the material. The multi-material approach can be applied to all components requiring different properties in separate component regions such as shafts, bearings or bushes. The current study exemplarily presents the process route for the production of an axial bearing washer by means of tailored forming technology. The bearing washers were chosen to fit axial roller bearings (type 81212). The manufacturing process starts with the laser wire cladding of a hard facing made of martensitic chromium silicon steel (1.4718) on a base substrate of S235 (1.0038) steel. Subsequently, the bearing washers are forged. After finishing, the surfaces of the bearing washers were tested in thrust bearings on an FE-8 test rig. The operational test of the bearings consists in a run-in phase at 250 rpm. A bearing failure is determined by a condition monitoring system. Before and after this, the bearings were inspected by optical and ultrasonic microscopy in order to examine whether the bond of the coat is resistant against rolling contact fatigue. The feasibility of the approach could be proven by endurance test. The joining zone was able to withstand the rolling contact stresses and the bearing failed due to material-induced fatigue with high cycle stability.",TRUE,noun
R137654,Mechanical Process Engineering,R162731,Cross-wedge rolling of PTA-welded hybrid steel billets with rolling bearing steel and hard material coatings,S649322,R162790,has material,R162800,material,"Within the Collaborative Research Centre 1153 “Tailored Forming“ a process chain for the manufacturing of hybrid high performance components is developed. Exemplary process steps consist of deposit welding of high performance steel on low-cost steel, pre-shaping by cross-wedge rolling and finishing by milling. Hard material coatings such as Stellite 6 or Delcrome 253 are used as wear or corrosion protection coatings in industrial applications. Scientists of the Institute of Material Science welded these hard material alloys onto a base material, in this case C22.8, to create a hybrid workpiece. Scientists of the Institut fur Integrierte Produktion Hannover have shown that these hybrid workpieces can be formed without defects (e.g. detachment of the coating) by cross-wedge rolling. After forming, the properties of the coatings are retained or in some cases even improved (e.g. the transition zone between base material and coating). By adjustments in the welding process, it was possible to apply the 100Cr6 rolling bearing steel, as of now declared as non-weldable, on the low-cost steel C22.8. 100Cr6 was formed afterwards in its hybrid bonding state with C22.8 by cross-wedge rolling, thus a component-integrated bearing seat was produced. Even after welding and forming, the rolling bearing steel coating could still be quench-hardened to a hardness of over 60 HRC. This paper shows the potential of forming hybrid billets to tailored parts. Since industrially available standard materials can be used for hard material coatings by this approach, even though they are not weldable by conventional methods, it is not necessary to use expensive, for welding designed materials to implement a hybrid component concept.",TRUE,noun
R137654,Mechanical Process Engineering,R171846,Investigation of the material combination 20MnCr5 and X45CrSi9-3 in the Tailored Forming of shafts with bearing seats,S687689,R172322,has material,R172324,material,"Abstract The Tailored Forming process chain is used to manufacture hybrid components and consists of a joining process or Additive Manufacturing for various materials (e.g. deposition welding), subsequent hot forming, machining and heat treatment. In this way, components can be produced with materials adapted to the load case. For this paper, hybrid shafts are produced by deposition welding of a cladding made of X45CrSi9-3 onto a workpiece made from 20MnCr5. The hybrid shafts are then formed by means of cross-wedge rolling. It is investigated, how the thickness of the cladding and the type of cooling after hot forming (in air or in water) affect the properties of the cladding. The hybrid shafts are formed without layer separation. However, slight core loosening occurres in the area of the bearing seat due to the Mannesmann effect. The microhardness of the cladding is only slightly effected by the cooling strategy, while the microhardness of the base material is significantly higher in water cooled shafts. The microstructure of the cladding after both cooling strategies consists mainly of martensite. In the base material, air cooling results in a mainly ferritic microstructure with grains of ferrite-pearlite. Quenching in water results in a microstructure containing mainly martensite.",TRUE,noun
R136138,Medical Informatics and Medical Bioinformatics,R148112,"2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text",S593910,R148114,Concept types,R148124,Problem,"The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.",TRUE,noun
R136138,Medical Informatics and Medical Bioinformatics,R148112,"2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text",S593913,R148114,Concept types,R148125,Test,"The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.",TRUE,noun
R136138,Medical Informatics and Medical Bioinformatics,R148112,"2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text",S593909,R148114,Concept types,R148123,Treatment,"The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R151496,Cyclodextrins in acetazolamide eye drop formulations,S607469,R151498,Uses drug,R151500,Acetazolamide,"The interaction of acetazolamide with beta-cyclodextrin, (beta-CD), dimethyl-beta-cyclodextrin (DM-beta-CD) and trimethyl-beta-cyclodextrin (TM-beta-CD) was monitored spectrophotometrically. The results revealed formation of equimolar complexes. The apparent solubility of acetazolamide in water was found to increase linearly with increasing CD concentration. The effect of CDs on the permeation of acetazolamide through semi-permeable membranes and the topical delivery of acetazolamide was investigated. Maximum acetazolamide penetration was obtained when just enough CD was used to keep all acetazolamide in solution. For an acetazolamide concentration of 10 mg/ml, the optimum CD concentration appeared to be 3.5 mmol/l for beta-CD, 2.8 mmol/l for TM-beta-CD and 6.0 mmol/l for DM-beta-CD. The effect of CDs on the bioavailability of acetazolamide was assessed by measuring the intraocular pressure in rabbits. The results indicated that CDs have a significant influence on the biological performance of the drug leading to augmentation in its intensity of action and bioavailability as well as prolongation in its duration of action.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R151506,Topically effective ocular hypotensive acetazolamide and ethoxyzolamide formulations in rabbits,S607495,R151508,Uses drug,R151500,Acetazolamide,"Abstract— The effect of topically active 2‐hydroxypropyl‐β‐cyclodextrin (HP‐β‐CyD) eye‐drop formulations containing solutions of acetazolamide, ethoxyzolamide or timolol on the intra‐ocular pressure (IOP) was investigated in normotensive conscious rabbits. Both acetazolamide and ethoxyzolamide were active but their IOP‐lowering effect was less than that of timolol. The IOP‐lowering effects of acetazolamide and ethoxyzolamide and that of timolol appeared to be to some extent additive. Combination of acetazolamide and timolol or ethoxyzolamide and timolol in one HP‐β‐CyD formulation resulted in a significant increase in the duration of activity compared with HP‐β‐CyD formulations containing only acetazolamide, ethoxyzolamide or timolol. Also, it was possible to increase the IOP‐lowering effect of acetazolamide by formulating the drug as a suspension in an aqueous HP‐β‐CyD vehicle.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R141417,"Multiplex Paper-Based Colorimetric DNA Sensor Using Pyrrolidinyl Peptide Nucleic Acid-Induced AgNPs Aggregation for Detecting MERS-CoV, MTB, and HPV Oligonucleotides",S566064,R141418,Type of nanoparticles,L397316,AgNPs,"The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148187,Dual-Peptide-Functionalized Albumin-Based Nanoparticles with pH-Dependent Self-Assembly Behavior for Drug Delivery,S594189,R148188,Polymer,R136224,Albumin,"Drug delivery has become an important strategy for improving the chemotherapy efficiency. Here we developed a multifunctionalized nanosized albumin-based drug-delivery system with tumor-targeting, cell-penetrating, and endolysosomal pH-responsive properties. cRGD-BSA/KALA/DOX nanoparticles were fabricated by self-assembly through electrostatic interaction between cell-penetrating peptide KALA and cRGD-BSA, with cRGD as a tumor-targeting ligand. Under endosomal/lysosomal acidic conditions, the changes in the electric charges of cRGD-BSA and KALA led to the disassembly of the nanoparticles to accelerate intracellular drug release. cRGD-BSA/KALA/DOX nanoparticles showed an enhanced inhibitory effect in the growth of αvβ3-integrin-overexpressed tumor cells, indicating promising application in cancer treatments.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552017,R138924,keywords,L388281,Apoptosis,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R141415,Development of Label-Free Colorimetric Assay for MERS-CoV Using Gold Nanoparticles,S566046,R141416,Type of nanoparticles,L397301,AuNPs,"Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV–vis wavelength range. We designed a pair of thiol-modified probes at either the 5′ end or 3′ end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. This colorimetric assay could discriminate down to 1 pmol/μL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R151628,"Improvement of Nasal Bioavailability of Luteinizing Hormone-Releasing Hormone Agonist, Buserelin, by Cyclodextrin Derivatives in Rats",S607868,R151630,Uses drug,R151632,Buserelin ,"The effects of chemically modified cyclodextrins on the nasal absorption of buserelin, an agonist of luteinizing hormone-releasing hormone, were investigated in anesthetized rats. Of the cyclodextrins tested, dimethyl-beta-cyclodextrin (DM-beta-CyD) was the most effective in improving the rate and extent of the nasal bioavailability of buserelin. Fluorescence spectroscopic studies indicated that the cyclodextrins formed inclusion complexes with buserelin, which may reduce the diffusibility of buserelin across the nasal epithelium and may participate in the protection of the peptide against enzymatic degradation in the nasal mucosa. Additionally, the cyclodextrins increased the permeability of the nasal mucosa, which was the primary determinant based on the multiple regression analysis of the nasal absorption enhancement of buserelin. Scanning electron microscopic observations revealed that DM-beta-CyD induced no remarkable changes in the surface morphology of the nasal mucosa at a minimal concentration necessary to achieve substantial absorption enhancement. The present results suggest that DM-beta-CyD could improve the nasal bioavailability of buserelin and is well-tolerated by the nasal mucosa of the rat.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R149178,Correlation between Ferumoxytol Uptake in Tumor Lesions by MRI and Response to Nanoliposomal Irinotecan in Patients with Advanced Solid Tumors: A Pilot Study,S597689,R149180,Indication,R128085,Cancer,"Purpose: To determine whether deposition characteristics of ferumoxytol (FMX) iron nanoparticles in tumors, identified by quantitative MRI, may predict tumor lesion response to nanoliposomal irinotecan (nal-IRI). Experimental Design: Eligible patients with previously treated solid tumors had FMX-MRI scans before and following (1, 24, and 72 hours) FMX injection. After MRI acquisition, R2* signal was used to calculate FMX levels in plasma, reference tissue, and tumor lesions by comparison with a phantom-based standard curve. Patients then received nal-IRI (70 mg/m2 free base strength) biweekly until progression. Two percutaneous core biopsies were collected from selected tumor lesions 72 hours after FMX or nal-IRI. Results: Iron particle levels were quantified by FMX-MRI in plasma, reference tissues, and tumor lesions in 13 of 15 eligible patients. On the basis of a mechanistic pharmacokinetic model, tissue permeability to FMX correlated with early FMX-MRI signals at 1 and 24 hours, while FMX tissue binding contributed at 72 hours. Higher FMX levels (ranked relative to median value of multiple evaluable lesions from 9 patients) were significantly associated with reduction in lesion size by RECIST v1.1 at early time points (P < 0.001 at 1 hour and P < 0.003 at 24 hours FMX-MRI, one-way ANOVA). No association was observed with post-FMX levels at 72 hours. Irinotecan drug levels in lesions correlated with patient's time on treatment (Spearman ρ = 0.7824; P = 0.0016). Conclusions: Correlation between FMX levels in tumor lesions and nal-IRI activity suggests that lesion permeability to FMX and subsequent tumor uptake may be a useful noninvasive and predictive biomarker for nal-IRI response in patients with solid tumors. Clin Cancer Res; 23(14); 3638–48. ©2017 AACR.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550709,R138609,keywords,L387535,cancer,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552018,R138924,keywords,L388282,Cancer,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550710,R138609,keywords,R75762,Chitosan,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138611,Paclitaxel/Chitosan Nanosupensions Provide Enhanced Intravesical Bladder Cancer Therapy with Sustained and Prolonged Delivery of Paclitaxel,S550760,R138615,keywords,R75762,Chitosan,"Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550720,R138609,Polymer,R75762,Chitosan,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138611,Paclitaxel/Chitosan Nanosupensions Provide Enhanced Intravesical Bladder Cancer Therapy with Sustained and Prolonged Delivery of Paclitaxel,S550775,R138615,Polymer,R75762,Chitosan,"Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138621,Targeted Delivery of Insoluble Cargo (Paclitaxel) by PEGylated Chitosan Nanoparticles Grafted with Arg-Gly-Asp (RGD),S550802,R138623,Polymer,R75762,Chitosan,"Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R147246,PEG-g-chitosan nanoparticles functionalized with the monoclonal antibody OX26 for brain drug targeting,S590243,R147248,Polymer,R75762,Chitosan,"AIM Drug targeting to the CNS is challenging due to the presence of blood-brain barrier. We investigated chitosan (Cs) nanoparticles (NPs) as drug transporter system across the blood-brain barrier, based on mAb OX26 modified Cs. MATERIALS & METHODS Cs NPs functionalized with PEG, modified and unmodified with OX26 (Cs-PEG-OX26) were prepared and chemico-physically characterized. These NPs were administered (intraperitoneal) in mice to define their ability to reach the brain. RESULTS Brain uptake of OX26-conjugated NPs is much higher than of unmodified NPs, because: long-circulating abilities (conferred by PEG), interaction between cationic Cs and brain endothelium negative charges and OX26 TfR receptor affinity. CONCLUSION Cs-PEG-OX26 NPs are promising drug delivery system to the CNS.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S505748,R110244,keywords,R110259,Cisplatin,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S505921,R110244,Uses drug,R110259,Cisplatin,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R140238,Oral Drug Delivery Systems for Ulcerative Colitis Therapy: A Comparative Study with Microparticles and Nanoparticles,S559634,R140240,Uses drug,R140241,Curcumin,"Background: Oral administrations of microparticles (MPs) and nanoparticles (NPs) have been widely employed as therapeutic approaches for the treatment of ulcerative colitis (UC). However, no previous study has comparatively investigated the therapeutic efficacies of MPs and NPs. Methods: In this study, curcumin (CUR)-loaded MPs (CUR-MPs) and CUR-loaded NPs (CUR-NPs) were prepared using a single water-in-oil emulsion solvent evaporation technique. Their therapeutic outcomes against UC were further comparatively studied. Results: The resultant spherical MPs and NPs exhibited slightly negative zeta-potential with average particle diameters of approximately 1.7 µm and 270 nm, respectively. It was found that NPs exhibited a much higher CUR release rate than MPs within the same period of investigation. In vivo experiments demonstrated that oral administration of CUR-MPs and CUR-NPs reduced the symptoms of inflammation in a UC mouse model induced by dextran sulfate sodium. Importantly, CUR-NPs showed much better therapeutic outcomes in alleviating UC compared with CUR-MPs. Conclusion: NPs can improve the anti-inflammatory activity of CUR by enhancing the drug release and cellular uptake efficiency, in comparison with MPs. Thus, they could be exploited as a promising oral drug delivery system for effective UC treatment.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144485,PLGA nanoparticles modified with a BBB-penetrating peptide co-delivering Aβ generation inhibitor and curcumin attenuate memory deficits and neuropathology in Alzheimer's disease mice,S578704,R144487,Uses drug,R140241,Curcumin,"Alzheimer's disease (AD) is the most common form of dementia, characterized by the formation of extracellular senile plaques and neuronal loss caused by amyloid β (Aβ) aggregates in the brains of AD patients. Conventional strategies failed to treat AD in clinical trials, partly due to the poor solubility, low bioavailability and ineffectiveness of the tested drugs to cross the blood-brain barrier (BBB). Moreover, AD is a complex, multifactorial neurodegenerative disease; one-target strategies may be insufficient to prevent the processes of AD. Here, we designed novel kind of poly(lactide-co-glycolic acid) (PLGA) nanoparticles by loading with Aβ generation inhibitor S1 (PQVGHL peptide) and curcumin to target the detrimental factors in AD development and by conjugating with brain targeting peptide CRT (cyclic CRTIGPSVC peptide), an iron-mimic peptide that targets transferrin receptor (TfR), to improve BBB penetration. The average particle size of drug-loaded PLGA nanoparticles and CRT-conjugated PLGA nanoparticles were 128.6 nm and 139.8 nm, respectively. The results of Y-maze and new object recognition test demonstrated that our PLGA nanoparticles significantly improved the spatial memory and recognition in transgenic AD mice. Moreover, PLGA nanoparticles remarkably decreased the level of Aβ, reactive oxygen species (ROS), TNF-α and IL-6, and enhanced the activities of super oxide dismutase (SOD) and synapse numbers in the AD mouse brains. Compared with other PLGA nanoparticles, CRT peptide modified-PLGA nanoparticles co-delivering S1 and curcumin exhibited most beneficial effect on the treatment of AD mice, suggesting that conjugated CRT peptide, and encapsulated S1 and curcumin exerted their corresponding functions for the treatment.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144491,Curcumin Loaded-PLGA Nanoparticles Conjugated with Tet-1 Peptide for Potential Use in Alzheimer's Disease,S578733,R144492,Uses drug,R140241,Curcumin,"Alzheimer's disease is a growing concern in the modern world. As the currently available medications are not very promising, there is an increased need for the fabrication of newer drugs. Curcumin is a plant derived compound which has potential activities beneficial for the treatment of Alzheimer's disease. Anti-amyloid activity and anti-oxidant activity of curcumin is highly beneficial for the treatment of Alzheimer's disease. The insolubility of curcumin in water restricts its use to a great extend, which can be overcome by the synthesis of curcumin nanoparticles. In our work, we have successfully synthesized water-soluble PLGA coated- curcumin nanoparticles and characterized it using different techniques. As drug targeting to diseases of cerebral origin are difficult due to the stringency of blood-brain barrier, we have coupled the nanoparticle with Tet-1 peptide, which has the affinity to neurons and possess retrograde transportation properties. Our results suggest that curcumin encapsulated-PLGA nanoparticles are able to destroy amyloid aggregates, exhibit anti-oxidative property and are non-cytotoxic. The encapsulation of the curcumin in PLGA does not destroy its inherent properties and so, the PLGA-curcumin nanoparticles can be used as a drug with multiple functions in treating Alzheimer's disease proving it to be a potential therapeutic tool against this dreaded disease.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R151517,"Cyclodextrins in eye drop formulations: enhanced topical delivery of corticosteroids to the eye: Acta Ophthalmologica Scandinavica 2002",S607546,R151519,Uses drug,R151524,Dexamethasone,"Cyclodextrins are cylindrical oligosaccharides with a lipophilic central cavity and hydrophilic outer surface. They can form water-soluble complexes with lipophilic drugs, which 'hide' in the cavity. Cyclodextrins can be used to form aqueous eye drop solutions with lipophilic drugs, such as steroids and some carbonic anhydrase inhibitors. The cyclodextrins increase the water solubility of the drug, enhance drug absorption into the eye, improve aqueous stability and reduce local irritation. Cyclodextrins are useful excipients in eye drop formulations of various drugs, including steroids of any kind, carbonic anhydrase inhibitors, pilocarpine, cyclosporins, etc. Their use in ophthalmology has already begun and is likely to expand the selection of drugs available as eye drops. In this paper we review the properties of cyclodextrins and their application in eye drop formulations, of which their use in the formulation of dexamethasone eye drops is an example. Cyclodextrins have been used to formulate eye drops containing corticosteroids, such as dexamethasone, with levels of concentration and ocular absorption which, according to human and animal studies, are many times those seen with presently available formulations. Cyclodextrin-based dexamethasone eye drops are well tolerated in the eye and seem to provide a higher degree of bioavailability and clinical efficiency than the steroid eye drop formulations presently available. Such formulations offer the possibility of once per day application of corticosteroid eye drops after eye surgery, and more intensive topical steroid treatment in severe inflammation. While cyclodextrins have been known for more than a century, their use in ophthalmology is just starting. Cyclodextrins are useful excipients in eye drop formulations for a variety of lipophilic drugs. They will facilitate eye drop formulations for drugs that otherwise might not be available for topical use, while improving absorption and stability and decreasing local irritation.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R151520,Topical and systemic absorption in delivery of dexamethasone to the anterior and posterior segments of the eye,S607547,R151522,Uses drug,R151524,Dexamethasone,"PURPOSE This study aimed to: (1) determine the relative efficiencies of topical and systemic absorption of drugs delivered by eyedrops to the anterior and posterior segments of the eye; (2) establish whether dexamethasone-cyclodextrin eyedrops deliver significant levels of drug to the retina and vitreous in the rabbit eye, and (3) compare systemic absorption following topical application to the eye versus intranasal or intravenous delivery. METHODS In order to distinguish between topical and systemic absorption in the eye, we applied 0.5% dexamethasone-cyclodextrin eyedrops to one (study) eye of rabbits and not to the contralateral (control) eye. Drug levels were measured in each eye. The study eye showed the result of the combination of topical and systemic absorption, whereas the control eye showed the result of systemic absorption only. Systemic absorption was also examined after intranasal and intravenous administration of the same dose of dexamethasone. RESULTS In the aqueous humour dexamethasone levels were 170 +/- 76 ng/g (mean +/- standard deviation) in the study eye and 6 +/- 2 ng/g in the control eye. Similar ratios were seen in the iris and ciliary body. In the retina the dexamethasone level was 33 +/- 7 ng/g in the study eye and 14 +/- 3 ng/g in the control eye. Similar ratios were seen in the vitreous humour. Systemic absorption was similar from ocular, intranasal and intravenous administration. CONCLUSIONS Absorption after topical application dominates in the anterior segment. Topical absorption also plays a significant role in delivering dexamethasone to the posterior segment of the rabbit eye. In medication administered to the retina, 40% of the drug reaches the retina via the systemic route and 60% via topical penetration. Dexamethasone-cyclodextrin eyedrops deliver a significant amount of drug to the rabbit retina.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R149126,Superparamagnetic Iron Oxide–enhanced MR Imaging of Head and Neck Lymph Nodes,S597533,R149128,Field of application,R149129,Diagnosis,"PURPOSE To compare findings on superparamagnetic iron oxide (SPIO)-enhanced magnetic resonance (MR) images of the head and neck with those from resected lymph node specimens and to determine the effect of such imaging on surgical planning in patients with histopathologically proved squamous cell carcinoma of the head and neck. MATERIALS AND METHODS Thirty patients underwent MR imaging with nonenhanced and SPIO-enhanced (2.6 mg Fe/kg intravenously) T1-weighted (500/15 [repetition time msec/echo time msec]) and T2-weighted (1,900/80) spin-echo and T2-weighted gradient-echo (GRE) (500/15, 15 degrees flip angle) sequences. Signal intensity decrease was measured, and visual analysis was performed. Surgical plans were modified, if necessary, according to MR findings. Histopathologic and MR findings were compared. RESULTS Histopathologic evaluation of 1,029 lymph nodes revealed 69 were metastatic. MR imaging enabled detection of 59 metastases. Regarding lymph node levels, MR diagnosis was correct in 26 of 27 patients who underwent surgery: Only one metastasis was localized in level II with MR imaging, whereas histopathologic evaluation placed it at level III. Extent of surgery was changed in seven patients. SPIO-enhanced T2-weighted GRE was the best sequence for differentiating between benign and malignant lymph nodes. CONCLUSION SPIO-enhanced MR imaging has an important effect on planning the extent of surgery. On a patient basis, SPIO-enhanced MR images compared well with resected specimens.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148280,Lactoferrin bioconjugated solid lipid nanoparticles: a new drug delivery system for potential brain targeting,S594469,R148282,Uses drug,R148287,Docetaxel,"Abstract Background: Delivery of drugs to brain is a subtle task in the therapy of many severe neurological disorders. Solid lipid nanoparticles (SLN) easily diffuse the blood–brain barrier (BBB) due to their lipophilic nature. Furthermore, ligand conjugation on SLN surface enhances the targeting efficiency. Lactoferin (Lf) conjugated SLN system is first time attempted for effective brain targeting in this study. Purpose: Preparation of Lf-modified docetaxel (DTX)-loaded SLN for proficient delivery of DTX to brain. Methods: DTX-loaded SLN were prepared using emulsification and solvent evaporation method and conjugation of Lf on SLN surface (C-SLN) was attained through carbodiimide chemistry. These lipidic nanoparticles were evaluated by DLS, AFM, FTIR, XRD techniques and in vitro release studies. Colloidal stability study was performed in biologically simulated environment (normal saline and serum). These lipidic nanoparticles were further evaluated for its targeting mechanism for uptake in brain tumour cells and brain via receptor saturation studies and distribution studies in brain, respectively. Results: Particle size of lipidic nanoparticles was found to be optimum. Surface morphology (zeta potential, AFM) and surface chemistry (FTIR) confirmed conjugation of Lf on SLN surface. Cytotoxicity studies revealed augmented apoptotic activity of C-SLN than SLN and DTX. Enhanced cytotoxicity was demonstrated by receptor saturation and uptake studies. Brain concentration of DTX was elevated significantly with C-SLN than marketed formulation. Conclusions: It is evident from the cytotoxicity, uptake that SLN has potential to deliver drug to brain than marketed formulation but conjugating Lf on SLN surface (C-SLN) further increased the targeting potential for brain tumour. Moreover, brain distribution studies corroborated the use of C-SLN as a viable vehicle to target drug to brain. Hence, C-SLN was demonstrated to be a promising DTX delivery system to brain as it possessed remarkable biocompatibility, stability and efficacy than other reported delivery systems.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S505749,R110244,keywords,R72024,Doxorubicin,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S505924,R110244,Uses drug,R72024,Doxorubicin,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144478,Co-delivery of doxorubicin and siRNA for glioma therapy by a brain targeting system: angiopep-2-modified poly(lactic-co-glycolic acid) nanoparticles,S578674,R144480,Uses drug,R72024,Doxorubicin,"Abstract It is very challenging to treat brain cancer because of the blood–brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148330,Enhanced Intracellular Delivery and Chemotherapy for Glioma Rats by Transferrin-Conjugated Biodegradable Polymersomes Loaded with Doxorubicin,S594658,R148332,Uses drug,R72024,Doxorubicin,"A brain drug delivery system for glioma chemotherapy based on transferrin-conjugated biodegradable polymersomes, Tf-PO-DOX, was made and evaluated with doxorubicin (DOX) as a model drug. Biodegradable polymersomes (PO) loaded with doxorubicin (DOX) were prepared by the nanoprecipitation method (PO-DOX) and then conjugated with transferrin (Tf) to yield Tf-PO-DOX with an average diameter of 107 nm and surface Tf molecule number per polymersome of approximately 35. Compared with PO-DOX and free DOX, Tf-PO-DOX demonstrated the strongest cytotoxicity against C6 glioma cells and the greatest intracellular delivery. It was shown in pharmacokinetic and brain distribution experiments that Tf-PO significantly enhanced brain delivery of DOX, especially the delivery of DOX into brain tumor cells. Pharmacodynamics results revealed a significant reduction of tumor volume and a significant increase of median survival time in the group of Tf-PO-DOX compared with those in saline control animals, animals treated with PO-DOX, and free DOX solution. By terminal deoxynucleotidyl transferase-mediated dUTP nick-end-labeling, Tf-PO-DOX could extensively make tumor cell apoptosis. These results indicated that Tf-PO-DOX could significantly enhance the intracellular delivery of DOX in glioma and the chemotherapeutic effect of DOX for glioma rats.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148414,Evaluation of psoralen ethosomes for topical delivery in rats by using in vivo microdialysis,S595131,R148416,Type of Lipid-based nanoparticle,R148395,Ethosomes,"This study aimed to improve skin permeation and deposition of psoralen by using ethosomes and to investigate real-time drug release in the deep skin in rats. We used a uniform design method to evaluate the effects of different ethosome formulations on entrapment efficiency and drug skin deposition. Using in vitro and in vivo methods, we investigated skin penetration and release from psoralen-loaded ethosomes in comparison with an ethanol tincture. In in vitro studies, the use of ethosomes was associated with a 6.56-fold greater skin deposition of psoralen than that achieved with the use of the tincture. In vivo skin microdialysis showed that the peak concentration and area under the curve of psoralen from ethosomes were approximately 3.37 and 2.34 times higher, respectively, than those of psoralen from the tincture. Moreover, it revealed that the percutaneous permeability of ethosomes was greater when applied to the abdomen than when applied to the chest or scapulas. Enhanced permeation and skin deposition of psoralen delivered by ethosomes may help reduce toxicity and improve the efficacy of long-term psoralen treatment.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148267,Enhanced delivery of etoposide across the blood–brain barrier to restrain brain tumor growth using melanotransferrin antibody- and tamoxifen-conjugated solid lipid nanoparticles,S594416,R148269,Uses drug,R142586,Etoposide,"Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood–brain barrier (BBB) and glioblastom multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA–TX–ETP–SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX–ETP–SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA–TX–ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA–TX–ETP-SLNs > TX–ETP-SLNs > ETP-SLNs > SLNs. The capability of MA–TX–ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA–TX–ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160844,Solid-state characterization of Felodipine–Soluplus amorphous solid dispersions,S641968,R160846,Uses drug,R160778,Felodipine,"Abstract The aim of the current study is to develop amorphous solid dispersion (SD) via hot melt extrusion technology to improve the solubility of a water-insoluble compound, felodipine (FEL). The solubility was dramatically increased by preparation of amorphous SDs via hot-melt extrusion with an amphiphilic polymer, Soluplus® (SOL). FEL was found to be miscible with SOL by calculating the solubility parameters. The solubility of FEL within SOL was determined to be in the range of 6.2–9.9% (w/w). Various techniques were applied to characterize the solid-state properties of the amorphous SDs. These included Fourier Transform Infrared Spectrometry spectroscopy and Raman spectroscopy to detect the formation of hydrogen bonding between the drug and the polymer. Scanning electron microscopy was performed to study the morphology of the SDs. Among all the hot-melt extrudates, FEL was found to be molecularly dispersed within the polymer matrix for the extrudates containing 10% drug, while few small crystals were detected in the 30 and 50% extrudates. In conclusion, solubility of FEL was enhanced while a homogeneous SD was achieved for 10% drug loading.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R149143,Ferumoxytol for treatment of iron deficiency anemia in patients with chronic kidney disease,S597620,R149145,Product,R149162,Ferumoxytol,"Background: Iron deficiency anemia (IDA) is a common problem in patients with chronic kidney disease (CKD). Use of intravenous (i.v.) iron effectively treats the resultant anemia, but available iron products have side effects or dosing regimens that limit safety and convenience. Objective: Ferumoxytol (Feraheme™) is a new i.v. iron product recently approved for use in treatment of IDA in CKD patients. This article reviews the structure, pharmacokinetics, and clinical trial results on ferumoxytol. The author also offers his opinions on the role of this product in clinical practice. Methods: This review encompasses important information contained in clinical and preclinical studies of ferumoxytol and is supplemented with information from the US Food and Drug Administration. Results/conclusion: Ferumoxytol offers substantial safety and superior efficacy compared with oral iron therapy. As ferumoxytol can be administered as 510 mg in < 1 min, it is substantially more convenient than other iron products in nondialysis patients. Although further experience with this product is needed in patients at higher risk of drug reactions, ferumoxytol is likely to be highly useful in the hospital and outpatient settings for treatment of IDA.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R149178,Correlation between Ferumoxytol Uptake in Tumor Lesions by MRI and Response to Nanoliposomal Irinotecan in Patients with Advanced Solid Tumors: A Pilot Study,S597692,R149180,Product,R149162,Ferumoxytol,"Purpose: To determine whether deposition characteristics of ferumoxytol (FMX) iron nanoparticles in tumors, identified by quantitative MRI, may predict tumor lesion response to nanoliposomal irinotecan (nal-IRI). Experimental Design: Eligible patients with previously treated solid tumors had FMX-MRI scans before and following (1, 24, and 72 hours) FMX injection. After MRI acquisition, R2* signal was used to calculate FMX levels in plasma, reference tissue, and tumor lesions by comparison with a phantom-based standard curve. Patients then received nal-IRI (70 mg/m2 free base strength) biweekly until progression. Two percutaneous core biopsies were collected from selected tumor lesions 72 hours after FMX or nal-IRI. Results: Iron particle levels were quantified by FMX-MRI in plasma, reference tissues, and tumor lesions in 13 of 15 eligible patients. On the basis of a mechanistic pharmacokinetic model, tissue permeability to FMX correlated with early FMX-MRI signals at 1 and 24 hours, while FMX tissue binding contributed at 72 hours. Higher FMX levels (ranked relative to median value of multiple evaluable lesions from 9 patients) were significantly associated with reduction in lesion size by RECIST v1.1 at early time points (P < 0.001 at 1 hour and P < 0.003 at 24 hours FMX-MRI, one-way ANOVA). No association was observed with post-FMX levels at 72 hours. Irinotecan drug levels in lesions correlated with patient's time on treatment (Spearman ρ = 0.7824; P = 0.0016). Conclusions: Correlation between FMX levels in tumor lesions and nal-IRI activity suggests that lesion permeability to FMX and subsequent tumor uptake may be a useful noninvasive and predictive biomarker for nal-IRI response in patients with solid tumors. Clin Cancer Res; 23(14); 3638–48. ©2017 AACR.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148275,"Galantamine-loaded solid–lipid nanoparticles for enhanced brain delivery: preparation, characterization, in vitro and in vivo evaluations",S594440,R148277,Uses drug,R148279,Galantamine,"Abstract Galantamine hydrobromide, a promising acetylcholinesterase inhibitor is reported to be associated with cholinergic side effects. Its poor brain penetration results in lower bioavailability to the target site. With an aim to overcome these limitations, solid–lipid nanoparticulate formulation of galantamine hydrobromide was developed employing biodegradable and biocompatible components. The selected galantamine hydrobromide-loaded solid–lipid nanoparticles offered nanocolloidal with size lower than 100 nm and maximum drug entrapment 83.42 ± 0.63%. In vitro drug release from these spherical drug-loaded nanoparticles was observed to be greater than 90% for a period of 24 h in controlled manner. In vivo evaluations demonstrated significant memory restoration capability in cognitive deficit rats in comparison with naive drug. The developed carriers offered approximately twice bioavailability to that of plain drug. Hence, the galantamine hydrobromide-loaded solid–lipid nanoparticles can be a promising vehicle for safe and effective delivery especially in disease like Alzheimer’s.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148337,The proton permeability of self-assembled polymersomes and their neuroprotection by enhancing a neuroprotective peptide across the blood–brain barrier after modification with lactoferrin,S594684,R148339,Uses drug,R148344,Humanin,"Biotherapeutics such as peptides possess strong potential for the treatment of intractable neurological disorders. However, because of their low stability and the impermeability of the blood-brain barrier (BBB), biotherapeutics are difficult to transport into brain parenchyma via intravenous injection. Herein, we present a novel poly(ethylene glycol)-poly(d,l-lactic-co-glycolic acid) polymersome-based nanomedicine with self-assembled bilayers, which was functionalized with lactoferrin (Lf-POS) to facilitate the transport of a neuroprotective peptide into the brain. The apparent diffusion coefficient (D*) of H(+) through the polymersome membrane was 5.659 × 10(-26) cm(2) s(-1), while that of liposomes was 1.017 × 10(-24) cm(2) s(-1). The stability of the polymersome membrane was much higher than that of liposomes. The uptake of polymersomes by mouse brain capillary endothelial cells proved that the optimal density of lactoferrin was 101 molecules per polymersome. Fluorescence imaging indicated that Lf101-POS was effectively transferred into the brain. In pharmacokinetics, compared with transferrin-modified polymersomes and cationic bovine serum albumin-modified polymersomes, Lf-POS obtained the greatest BBB permeability surface area and percentage of injected dose per gram (%ID per g). Furthermore, Lf-POS holding S14G-humanin protected against learning and memory impairment induced by amyloid-β25-35 in rats. Western blotting revealed that the nanomedicine provided neuroprotection against over-expression of apoptotic proteins exhibiting neurofibrillary tangle pathology in neurons. The results indicated that polymersomes can be exploited as a promising non-invasive nanomedicine capable of mediating peptide therapeutic delivery and controlling the release of drugs to the central nervous system.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544438,R137524,keywords,R137529,Hypoxia,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160791,Mechanochemical Synthesis of Pharmaceutical Cocrystal Suspensions via Hot Melt Extrusion: Feasibility Studies and Physicochemical Characterization,S641673,R160793,Uses drug,R160796,Ibuprofen,"Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R155590,"The Effect of Cyclodextrins on the In Vitro and In Vivo Properties of Insulin-Loaded Poly (D,L-Lactic-Co-Glycolic Acid) Microspheres: EFFECT OF CYCLODEXTRINS ON MICROSPHERES",S623608,R155592,Uses drug,R144473,Insulin,"In this work we describe the development and characterization of a new formulation of insulin (INS). Insulin was complexed with cyclodextrins (CD) in order to improve its solubility and stability being available as a dry powder, after encapsulation into poly (D,L-lactic-co-glycolic acid) (PLGA) microspheres. The complex INS : CD was encapsulated into microspheres in order to obtain particles with an average diameter between 2 and 6 microm. This system was able to induce significant reduction of the plasma glucose level in two rodent models, normal mice and diabetic rats, after intratracheal administration.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R155595,Encapsulation of insulin–cyclodextrin complex in PLGA microspheres: a new approach for prolonged pulmonary insulin delivery,S623627,R155597,Uses drug,R144473,Insulin,"The insulin administration by pulmonary route has been investigated in the last years with good perspectives as alternative for parenteral administration. However, it has been reported that insulin absorption after pulmonary administration is limited by various factors. Moreover, in the related studies one daily injection of long-acting insulin was necessary for a correct glycemic control. To abolish the insulin injection, the present study aimed to develop a new formulation for prolonged pulmonary insulin delivery based on the encapsulation of an insulin:dimethyl-β-cyclodextrin (INS:DM-β-CD) complex into PLGA microspheres. The molar ratio of insulin/cyclodextrin in the complex was equal to 1:5. The particles were obtained by the w/o/w solvent evaporation method. The inner aqueous phase of the w/o/w multiple emulsion contained the INS:DM-β-CD complex. The characteristics of the INS:DM-β-CD complex obtained were assessed by 1H-NMR spectroscopy and Circular Dichroism study. The average diameter of the microspheres prepared, evaluated by laser diffractometry, was 2.53 ± 1.8 µm and the percentage of insulin loading was 14.76 ± 1.1. The hypoglycemic response after intratracheal administration (3.0 I.U. kg−1) of INS:DM-β-CD complex-loaded microspheres to diabetic rats indicated an efficient and prolonged release of the hormone compared with others insulin formulations essayed.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160791,Mechanochemical Synthesis of Pharmaceutical Cocrystal Suspensions via Hot Melt Extrusion: Feasibility Studies and Physicochemical Characterization,S641675,R160795,Uses drug,R160797,isonicotinamide,"Engineered cocrystals offer an alternative solid drug form with tailored physicochemical properties. Interestingly, although cocrystals provide many new possibilities, they also present new challenges, particularly in regard to their design and large-scale manufacture. Current literature has primarily focused on the preparation and characterization of novel cocrystals typically containing only the drug and coformer, leaving the subsequent formulation less explored. In this paper we propose, for the first time, the use of hot melt extrusion for the mechanochemical synthesis of pharmaceutical cocrystals in the presence of a meltable binder. In this approach, we examine excipients that are amenable to hot melt extrusion, forming a suspension of cocrystal particulates embedded in a pharmaceutical matrix. Using ibuprofen and isonicotinamide as a model cocrystal reagent pair, formulations extruded with a small molecular matrix carrier (xylitol) were examined to be intimate mixtures wherein the newly formed cocrystal particulates were physically suspended in a matrix. With respect to formulations extruded using polymeric carriers (Soluplus and Eudragit EPO, respectively), however, there was no evidence within PXRD patterns of either crystalline ibuprofen or the cocrystal. Importantly, it was established in this study that an appropriate carrier for a cocrystal reagent pair during HME processing should satisfy certain criteria including limited interaction with parent reagents and cocrystal product, processing temperature sufficiently lower than the onset of cocrystal Tm, low melt viscosity, and rapid solidification upon cooling.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144137,Low active loading of cargo into engineered extracellular vesicles results in inefficient miRNA mimic delivery,S576976,R144142,Membrane protein,R144147,Lamp2a,"ABSTRACT Extracellular vesicles (EVs) hold great potential as novel systems for nucleic acid delivery due to their natural composition. Our goal was to load EVs with microRNA that are synthesized by the cells that produce the EVs. HEK293T cells were engineered to produce EVs expressing a lysosomal associated membrane, Lamp2a fusion protein. The gene encoding pre-miR-199a was inserted into an artificial intron of the Lamp2a fusion protein. The TAT peptide/HIV-1 transactivation response (TAR) RNA interacting peptide was exploited to enhance the EV loading of the pre-miR-199a containing a modified TAR RNA loop. Computational modeling demonstrated a stable interaction between the modified pre-miR-199a loop and TAT peptide. EMSA gel shift, recombinant Dicer processing and luciferase binding assays confirmed the binding, processing and functionality of the modified pre-miR-199a. The TAT-TAR interaction enhanced the loading of the miR-199a into EVs by 65-fold. Endogenously loaded EVs were ineffective at delivering active miR-199a-3p therapeutic to recipient SK-Hep1 cells. While the low degree of miRNA loading into EVs through this approach resulted in inefficient distribution of RNA cargo into recipient cells, the TAT TAR strategy to load miRNA into EVs may be valuable in other drug delivery approaches involving miRNA mimics or other hairpin containing RNAs.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144328,"Preparation, Biodistribution and Neurotoxicity of Liposomal Cisplatin following Convection Enhanced Delivery in Normal and F98 Glioma Bearing Rats",S577916,R144329,Type of nanocarrier,R144264,Liposomes,"The purpose of this study was to evaluate two novel liposomal formulations of cisplatin as potential therapeutic agents for treatment of the F98 rat glioma. The first was a commercially produced agent, Lipoplatin™, which currently is in a Phase III clinical trial for treatment of non-small cell lung cancer (NSCLC). The second, produced in our laboratory, was based on the ability of cisplatin to form coordination complexes with lipid cholesteryl hemisuccinate (CHEMS). The in vitro tumoricidal activity of the former previously has been described in detail by other investigators. The CHEMS liposomal formulation had a Pt loading efficiency of 25% and showed more potent in vitro cytotoxicity against F98 glioma cells than free cisplatin at 24 h. In vivo CHEMS liposomes showed high retention at 24 h after intracerebral (i.c.) convection enhanced delivery (CED) to F98 glioma bearing rats. Neurotoxicologic studies were carried out in non-tumor bearing Fischer rats following i.c. CED of Lipoplatin™ or CHEMS liposomes or their “hollow” counterparts. Unexpectedly, Lipoplatin™ was highly neurotoxic when given i.c. by CED and resulted in death immediately following or within a few days after administration. Similarly “hollow” Lipoplatin™ liposomes showed similar neurotoxicity indicating that this was due to the liposomes themselves rather than the cisplatin. This was particularly surprising since Lipoplatin™ has been well tolerated when administered intravenously. In contrast, CHEMS liposomes and their “hollow” counterparts were clinically well tolerated. However, a variety of dose dependent neuropathologic changes from none to severe were seen at either 10 or 14 d following their administration. These findings suggest that further refinements in the design and formulation of cisplatin containing liposomes will be required before they can be administered i.c. by CED for the treatment of brain tumors and that a formulation that may be safe when given systemically may be highly neurotoxic when administered directly into the brain.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R147032,Glycosylated Sertraline-Loaded Liposomes for Brain Targeting: QbD Study of Formulation Variabilities and Brain Transport,S590162,R147034,Type of nanocarrier,R147232,Liposomes,"Effectiveness of CNS-acting drugs depends on the localization, targeting, and capacity to be transported through the blood–brain barrier (BBB) which can be achieved by designing brain-targeting delivery vectors. Hence, the objective of this study was to screen the formulation and process variables affecting the performance of sertraline (Ser-HCl)-loaded pegylated and glycosylated liposomes. The prepared vectors were characterized for Ser-HCl entrapment, size, surface charge, release behavior, and in vitro transport through the BBB. Furthermore, the compatibility among liposomal components was assessed using SEM, FTIR, and DSC analysis. Through a thorough screening study, enhancement of Ser-HCl entrapment, nanosized liposomes with low skewness, maximized stability, and controlled drug leakage were attained. The solid-state characterization revealed remarkable interaction between Ser-HCl and the charging agent to determine drug entrapment and leakage. Moreover, results of liposomal transport through mouse brain endothelialpolyoma cells demonstrated greater capacity of the proposed glycosylated liposomes to target the cerebellar due to its higher density of GLUT1 and higher glucose utilization. This transport capacity was confirmed by the inhibiting action of both cytochalasin B and phenobarbital. Using C6 glioma cells model, flow cytometry, time-lapse live cell imaging, and in vivo NIR fluorescence imaging demonstrated that optimized glycosylated liposomes can be transported through the BBB by classical endocytosis, as well as by specific transcytosis. In conclusion, the current study proposed a thorough screening of important formulation and process variabilities affecting brain-targeting liposomes for further scale-up processes.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550713,R138609,keywords,L387537,mucoadhesive,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144256,"Nanoemulsions: formation, properties and applications",S577488,R144258,Type of nanocarrier,R144259,Nanoemulsions,"Nanoemulsions are kinetically stable liquid-in-liquid dispersions with droplet sizes on the order of 100 nm. Their small size leads to useful properties such as high surface area per unit volume, robust stability, optically transparent appearance, and tunable rheology. Nanoemulsions are finding application in diverse areas such as drug delivery, food, cosmetics, pharmaceuticals, and material synthesis. Additionally, they serve as model systems to understand nanoscale colloidal dispersions. High and low energy methods are used to prepare nanoemulsions, including high pressure homogenization, ultrasonication, phase inversion temperature and emulsion inversion point, as well as recently developed approaches such as bubble bursting method. In this review article, we summarize the major methods to prepare nanoemulsions, theories to predict droplet size, physical conditions and chemical additives which affect droplet stability, and recent applications.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544436,R137524,keywords,R111075,Nanoparticles,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546382,R138064,keywords,R111075,Nanoparticles,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550714,R138609,keywords,R111075,Nanoparticles,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138621,Targeted Delivery of Insoluble Cargo (Paclitaxel) by PEGylated Chitosan Nanoparticles Grafted with Arg-Gly-Asp (RGD),S550820,R138623,keywords,R111075,Nanoparticles,"Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138611,Paclitaxel/Chitosan Nanosupensions Provide Enhanced Intravesical Bladder Cancer Therapy with Sustained and Prolonged Delivery of Paclitaxel,S550761,R138615,keywords,R138618,nanosupension,"Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144328,"Preparation, Biodistribution and Neurotoxicity of Liposomal Cisplatin following Convection Enhanced Delivery in Normal and F98 Glioma Bearing Rats",S577932,R144329,Disadvantages,L404507,Neurotoxicity,"The purpose of this study was to evaluate two novel liposomal formulations of cisplatin as potential therapeutic agents for treatment of the F98 rat glioma. The first was a commercially produced agent, Lipoplatin™, which currently is in a Phase III clinical trial for treatment of non-small cell lung cancer (NSCLC). The second, produced in our laboratory, was based on the ability of cisplatin to form coordination complexes with lipid cholesteryl hemisuccinate (CHEMS). The in vitro tumoricidal activity of the former previously has been described in detail by other investigators. The CHEMS liposomal formulation had a Pt loading efficiency of 25% and showed more potent in vitro cytotoxicity against F98 glioma cells than free cisplatin at 24 h. In vivo CHEMS liposomes showed high retention at 24 h after intracerebral (i.c.) convection enhanced delivery (CED) to F98 glioma bearing rats. Neurotoxicologic studies were carried out in non-tumor bearing Fischer rats following i.c. CED of Lipoplatin™ or CHEMS liposomes or their “hollow” counterparts. Unexpectedly, Lipoplatin™ was highly neurotoxic when given i.c. by CED and resulted in death immediately following or within a few days after administration. Similarly “hollow” Lipoplatin™ liposomes showed similar neurotoxicity indicating that this was due to the liposomes themselves rather than the cisplatin. This was particularly surprising since Lipoplatin™ has been well tolerated when administered intravenously. In contrast, CHEMS liposomes and their “hollow” counterparts were clinically well tolerated. However, a variety of dose dependent neuropathologic changes from none to severe were seen at either 10 or 14 d following their administration. 
These findings suggest that further refinements in the design and formulation of cisplatin containing liposomes will be required before they can be administered i.c. by CED for the treatment of brain tumors and that a formulation that may be safe when given systemically may be highly neurotoxic when administered directly into the brain.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160810,Influence of Process and Formulation Parameters on Dissolution and Stability Characteristics of Kollidon® VA 64 Hot-Melt Extrudates,S641757,R160812,Uses drug,R160808,Nifedipine,"The objective of the present study was to investigate the effects of processing variables and formulation factors on the characteristics of hot-melt extrudates containing a copolymer (Kollidon® VA 64). Nifedipine was used as a model drug in all of the extrudates. Differential scanning calorimetry (DSC) was utilized on the physical mixtures and melts of varying drug–polymer concentrations to study their miscibility. The drug–polymer binary mixtures were studied for powder flow, drug release, and physical and chemical stabilities. The effects of moisture absorption on the content uniformity of the extrudates were also studied. Processing the materials at lower barrel temperatures (115–135°C) and higher screw speeds (50–100 rpm) exhibited higher post-processing drug content (~99–100%). DSC and X-ray diffraction studies confirmed that melt extrusion of drug–polymer mixtures led to the formation of solid dispersions. Interestingly, the extrusion process also enhanced the powder flow characteristics, which occurred irrespective of the drug load (up to 40% w/w). Moreover, the content uniformity of the extrudates, unlike the physical mixtures, was not sensitive to the amount of moisture absorbed. The extrusion conditions did not influence drug release from the extrudates; however, release was greatly affected by the drug loading. Additionally, the drug release from the physical mixture of nifedipine–Kollidon® VA 64 was significantly different when compared to the corresponding extrudates (f2 = 36.70). The extrudates exhibited both physical and chemical stabilities throughout the period of study. 
Overall, hot-melt extrusion technology in combination with Kollidon® VA 64 produced extrudates capable of higher drug loading, with enhanced flow characteristics, and excellent stability.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160835,The Influence of Drug Physical State on the Dissolution Enhancement of Solid Dispersions Prepared Via Hot-Melt Extrusion: A Case Study Using Olanzapine,S641913,R160837,Uses drug,R160838,Olanzapine,"In this study, we examine the relationship between the physical structure and dissolution behavior of olanzapine (OLZ) prepared via hot-melt extrusion in three polymers [polyvinylpyrrolidone (PVP) K30, polyvinylpyrrolidone-co-vinyl acetate (PVPVA) 6:4, and Soluplus® (SLP)]. In particular, we examine whether full amorphicity is necessary to achieve a favorable dissolution profile. Drug–polymer miscibility was estimated using melting point depression and Hansen solubility parameters. Solid dispersions were characterized using differential scanning calorimetry, X-ray powder diffraction, and scanning electron microscopy. All the polymers were found to be miscible with OLZ in a decreasing order of PVP>PVPVA>SLP. At a lower extrusion temperature (160°C), PVP generated fully amorphous dispersions with OLZ, whereas the formulations with PVPVA and SLP contained 14%–16% crystalline OLZ. Increasing the extrusion temperature to 180°C allowed the preparation of fully amorphous systems with PVPVA and SLP. Despite these differences, the dissolution rates of these preparations were comparable, with PVP showing a lower release rate despite being fully amorphous. These findings suggested that, at least in the particular case of OLZ, the absence of crystalline material may not be critical to the dissolution performance. We suggest alternative key factors determining dissolution, particularly the dissolution behavior of the polymers themselves.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546385,R138064,keywords,L384170,optimization,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S546894,R137524,keywords,L384535,Paclitaxel,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. 
Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel; Drug delivery; Nanoparticle; Radiotherapy; Hypoxia; Human tumor cells; cellular uptake",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138043,"Paclitaxel-loaded PLGA nanoparticles surface modified with transferrin and Pluronic® P85, an in vitro cell line and in vivo biodistribution studies on rat model",S546883,R138045,keywords,R135745,Paclitaxel,"The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic® P85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluated for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague–Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546884,R138064,keywords,R135745,Paclitaxel,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550715,R138609,keywords,R135745,Paclitaxel,"Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138611,Paclitaxel/Chitosan Nanosuspensions Provide Enhanced Intravesical Bladder Cancer Therapy with Sustained and Prolonged Delivery of Paclitaxel,S550759,R138615,keywords,R135745,Paclitaxel,"Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosuspensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosuspensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552015,R138924,keywords,R135745,Paclitaxel,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544460,R137524,Uses drug,R135745,Paclitaxel,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. 
Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel; Drug delivery; Nanoparticle; Radiotherapy; Hypoxia; Human tumor cells; cellular uptake",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138043,"Paclitaxel-loaded PLGA nanoparticles surface modified with transferrin and Pluronic® P85, an in vitro cell line and in vivo biodistribution studies on rat model",S546913,R138045,Uses drug,R135745,Paclitaxel,"The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic® P85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluated for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague–Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546914,R138064,Uses drug,R135745,Paclitaxel,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550726,R138609,Uses drug,R135745,Paclitaxel,"Purpose: To develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel. Methods: Chitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231). Results: These studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX. Conclusion: These advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138611,Paclitaxel/Chitosan Nanosuspensions Provide Enhanced Intravesical Bladder Cancer Therapy with Sustained and Prolonged Delivery of Paclitaxel,S550774,R138615,Uses drug,R135745,Paclitaxel,"Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosuspensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosuspensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138621,Targeted Delivery of Insoluble Cargo (Paclitaxel) by PEGylated Chitosan Nanoparticles Grafted with Arg-Gly-Asp (RGD),S550801,R138623,Uses drug,R135745,Paclitaxel,"Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138909,Targeted Paclitaxel by Conjugation to Iron Oxide and Gold Nanoparticles,S551987,R138911,Uses drug,R135745,Paclitaxel,"The Fe(3)O(4) nanoparticles, tailored with maleimidyl 3-succinimidopropionate ligands, were conjugated with paclitaxel molecules that were attached with a poly(ethylene glycol) (PEG) spacer through a phosphodiester moiety at the (C-2')-OH position. The average number of paclitaxel molecules/nanoparticles was determined as 83. These nanoparticles liberated paclitaxel molecules upon exposure to phosphodiesterase.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552026,R138924,Uses drug,R135745,Paclitaxel,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R142718,Cellular Uptake Mechanism of Paclitaxel Nanocrystals Determined by Confocal Imaging and Kinetic Measurement,S573408,R142720,Uses drug,R138126,Paclitaxel,"Nanocrystal formulation has become a viable solution for delivering poorly soluble drugs including chemotherapeutic agents. The purpose of this study was to examine cellular uptake of paclitaxel nanocrystals by confocal imaging and concentration measurement. It was found that drug nanocrystals could be internalized by KB cells at much higher concentrations than a conventional, solubilized formulation. The imaging and quantitative results suggest that nanocrystals could be directly taken up by cells as solid particles, likely via endocytosis. Moreover, it was found that polymer treatment to drug nanocrystals, such as surface coating and lattice entrapment, significantly influenced the cellular uptake. While drug molecules are in the most stable physical state, nanocrystals of a poorly soluble drug are capable of achieving concentrated intracellular presence enabling needed therapeutic effects.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R142744,Hybrid Nanocrystals: Achieving Concurrent Therapeutic and Bioimaging Functionalities toward Solid Tumors,S573646,R142746,Uses drug,R138126,Paclitaxel,"Bioimaging and therapeutic agents accumulated in ectopic tumors following intravenous administration of hybrid nanocrystals to tumor-bearing mice. Solid, nanosized paclitaxel crystals physically incorporated fluorescent molecules throughout the crystal lattice and retained fluorescent properties in the solid state. Hybrid nanocrystals were significantly localized in solid tumors and remained in the tumor for several days. An anticancer effect is expected of these hybrid nanocrystals.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R151616,Influence of Hydroxypropyl β-Cyclodextrin on the Corneal Permeation of Pilocarpine,S607823,R151618,Uses drug,R151612,Pilocarpine,Abstract The influence of hydroxypropyl β-cyclodextrin (HPβCD) on the corneal permeation of pilocarpine nitrate was investigated by an in vitro permeability study using isolated rabbit cornea. Pupillary-response pattern to pilocarpine nitrate with and without HPβCD was examined in rabbit eye. Corneal permeation of pilocarpine nitrate was found to be four times higher after adding HPβCD into the formulation. The reduction of pupil diameter (miosis) by pilocarpine nitrate was significantly increased as a result of HPβCD addition into the simple aqueous solution of the active substance. The highest miotic response was obtained with the formulation prepared in a vehicle of Carbopol® 940. It is suggested that ocular bioavailability of pilocarpine nitrate could be improved by the addition of HPβCD.,TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144137,Low active loading of cargo into engineered extracellular vesicles results in inefficient miRNA mimic delivery,S576981,R144142,Cargo,L403932,Pre-miR-199a,"ABSTRACT Extracellular vesicles (EVs) hold great potential as novel systems for nucleic acid delivery due to their natural composition. Our goal was to load EVs with microRNA that are synthesized by the cells that produce the EVs. HEK293T cells were engineered to produce EVs expressing a lysosomal associated membrane, Lamp2a fusion protein. The gene encoding pre-miR-199a was inserted into an artificial intron of the Lamp2a fusion protein. The TAT peptide/HIV-1 transactivation response (TAR) RNA interacting peptide was exploited to enhance the EV loading of the pre-miR-199a containing a modified TAR RNA loop. Computational modeling demonstrated a stable interaction between the modified pre-miR-199a loop and TAT peptide. EMSA gel shift, recombinant Dicer processing and luciferase binding assays confirmed the binding, processing and functionality of the modified pre-miR-199a. The TAT-TAR interaction enhanced the loading of the miR-199a into EVs by 65-fold. Endogenously loaded EVs were ineffective at delivering active miR-199a-3p therapeutic to recipient SK-Hep1 cells. While the low degree of miRNA loading into EVs through this approach resulted in inefficient distribution of RNA cargo into recipient cells, the TAT TAR strategy to load miRNA into EVs may be valuable in other drug delivery approaches involving miRNA mimics or other hairpin containing RNAs.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144353,Exosome-based nanocarriers as bio-inspired and versatile vehicles for drug delivery: recent advances and challenges,S578066,R144356,Cargo,R142141,Protein,"Recent decades have witnessed the fast and impressive development of nanocarriers as a drug delivery system. Considering the safety, delivery efficiency and stability of nanocarriers, there are many obstacles in accomplishing successful clinical translation of these nanocarrier-based drug delivery systems. The gap has urged drug delivery scientists to develop innovative nanocarriers with high compatibility, stability and longer circulation time. Exosomes are nanometer-sized, lipid-bilayer-enclosed extracellular vesicles secreted by many types of cells. Exosomes serving as versatile drug vehicles have attracted increasing attention due to their inherent ability of shuttling proteins, lipids and genes among cells and their natural affinity to target cells. Attractive features of exosomes, such as nanoscopic size, low immunogenicity, high biocompatibility, encapsulation of various cargoes and the ability to overcome biological barriers, distinguish them from other nanocarriers. To date, exosome-based nanocarriers delivering small molecule drugs as well as bioactive macromolecules have been developed for the treatment of many prevalent and obstinate diseases including cancer, CNS disorders and some other degenerative diseases. Exosome-based nanocarriers have a huge prospect in overcoming many hindrances encountered in drug and gene delivery. This review highlights the advances as well as challenges of exosome-based nanocarriers as drug vehicles. Special focus has been placed on the advantages of exosomes in delivering various cargoes and in treating obstinate diseases, aiming to offer new insights for exploring exosomes in the field of drug delivery.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148414,Evaluation of psoralen ethosomes for topical delivery in rats by using in vivo microdialysis,S595132,R148416,Uses drug,R148418,Psoralen,"This study aimed to improve skin permeation and deposition of psoralen by using ethosomes and to investigate real-time drug release in the deep skin in rats. We used a uniform design method to evaluate the effects of different ethosome formulations on entrapment efficiency and drug skin deposition. Using in vitro and in vivo methods, we investigated skin penetration and release from psoralen-loaded ethosomes in comparison with an ethanol tincture. In in vitro studies, the use of ethosomes was associated with a 6.56-fold greater skin deposition of psoralen than that achieved with the use of the tincture. In vivo skin microdialysis showed that the peak concentration and area under the curve of psoralen from ethosomes were approximately 3.37 and 2.34 times higher, respectively, than those of psoralen from the tincture. Moreover, it revealed that the percutaneous permeability of ethosomes was greater when applied to the abdomen than when applied to the chest or scapulas. Enhanced permeation and skin deposition of psoralen delivered by ethosomes may help reduce toxicity and improve the efficacy of long-term psoralen treatment.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544437,R137524,keywords,R137528,Radiotherapy,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505740,R110815,keywords,R72355,Resveratrol,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505925,R110815,Uses drug,R72355,Resveratrol,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R147032,Glycosylated Sertraline-Loaded Liposomes for Brain Targeting: QbD Study of Formulation Variabilities and Brain Transport,S590153,R147034,Uses drug,R147229,Sertraline,"Effectiveness of CNS-acting drugs depends on the localization, targeting, and capacity to be transported through the blood–brain barrier (BBB) which can be achieved by designing brain-targeting delivery vectors. Hence, the objective of this study was to screen the formulation and process variables affecting the performance of sertraline (Ser-HCl)-loaded pegylated and glycosylated liposomes. The prepared vectors were characterized for Ser-HCl entrapment, size, surface charge, release behavior, and in vitro transport through the BBB. Furthermore, the compatibility among liposomal components was assessed using SEM, FTIR, and DSC analysis. Through a thorough screening study, enhancement of Ser-HCl entrapment, nanosized liposomes with low skewness, maximized stability, and controlled drug leakage were attained. The solid-state characterization revealed remarkable interaction between Ser-HCl and the charging agent to determine drug entrapment and leakage. Moreover, results of liposomal transport through mouse brain endothelial polyoma cells demonstrated greater capacity of the proposed glycosylated liposomes to target the cerebellar due to its higher density of GLUT1 and higher glucose utilization. This transport capacity was confirmed by the inhibiting action of both cytochalasin B and phenobarbital. Using C6 glioma cells model, flow cytometry, time-lapse live cell imaging, and in vivo NIR fluorescence imaging demonstrated that optimized glycosylated liposomes can be transported through the BBB by classical endocytosis, as well as by specific transcytosis. In conclusion, the current study proposed a thorough screening of important formulation and process variabilities affecting brain-targeting liposomes for further scale-up processes.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144478,Co-delivery of doxorubicin and siRNA for glioma therapy by a brain targeting system: angiopep-2-modified poly(lactic-co-glycolic acid) nanoparticles,S578679,R144480,Uses drug,R72219,siRNA,"Abstract It is very challenging to treat brain cancer because of the blood–brain barrier (BBB) restricting therapeutic drug or gene to access the brain. In this research project, angiopep-2 (ANG) was used as a brain-targeted peptide for preparing multifunctional ANG-modified poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs), which encapsulated both doxorubicin (DOX) and epidermal growth factor receptor (EGFR) siRNA, designated as ANG/PLGA/DOX/siRNA. This system could efficiently deliver DOX and siRNA into U87MG cells leading to significant cell inhibition, apoptosis and EGFR silencing in vitro. It demonstrated that this drug system was capable of penetrating the BBB in vivo, resulting in more drugs accumulation in the brain. The animal study using the brain orthotopic U87MG glioma xenograft model indicated that the ANG-targeted co-delivery of DOX and EGFR siRNA resulted in not only the prolongation of the life span of the glioma-bearing mice but also an obvious cell apoptosis in glioma tissue.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160835,The Influence of Drug Physical State on the Dissolution Enhancement of Solid Dispersions Prepared Via Hot-Melt Extrusion: A Case Study Using Olanzapine,S641917,R160837,Carrier for hot melt extrusion,R160701,Soluplus®,"In this study, we examine the relationship between the physical structure and dissolution behavior of olanzapine (OLZ) prepared via hot-melt extrusion in three polymers [polyvinylpyrrolidone (PVP) K30, polyvinylpyrrolidone-co-vinyl acetate (PVPVA) 6:4, and Soluplus® (SLP)]. In particular, we examine whether full amorphicity is necessary to achieve a favorable dissolution profile. Drug–polymer miscibility was estimated using melting point depression and Hansen solubility parameters. Solid dispersions were characterized using differential scanning calorimetry, X-ray powder diffraction, and scanning electron microscopy. All the polymers were found to be miscible with OLZ in a decreasing order of PVP>PVPVA>SLP. At a lower extrusion temperature (160°C), PVP generated fully amorphous dispersions with OLZ, whereas the formulations with PVPVA and SLP contained 14%–16% crystalline OLZ. Increasing the extrusion temperature to 180°C allowed the preparation of fully amorphous systems with PVPVA and SLP. Despite these differences, the dissolution rates of these preparations were comparable, with PVP showing a lower release rate despite being fully amorphous. These findings suggested that, at least in the particular case of OLZ, the absence of crystalline material may not be critical to the dissolution performance. We suggest alternative key factors determining dissolution, particularly the dissolution behavior of the polymers themselves.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160844,Solid-state characterization of Felodipine–Soluplus amorphous solid dispersions,S641969,R160846,Carrier for hot melt extrusion,R160701,Soluplus®,"Abstract The aim of the current study is to develop amorphous solid dispersion (SD) via hot melt extrusion technology to improve the solubility of a water-insoluble compound, felodipine (FEL). The solubility was dramatically increased by preparation of amorphous SDs via hot-melt extrusion with an amphiphilic polymer, Soluplus® (SOL). FEL was found to be miscible with SOL by calculating the solubility parameters. The solubility of FEL within SOL was determined to be in the range of 6.2–9.9% (w/w). Various techniques were applied to characterize the solid-state properties of the amorphous SDs. These included Fourier Transform Infrared Spectrometry spectroscopy and Raman spectroscopy to detect the formation of hydrogen bonding between the drug and the polymer. Scanning electron microscopy was performed to study the morphology of the SDs. Among all the hot-melt extrudates, FEL was found to be molecularly dispersed within the polymer matrix for the extrudates containing 10% drug, while few small crystals were detected in the 30 and 50% extrudates. In conclusion, solubility of FEL was enhanced while a homogeneous SD was achieved for 10% drug loading.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R160813,Development of In Vitro–In Vivo Correlation for Amorphous Solid Dispersion Immediate-Release Suvorexant Tablets and Application to Clinically Relevant Dissolution Specifications and In-Process Controls,S641782,R160815,Uses drug,R160816,Suvorexant,"Although in vitro-in vivo correlations (IVIVCs) are commonly pursued for modified-release products, there are limited reports of successful IVIVCs for immediate-release (IR) formulations. This manuscript details the development of a Multiple Level C IVIVC for the amorphous solid dispersion formulation of suvorexant, a BCS class II compound, and its application to establishing dissolution specifications and in-process controls. Four different 40 mg batches were manufactured at different tablet hardnesses to produce distinct dissolution profiles. These batches were evaluated in a relative bioavailability clinical study in healthy volunteers. Although no differences were observed for the total exposure (AUC) of the different batches, a clear relationship between dissolution and Cmax was observed. A validated Multiple Level C IVIVC against Cmax was developed for the 10, 15, 20, 30, and 45 min dissolution time points and the tablet disintegration time. The relationship established between tablet tensile strength and dissolution was subsequently used to inform suitable tablet hardness ranges within acceptable Cmax limits. This is the first published report for a validated Multiple Level C IVIVC for an IR solid dispersion formulation demonstrating how this approach can facilitate Quality by Design in formulation development and help toward clinically relevant specifications and in-process controls.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148267,Enhanced delivery of etoposide across the blood–brain barrier to restrain brain tumor growth using melanotransferrin antibody- and tamoxifen-conjugated solid lipid nanoparticles,S594421,R148269,Uses drug,R148274,Tamoxifen,"Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood–brain barrier (BBB) and glioblastoma multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA–TX–ETP–SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX–ETP–SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA–TX–ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA–TX–ETP-SLNs > TX–ETP-SLNs > ETP-SLNs > SLNs. The capability of MA–TX–ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA–TX–ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R149143,Ferumoxytol for treatment of iron deficiency anemia in patients with chronic kidney disease,S597614,R149145,Field of application,R149158,Therapy,"Background: Iron deficiency anemia (IDA) is a common problem in patients with chronic kidney disease (CKD). Use of intravenous (i.v.) iron effectively treats the resultant anemia, but available iron products have side effects or dosing regimens that limit safety and convenience. Objective: Ferumoxytol (Feraheme™) is a new i.v. iron product recently approved for use in treatment of IDA in CKD patients. This article reviews the structure, pharmacokinetics, and clinical trial results on ferumoxytol. The author also offers his opinions on the role of this product in clinical practice. Methods: This review encompasses important information contained in clinical and preclinical studies of ferumoxytol and is supplemented with information from the US Food and Drug Administration. Results/conclusion: Ferumoxytol offers substantial safety and superior efficacy compared with oral iron therapy. As ferumoxytol can be administered as 510 mg in < 1 min, it is substantially more convenient than other iron products in nondialysis patients. Although further experience with this product is needed in patients at higher risk of drug reactions, ferumoxytol is likely to be highly useful in the hospital and outpatient settings for treatment of IDA.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144378,In vivo biodistribution of venlafaxine-PLGA nanoparticles for brain delivery: plain vs. functionalized nanoparticles,S578203,R144382,Uses drug,R144383,Venlafaxine,"ABSTRACT Background: Actually, no drugs provide therapeutic benefit to approximately one-third of depressed patients. Depression is predicted to become the first global disease by 2030. So, new therapeutic interventions are imperative. Research design and methods: Venlafaxine-loaded poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs) were surface functionalized with two ligands against transferrin receptor to enhance access to brain. An in vitro blood–brain barrier model using hCMEC/D3 cell line was developed to evaluate permeability. In vivo biodistribution studies were performed using C57/bl6 mice. Particles were administered intranasal and main organs were analyzed. Results: Particles were obtained as a lyophilized powder easily to re-suspend. Internalization and permeability studies showed the following cell association sequence: TfRp-NPs>Tf-NPs>plain NPs. Permeability studies also showed that encapsulated VLF was not affected by P-gP pump efflux increasing its concentration in the basolateral side after 24 h. In vivo studies showed that 25% of plain NPs reach the brain after 30 min of one intranasal administration while less than 5% of functionalized NPs get the target. Conclusions: Plain NPs showed the highest ability to reach the brain vs. functionalized NPs after 30 min by intranasal administration. We suggest plain NPs probably travel via direct nose-to-brain route whereas functionalized NPs reach the brain by receptor-mediated endocytosis.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R148289,Vincristine and temozolomide combined chemotherapy for the treatment of glioma: a comparison of solid lipid nanoparticles and nanostructured lipid carriers for dual drugs delivery,S594513,R148292,Uses drug,R148300,Vincristine,"Abstract Context: Glioma is a common malignant brain tumor originating in the central nervous system. Efficient delivery of therapeutic agents to the cells and tissues is a difficult challenge. Co-delivery of anticancer drugs into the cancer cells or tissues by multifunctional nanocarriers may provide a new paradigm in cancer treatment. Objective: In this study, solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) were constructed for co-delivery of vincristine (VCR) and temozolomide (TMZ) to develop the synergetic therapeutic action of the two drugs. The antitumor effects of these two systems were compared to provide a better choice for gliomatosis cerebri treatment. Methods: VCR- and TMZ-loaded SLNs (VT-SLNs) and NLCs (VT-NLCs) were formulated. Their particle size, zeta potential, drug encapsulation efficiency (EE) and drug loading capacity were evaluated. The single TMZ-loaded SLNs and NLCs were also prepared as contrast. Anti-tumor efficacies of the two kinds of carriers were evaluated on U87 malignant glioma cells and mice bearing malignant glioma model. Results: Significantly better glioma inhibition was observed on NLCs formulations than SLNs, and dual drugs displayed the highest antitumor efficacy in vivo and in vitro than all the other formulations used. Conclusion: VT-NLCs can deliver VCR and TMZ into U87MG cells more efficiently, and inhibition efficacy is higher than VT-SLNs. This dual drugs-loaded NLCs could be an outstanding drug delivery system to achieve excellent therapeutic efficiency for the treatment of malignant gliomatosis cerebri.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R155615,Inhaled Voriconazole for Prevention of Invasive Pulmonary Aspergillosis,S623718,R155617,Uses drug,R155618,Voriconazole,"ABSTRACT Targeted airway delivery of antifungals as prophylaxis against invasive aspergillosis may lead to high lung drug concentrations while avoiding toxicities associated with systemically administered agents. We evaluated the effectiveness of aerosolizing the intravenous formulation of voriconazole as prophylaxis against invasive pulmonary aspergillosis caused by Aspergillus fumigatus in an established murine model. Inhaled voriconazole significantly improved survival and limited the extent of invasive disease, as assessed by histopathology, compared to control and amphotericin B treatments.",TRUE,noun
R67,Medicinal Chemistry and Pharmaceutics,R144268,"PEG–lipid micelles as drug carriers: physiochemical attributes, formulation principles and biological implication",S577516,R144270,Composition,L404257,Water,"Abstract PEG–lipid micelles, primarily conjugates of polyethylene glycol (PEG) and distearyl phosphatidylethanolamine (DSPE) or PEG–DSPE, have emerged as promising drug-delivery carriers to address the shortcomings associated with new molecular entities with suboptimal biopharmaceutical attributes. The flexibility in PEG–DSPE design coupled with the simplicity of physical drug entrapment have distinguished PEG–lipid micelles as versatile and effective drug carriers for cancer therapy. They were shown to overcome several limitations of poorly soluble drugs such as non-specific biodistribution and targeting, lack of water solubility and poor oral bioavailability. Therefore, considerable efforts have been made to exploit the full potential of these delivery systems; to entrap poorly soluble drugs and target pathological sites both passively through the enhanced permeability and retention (EPR) effect and actively by linking the terminal PEG groups with targeting ligands, which were shown to increase delivery efficiency and tissue specificity. This article reviews the current state of PEG–lipid micelles as delivery carriers for poorly soluble drugs, their biological implications and recent developments in exploring their active targeting potential. In addition, this review sheds light on the physical properties of PEG–lipid micelles and their relevance to the inherent advantages and applications of PEG–lipid micelles for drug delivery.",TRUE,noun
R63,Molecular and Cellular Neuroscience,R110387,Aldehyde dehydrogenase 2 activity and aldehydic load contribute to neuroinflammation and Alzheimer’s disease related pathology,S505202,R110390,keywords,R110940,Neuroinflammation,"Abstract Aldehyde dehydrogenase 2 deficiency (ALDH2*2) causes facial flushing in response to alcohol consumption in approximately 560 million East Asians. Recent meta-analysis demonstrated the potential link between ALDH2*2 mutation and Alzheimer’s Disease (AD). Other studies have linked chronic alcohol consumption as a risk factor for AD. In the present study, we show that fibroblasts of an AD patient that also has an ALDH2*2 mutation or overexpression of ALDH2*2 in fibroblasts derived from AD patients harboring ApoE ε4 allele exhibited increased aldehydic load, oxidative stress, and increased mitochondrial dysfunction relative to healthy subjects and exposure to ethanol exacerbated these dysfunctions. In an in vivo model, daily exposure of WT mice to ethanol for 11 weeks resulted in mitochondrial dysfunction, oxidative stress and increased aldehyde levels in their brains and these pathologies were greater in ALDH2*2/*2 (homozygous) mice. Following chronic ethanol exposure, the levels of the AD-associated protein, amyloid-β, and neuroinflammation were higher in the brains of the ALDH2*2/*2 mice relative to WT. Cultured primary cortical neurons of ALDH2*2/*2 mice showed increased sensitivity to ethanol and there was a greater activation of their primary astrocytes relative to the responses of neurons or astrocytes from the WT mice. Importantly, an activator of ALDH2 and ALDH2*2, Alda-1, blunted the ethanol-induced increases in Aβ, and the neuroinflammation in vitro and in vivo. These data indicate that impairment in the metabolism of aldehydes, and specifically ethanol-derived acetaldehyde, is a contributor to AD associated pathology and highlights the likely risk of alcohol consumption in the general population and especially in East Asians that carry ALDH2*2 mutation.",TRUE,noun
R112127,Multiagent Systems,R138297,A Multi-Agent System for the management of E-Government Services,S548010,R138299,has Target users,R138235,Citizens,"This paper aims at studying the exploitation of intelligent agents for supporting citizens to access e-government services. To this purpose, it proposes a multi-agent system capable of suggesting to the users the most interesting services for them; specifically, these suggestions are computed by taking into account both their exigencies/preferences and the capabilities of the devices they are currently exploiting. The paper first describes the proposed system and, then, reports various experimental results. Finally, it presents a comparison between our system and other related ones already presented in the literature.",TRUE,noun
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S594906,R148380,keywords,L413547,apoferritin,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,noun
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S594904,R148380,keywords,L413545,biotemplates,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,noun
R279,Nanoscience and Nanotechnology,R161508,Fabrication of a SnO2 Nanowire Gas Sensor and Sensor Performance for Hydrogen,S644983,R161510,keywords,L440649,Electrodes,SnO2 nanowire gas sensors have been fabricated on Cd−Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.,TRUE,noun
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S595001,R148380,Method of nanomaterial synthesis,L413618,Electrospinning,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,noun
R279,Nanoscience and Nanotechnology,R151376,ZnO/Cu Nanocomposite: A Platform for Direct Electrochemistry of Enzymes and Biosensing Applications,S607277,R151378,Type of Biosensor,L419905,Enzymes,"Unique structured nanomaterials can facilitate the direct electron transfer between redox proteins and the electrodes. Here, in situ directed growth on an electrode of a ZnO/Cu nanocomposite was prepared by a simple corrosion approach, which enables robust mechanical adhesion and electrical contact between the nanostructured ZnO and the electrodes. This is of great help to realize the direct electron transfer between the electrode surface and the redox protein. SEM images demonstrate that the morphology of the ZnO/Cu nanocomposite has a large specific surface area, which is favorable to immobilize the biomolecules and construct biosensors. Using glucose oxidase (GOx) as a model, this ZnO/Cu nanocomposite is employed for immobilization of GOx and the construction of the glucose biosensor. Direct electron transfer of GOx is achieved at ZnO/Cu nanocomposite with a high heterogeneous electron transfer rate constant of 0.67 ± 0.06 s(-1). Such ZnO/Cu nanocomposite provides a good matrix for direct electrochemistry of enzymes and mediator-free enzymatic biosensors.",TRUE,noun
R279,Nanoscience and Nanotechnology,R151352,Enzymatic glucose biosensor based on ZnO nanorod array grown by hydrothermal decomposition,S607155,R151354,Type of Biosensor,L419825,Glucose,"We report herein a glucose biosensor based on glucose oxidase (GOx) immobilized on ZnO nanorod array grown by hydrothermal decomposition. In a phosphate buffer solution with a pH value of 7.4, negatively charged GOx was immobilized on positively charged ZnO nanorods through electrostatic interaction. At an applied potential of +0.8 V versus Ag∕AgCl reference electrode, ZnO nanorods based biosensor presented a high and reproducible sensitivity of 23.1 μA cm−2 mM−1 with a response time of less than 5 s. The biosensor shows a linear range from 0.01 to 3.45 mM and an experiment limit of detection of 0.01 mM. An apparent Michaelis-Menten constant of 2.9 mM shows a high affinity between glucose and GOx immobilized on ZnO nanorods.",TRUE,noun
R279,Nanoscience and Nanotechnology,R151357,Zinc oxide nanocomb biosensor for glucose detection,S607191,R151359,Type of Biosensor,L419845,Glucose,Single crystal zinc oxide nanocombs were synthesized in bulk quantity by vapor phase transport. A glucose biosensor was constructed using these nanocombs as supporting materials for glucose oxidase (GOx) loading. The zinc oxide nanocomb glucose biosensor showed a high sensitivity (15.33μA∕cm2mM) for glucose detection and high affinity of GOx to glucose (the apparent Michaelis-Menten constant KMapp=2.19mM). The detection limit measured was 0.02mM. These results demonstrate that zinc oxide nanostructures have potential applications in biosensors.,TRUE,noun
R279,Nanoscience and Nanotechnology,R151360,ZnO Nanotube Arrays as Biosensors for Glucose,S607198,R151362,Type of Biosensor,L419850,Glucose,"Highly oriented single-crystal ZnO nanotube (ZNT) arrays were prepared by a two-step electrochemical/chemical process on indium-doped tin oxide (ITO) coated glass in an aqueous solution. The prepared ZNT arrays were further used as a working electrode to fabricate an enzyme-based glucose biosensor through immobilizing glucose oxidase in conjunction with a Nafion coating. The present ZNT arrays-based biosensor exhibits high sensitivity of 30.85 μA cm−2 mM−1 at an applied potential of +0.8 V vs. SCE, wide linear calibration ranges from 10 μM to 4.2 mM, and a low limit of detection (LOD) at 10 μM (measured) for sensing of glucose. The apparent Michaelis−Menten constant KMapp was calculated to be 2.59 mM, indicating a higher bioactivity for the biosensor.",TRUE,noun
R279,Nanoscience and Nanotechnology,R151382,A novel amperometric biosensor based on ZnO nanoparticles-modified carbon paste electrode for determination of glucose in human serum,S607302,R151384,Type of Biosensor,L419922,Glucose,"Abstract Zinc oxide nanoparticles-(ZnONPs)modified carbon paste enzyme electrodes (ZnONPsMCPE) were developed for determination of glucose. The determination of glucose was carried out by oxidation of H2O2 at +0.4 V. ZnONPsMCPE provided biocompatible microenvironment for GOx and necessary pathway of electron transfer between GOx and electrode. The response of GOx/ZnONPsMCPE was proportional to glucose concentration and detection limit was 9.1 × 10–3 mM. Km and Imax, were calculated as 0.124 mM and 2.033 μA. The developed biosensor exhibits high analytical performance with wide linear range (9.1 × 10–3–14.5 mM), selectivity and reproducibility. Serum glucose results allow us to ascertain practical utility of GOx/ZnONPsMCPE biosensor.",TRUE,noun
R279,Nanoscience and Nanotechnology,R143695,Electrically conductive thermoplastic elastomer nanocomposites at ultralow graphene loading levels for strain sensor applications,S575022,R143697,keywords,L402778,Graphene,"An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min−1 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min−1) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.",TRUE,noun
R279,Nanoscience and Nanotechnology,R143705,A highly stretchable and sensitive strain sensor based on graphene–elastomer composites with a novel double-interconnected network,S575096,R143707,keywords,L402839,Graphene ,"The construction of a continuous conductive network with a low percolation threshold plays a key role in fabricating a high performance strain sensor. Herein, a highly stretchable and sensitive strain sensor based on binary rubber blend/graphene was fabricated by a simple and effective assembly approach. A novel double-interconnected network composed of compactly continuous graphene conductive networks was designed and constructed using the composites, thereby resulting in an ultralow percolation threshold of 0.3 vol%, approximately 12-fold lower than that of the conventional graphene-based composites with a homogeneously dispersed morphology (4.0 vol%). Near the percolation threshold, the sensors could be stretched in excess of 100% applied strain, and exhibited a high stretchability, sensitivity (gauge factor ∼82.5) and good reproducibility (∼300 cycles) of up to 100% strain under cyclic tensile tests. The proposed strategy provides a novel effective approach for constructing a double-interconnected conductive network using polymer composites, and is very competitive for developing and designing high performance strain sensors.",TRUE,noun
R279,Nanoscience and Nanotechnology,R161508,Fabrication of a SnO2 Nanowire Gas Sensor and Sensor Performance for Hydrogen,S644980,R161510,keywords,L440646,Hydrogen,SnO2 nanowire gas sensors have been fabricated on Cd−Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.,TRUE,noun
R279,Nanoscience and Nanotechnology,R143734,Ni and NiO Nanoparticles Decorated Metal–Organic Framework Nanosheets: Facile Synthesis and High-Performance Nonenzymatic Glucose Detection in Human Serum,S575794,R143738,keywords,L403350,Metal-organic,"Ni-MOF (metal-organic framework)/Ni/NiO/carbon frame nanocomposite was formed by combining Ni and NiO nanoparticles and a C frame with Ni-MOF using an efficient one-step calcination method. The morphology and structure of Ni-MOF/Ni/NiO/C nanocomposite were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and energy dispersive spectroscopy (EDS) mapping. Ni-MOF/Ni/NiO/C nanocomposites were immobilized onto glassy carbon electrodes (GCEs) with Nafion film to construct high-performance nonenzymatic glucose and H2O2 electrochemical sensors. Cyclic voltammetric (CV) study showed Ni-MOF/Ni/NiO/C nanocomposite displayed better electrocatalytic activity toward glucose oxidation as compared to Ni-MOF. Amperometric study indicated the glucose sensor displayed high performance, offering a low detection limit (0.8 μM), a high sensitivity of 367.45 mA M-1 cm-2, and a wide linear range (from 4 to 5664 μM). Importantly, good reproducibility, long-time stability, and excellent selectivity were obtained within the as-fabricated glucose sensor. Furthermore, the constructed high-performance sensor was utilized to monitor the glucose levels in human serum, and satisfactory results were obtained. It demonstrated the Ni-MOF/Ni/NiO/C nanocomposite can be used as a good electrochemical sensing material in practical biological applications.",TRUE,noun
R279,Nanoscience and Nanotechnology,R135569,A Highly Sensitive and Flexible Capacitive Pressure Sensor Based on a Porous Three-Dimensional PDMS/Microsphere Composite,S536346,R135573,keywords,R135599,Microspheres,"In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa−1 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam’s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.",TRUE,noun
R279,Nanoscience and Nanotechnology,R143695,Electrically conductive thermoplastic elastomer nanocomposites at ultralow graphene loading levels for strain sensor applications,S575021,R143697,keywords,L402777,Nanocomposites,"An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min−1 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min−1) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.",TRUE,noun
R279,Nanoscience and Nanotechnology,R161508,Fabrication of a SnO2 Nanowire Gas Sensor and Sensor Performance for Hydrogen,S644982,R161510,keywords,L440648,Nanowires,SnO2 nanowire gas sensors have been fabricated on Cd−Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.,TRUE,noun
R279,Nanoscience and Nanotechnology,R161508,Fabrication of a SnO2 Nanowire Gas Sensor and Sensor Performance for Hydrogen,S644981,R161510,keywords,L440647,Sensors,SnO2 nanowire gas sensors have been fabricated on Cd−Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.,TRUE,noun
R279,Nanoscience and Nanotechnology,R110342,Structure and optical properties of TiO2 thin films deposited by ALD method,S502834,R110344,substrate,R110346,Silicon,Abstract This paper presents the results of study on titanium dioxide thin films prepared by atomic layer deposition method on a silicon substrate. The changes of surface morphology have been observed in topographic images performed with the atomic force microscope (AFM) and scanning electron microscope (SEM). Obtained roughness parameters have been calculated with XEI Park Systems software. Qualitative studies of chemical composition were also performed using the energy dispersive spectrometer (EDS). The structure of titanium dioxide was investigated by X-ray crystallography. A variety of crystalline TiO2 was also confirmed by using the Raman spectrometer. The optical reflection spectra have been measured with UV-Vis spectrophotometry.,TRUE,noun
R279,Nanoscience and Nanotechnology,R155402,Strain‐Induced Band‐Gap Tuning of 2D‐SnSSe Flakes for Application in Flexible Sensors,S624118,R155405,Material,L429600,SnSSe,"Flexible strain‐sensitive‐material‐based sensors are desired owing to their widespread applications in intelligent robots, health monitoring, human motion detection, and other fields. High electrical–mechanical coupling behaviors of 2D materials make them one of the most promising candidates for miniaturized, integrated, and high‐resolution strain sensors, motivating to explore the influence of strain‐induced band‐gap changes on electrical properties of more materials and assess their potential application in strain sensors. Herein, a ternary SnSSe alloy nanosheet‐based strain sensor is reported showing an enhanced gauge factor (GF) up to 69.7 and a good reproducibility and linearity within strain of 0.9%. Such sensor holds high‐sensitive features under low strain, and demonstrates an improved sensitivity with a decrease in the membrane thickness. The high sensitivity is attributed to widening band gap and density of states reduction induced by strain, as verified by theoretical model and first‐principles calculations. These findings show that a sensor with adjustable strain sensitivity might be realized by simply changing the elemental constituents of 2D alloying materials.",TRUE,noun
R279,Nanoscience and Nanotechnology,R161623,"Stretchable, Transparent, Ultrasensitive, and Patchable Strain Sensor for Human–Machine Interfaces Comprising a Nanohybrid of Carbon Nanotubes and Conductive Elastomers",S645392,R161625,Device Location,L440873,Face,"UNLABELLED Interactivity between humans and smart systems, including wearable, body-attachable, or implantable platforms, can be enhanced by realization of multifunctional human-machine interfaces, where a variety of sensors collect information about the surrounding environment, intentions, or physiological conditions of the human to which they are attached. Here, we describe a stretchable, transparent, ultrasensitive, and patchable strain sensor that is made of a novel sandwich-like stacked piezoresistive nanohybrid film of single-wall carbon nanotubes (SWCNTs) and a conductive elastomeric composite of polyurethane (PU)-poly(3,4-ethylenedioxythiophene) polystyrenesulfonate (PEDOT:PSS). This sensor, which can detect small strains on human skin, was created using environmentally benign water-based solution processing. We attributed the tunability of strain sensitivity (i.e., gauge factor), stability, and optical transparency to enhanced formation of percolating networks between conductive SWCNTs and PEDOT phases at interfaces in the stacked PU-PEDOT:PSS/SWCNT/PU-PEDOT:PSS structure. The mechanical stability, high stretchability of up to 100%, optical transparency of 62%, and gauge factor of 62 suggested that when attached to the skin of the face, this sensor would be able to detect small strains induced by emotional expressions such as laughing and crying, as well as eye movement, and we confirmed this experimentally.",TRUE,noun
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S653514,R163658,Concept types,R70143,drug,"One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,noun
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S660242,R165689,Entity types,R163206,Drug,"One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,noun
R145261,Natural Language Processing,R150009,Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank,S601330,R150011,Evaluation metrics,R142103,Accuracy,"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases.",TRUE,noun
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S686944,R172093,Relation types,R172077,Activator,"Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. 
A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labeled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and make the guidelines more complete. The manual annotation task basically consisted of manually labeling the interactions through a customized BRAT web interface, given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein” (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical–biology information. 
We reviewed DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”, ...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”), and others partially overlapping between them (e.g. “Binder” and “Ligand”), which were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF and PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set so that team predictions could also be obtained for these records. Table 1 shows a su",TRUE,noun
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S686946,R172093,Relation types,R172079,Agonist,"Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-gene/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote the development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins and miRNAs. 
A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks, or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems for the extraction of relations between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact on the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labeled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifiers, to avoid the usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. To facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules, described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and make the guidelines more complete. The manual annotation task basically consisted of manually labeling the interactions through a customized BRAT web interface, given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein” (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical–biology information. 
We reviewed DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”, ...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”), and others partially overlapping between them (e.g. “Binder” and “Ligand”), which were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF and PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set so that team predictions could also be obtained for these records. Table 1 shows a su",TRUE,noun
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S686947,R172093,Relation types,R172078,Antagonist,"Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-gene/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote the development of novel tools and offer a comparative evaluation scenario, we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins and miRNAs. 
A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks, or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems for the extraction of relations between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact on the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty of directly exploiting the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labeled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated all abstracts by hand using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifiers, to avoid the usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards, the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. To facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules, described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines, some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and make the guidelines more complete. The manual annotation task basically consisted of manually labeling the interactions through a customized BRAT web interface, given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein” (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical–biology information. 
We reviewed DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”, ...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”), and others partially overlapping between them (e.g. “Binder” and “Ligand”), which were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF and PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set so that team predictions could also be obtained for these records. Table 1 shows a su",TRUE,noun
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S653032,R163597,Concept types,R163604,Bacteria,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S659889,R165603,Entity types,R163604,Bacteria,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun
R145261,Natural Language Processing,R164551,BioNLP shared Task 2013 – An Overview of the Bacteria Biotope Task,S659859,R165593,Entity types,R163604,Bacteria,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2013, which follows BioNLP-ST-11. The Bacteria Biotope task aims to extract the location of bacteria from scientific web pages and to characterize these locations with respect to the OntoBiotope ontology. Bacteria locations are crucial knowledge in biology for phenotype studies. The paper details the corpus specifications, the evaluation metrics, and it summarizes and discusses the participant results.",TRUE,noun
R145261,Natural Language Processing,R141057,Overview of the Epigenetics and Post-translational Modifications (EPI) task of BioNLP Shared Task 2011,S651221,R141059,Event / Relation Types,R163316,Catalysis,"This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.",TRUE,noun
R145261,Natural Language Processing,R141057,Overview of the Epigenetics and Post-translational Modifications (EPI) task of BioNLP Shared Task 2011,S660835,R165801,Event types,R163316,Catalysis,"This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.",TRUE,noun
R145261,Natural Language Processing,R162901,ClearTK-TimeML: A minimalist approach to TempEval 2013,S649713,R162903,Tool name,R162906,ClearTK-TimeML,"The ClearTK-TimeML submission to TempEval 2013 competed in all English tasks: identifying events, identifying times, and identifying temporal relations. The system is a pipeline of machine-learning models, each with a small set of features from a simple morpho-syntactic annotation pipeline, and where temporal relations are only predicted for a small set of syntactic constructions and relation types. ClearTK-TimeML ranked 1st for temporal relation F1, time extent strict F1 and event tense accuracy.",TRUE,noun
R145261,Natural Language Processing,R163747,CrossNER: Evaluating Cross-Domain Named Entity Recognition,S653882,R163749,Dataset name,R163750,CrossNER,"Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.",TRUE,noun
R145261,Natural Language Processing,R146357,The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources,S585994,R146359,Concept types,R146373,Data,"We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.",TRUE,noun
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S588003,R146855,Concept types,R146859,Dataset,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,noun
R145261,Natural Language Processing,R146872,"Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction",S591776,R146874,Concept types,R147517,Dataset,"While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.",TRUE,noun
R145261,Natural Language Processing,R147638,Identifying used methods and datasets in scientific publications,S592304,R147640,Concept types,R147641,Dataset,"Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an “h-index for datasets”) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication’s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications’ metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task).",TRUE,noun
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686691,R172005,Coarse-grained Entity type,R166118,Disease,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun
R145261,Natural Language Processing,R171917,Web services-based text-mining demonstrates broad impacts for interoperability and process simplification,S686429,R171919,Coarse-grained Entity type,R166118,Disease,"The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. 
CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/",TRUE,noun
R145261,Natural Language Processing,R163224,An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature,S650998,R163226,Concept types,R150515,Disease,"The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html",TRUE,noun
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686712,R172005,Number of development data mentions,R172009,Disease,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686716,R172005,Number of test data mentions,R172011,Disease,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686718,R172005,Number of training data mentions,R172013,Disease,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S648204,R162476,Concept types,R162481,diseases,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun
R145261,Natural Language Processing,R147113,MS MARCO: A Human Generated MAchine Reading COmprehension Dataset,S589274,R147115,Type of knowledge source,L410137,Documents,"This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality. We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.",TRUE,noun
R145261,Natural Language Processing,R147129,A Hierarchical Attention Retrieval Model for Healthcare Question Answering,S589386,R147131,Type of knowledge source,L410221,Documents,"The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledgebases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.",TRUE,noun
R145261,Natural Language Processing,R146081,Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers,S585003,R146083,Concept types,R146085,Domain,"We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article’s",TRUE,noun
R145261,Natural Language Processing,R146357,The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources,S586061,R146379,Data domains,R194,Engineering,"We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.",TRUE,noun
R145261,Natural Language Processing,R76157,SemEval-2020 Task 3: Graded Word Similarity in Context,S351313,R76291,Language,R6219,English,"This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.",TRUE,noun
R145261,Natural Language Processing,R147129,A Hierarchical Attention Retrieval Model for Healthcare Question Answering,S589384,R147131,Question Types,L410219,Factoid,"The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledgebases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.",TRUE,noun
R145261,Natural Language Processing,R154297,Question Answering Benchmarks for Wikidata,S646113,R154298,Knowledge Base,R161784,Freebase,"Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.",TRUE,noun
R145261,Natural Language Processing,R147992,Large-scale semantic parsing via schema matching and lexicon extension,S593525,R147994,Type of knowledge source,L412748,Freebase,"Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data. We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning. Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm.",TRUE,noun
R145261,Natural Language Processing,R162349,BioCreAtIvE Task 1A: gene mention finding evaluation,S662844,R166331,Coarse-grained Entity type,R148462,Gene,"Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. ""Finding mentions"" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.",TRUE,noun
R145261,Natural Language Processing,R171842,BC4GO: a full-text corpus for the BioCreative IV GO task,S686267,R171844,Coarse-grained Entity type,R148462,Gene,"Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain ∼10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. 
Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.",TRUE,noun
R145261,Natural Language Processing,R171917,Web services-based text-mining demonstrates broad impacts for interoperability and process simplification,S686428,R171919,Coarse-grained Entity type,R164659,Gene,"The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer /BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. 
CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions of a second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/",TRUE,noun
R145261,Natural Language Processing,R162349,BioCreAtIvE Task 1A: gene mention finding evaluation,S647539,R162350,Concept types,R148042,Gene,"Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. ""Finding mentions"" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.",TRUE,noun
R145261,Natural Language Processing,R162426,BioCreative V BioC track overview: collaborative biocurator assistant task for BioGRID,S648120,R162428,Concept types,R162454,gene,"BioC is a simple XML format for text, annotations and relations, and was developed to achieve interoperability for biomedical text processing. Following the success of BioC in BioCreative IV, the BioCreative V BioC track addressed a collaborative task to build an assistant system for BioGRID curation. In this paper, we describe the framework of the collaborative BioC task and discuss our findings based on the user survey. This track consisted of eight subtasks including gene/protein/organism named entity recognition, protein–protein/genetic interaction passage identification and annotation visualization. Using BioC as their data-sharing and communication medium, nine teams, world-wide, participated and contributed either new methods or improvements of existing tools to address different subtasks of the BioC track. Results from different teams were shared in BioC and made available to other teams as they addressed different subtasks of the track. In the end, all submitted runs were merged using a machine learning classifier to produce an optimized output. The biocurator assistant system was evaluated by four BioGRID curators in terms of practical usability. The curators’ feedback was overall positive and highlighted the user-friendly design and the convenient gene/protein curation tool based on text mining. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-1-bioc/",TRUE,noun
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S648160,R162459,Concept types,R162454,gene,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthew’s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity",TRUE,noun
R145261,Natural Language Processing,R163666,An Overview of the Active Gene Annotation Corpus and the BioNLP OST 2019 AGAC Track Tasks,S653580,R163668,Concept types,R148462,Gene,"The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized, to facilitate cross-disciplinary collaboration across BioNLP and Pharmacoinformatics communities, for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. Five teams participated in the AGAC track; their performance is compared and analyzed. The results revealed substantial room for improvement in the design of the task, which we analyzed in terms of “imbalanced data”, “selective annotation” and “latent topic annotation”.",TRUE,noun
R145261,Natural Language Processing,R163319,Overview of the Infectious Diseases (ID) task of BioNLP Shared Task 2011,S656619,R164438,Entity types,R148042,Gene,"This paper presents the preparation, resources, results and analysis of the Infectious Diseases (ID) information extraction task, a main task of the BioNLP Shared Task 2011. The ID task represents an application and extension of the BioNLP'09 shared task event extraction approach to full papers on infectious diseases. Seven teams submitted final results to the task, with the highest-performing system achieving 56% F-score in the full task, comparable to state-of-the-art performance in the established BioNLP'09 task. The results indicate that event extraction methods generalize well to new domains and full-text publications and are applicable to the extraction of events relevant to the molecular mechanisms of infectious diseases.",TRUE,noun
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S660243,R165689,Entity types,R164659,Gene,"Among the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in sub-track 1 (77 system runs), and 7 teams in sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,noun
R145261,Natural Language Processing,R163666,An Overview of the Active Gene Annotation Corpus and the BioNLP OST 2019 AGAC Track Tasks,S660266,R165696,Entity types,R164659,Gene,"The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized, to facilitate cross-disciplinary collaboration across BioNLP and Pharmacoinformatics communities, for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. Five teams participated in the AGAC track; their performance is compared and analyzed. The results revealed substantial room for improvement in the design of the task, which we analyzed in terms of “imbalanced data”, “selective annotation” and “latent topic annotation”.",TRUE,noun
R145261,Natural Language Processing,R164531,BioNLP Shared Task 2013 – An overview of the Genic Regulation Network Task,S656980,R164533,Entity types,R148462,Gene,The goal of the Genic Regulation Network task (GRN) is to extract a regulation network that links and integrates a variety of molecular interactions between genes and proteins of the well-studied model bacterium Bacillus subtilis. It is an extension of the BI task of BioNLP-ST’11. The corpus is composed of sentences selected from publicly available PubMed scientific,TRUE,noun
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S652182,R163408,Event / Relation Types,R163436,General,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,noun
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S660994,R165824,Event types,R163436,General,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,noun
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S653033,R163597,Concept types,R163605,Habitat,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S659887,R165604,Entity types,R163605,Habitat,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun
R145261,Natural Language Processing,R162924,HeidelTime: High Quality Rule-Based Extraction and Normalization of Temporal Expressions,S649913,R162926,Tool name,R162967,HeidelTime,"In this paper, we describe HeidelTime, a system for the extraction and normalization of temporal expressions. HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization. In the TempEval-2 challenge, HeidelTime achieved the highest F-Score (86%) for the extraction and the best results in assigning the correct value attribute, i.e., in understanding the semantics of the temporal expressions.",TRUE,noun
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S686945,R172093,Relation types,R172076,Inhibitor,"Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching f-measures of over 0.92 for some relation types (antagonist) and f-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs.
A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts.
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents.
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT web interface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein"" (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical – biology information.
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and the INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”,...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”) and others, partially overlapping between them (e.g. “Binder” and “Ligand”), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from the point of view of biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency.
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF or PART-OF. The DrugProt corpus was split randomly into training, development and test set. We also included a background and large scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su",TRUE,noun
R145261,Natural Language Processing,R165975,Named Entity Recognition for Astronomy Literature,S661497,R165977,Fine-grained Entity types,R165990,ion,"We present a system for named entity recognition (NER) in astronomy journal articles. We have developed this system on an NE corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with ∼40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.",TRUE,noun
R145261,Natural Language Processing,R165926,Transforming Wikipedia into Named Entity Training Data,S661469,R165928,Coarse-grained Entity types,R157010,Location,"Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia’s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Compared to MUC, CONLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs.",TRUE,noun
R145261,Natural Language Processing,R69282,SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications,S583695,R69283,Concept types,R145754,Material,"We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.",TRUE,noun
R145261,Natural Language Processing,R163224,An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature,S651013,R163226,Other resources,R163263,MedDRA,"The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html",TRUE,noun
R145261,Natural Language Processing,R146357,The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources,S586064,R146379,Data domains,R38525,Medicine,"We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.",TRUE,noun
R145261,Natural Language Processing,R163656,"PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",S653540,R163658,Data domains,R136133,Medicine,"One of the biomedical entity types of relevance for medicine or biosciences are chemical compounds and drugs. The correct detection of these entities is critical for other text mining applications building on them, such as adverse drug-reaction detection, medication-related fake news or drug-target extraction. Although a significant effort was made to detect mentions of drugs/chemicals in English texts, so far only very limited attempts were made to recognize them in medical documents in other languages. Taking into account the growing amount of medical publications and clinical records written in Spanish, we have organized the first shared task on detecting drug and chemical entities in Spanish medical documents. Additionally, we included a clinical concept-indexing sub-track asking teams to return SNOMED-CT identifiers related to drugs/chemicals for a collection of documents. For this task, named PharmaCoNER, we generated annotation guidelines together with a corpus of 1,000 manually annotated clinical case studies. A total of 22 teams participated in the sub-track 1, (77 system runs), and 7 teams in the sub-track 2 (19 system runs). Top scoring teams used sophisticated deep learning approaches yielding very competitive results with F-measures above 0.91. These results indicate that there is a real interest in promoting biomedical text mining efforts beyond English. We foresee that the PharmaCoNER annotation guidelines, corpus and participant systems will foster the development of new resources for clinical and biomedical text mining systems of Spanish medical data.",TRUE,noun
R145261,Natural Language Processing,R163224,An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature,S651012,R163226,Other resources,R145007,MeSH,"The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html",TRUE,noun
R145261,Natural Language Processing,R147638,Identifying used methods and datasets in scientific publications,S592305,R147640,Concept types,R147642,Method,"Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an “h-index for datasets”) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication’s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications’ metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task).",TRUE,noun
R145261,Natural Language Processing,R146872,"Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction",S591777,R146874,Concept types,R147518,Metric,"While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.",TRUE,noun
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S653690,R163704,Data domains,R52,Microbiology,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S653696,R163704,Concept types,R163711,Microorganism,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S660338,R165701,Entity types,R163711,Microorganism,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S652181,R163408,Event / Relation Types,R163435,Molecular,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,noun
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S660995,R165824,Event types,R163435,Molecular,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662504,R166336,Coarse-grained Entity types,R166339,Mouse,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662538,R166336,Number of development documents,R166358,Mouse,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662515,R166336,Number of identifiers,R166345,Mouse,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662518,R166336,Number of mentions,R166348,Mouse,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662535,R166336,Number of test documents,R166355,Mouse,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662532,R166336,Number of training documents,R166352,Mouse,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166178,Exploiting Wikipedia as external knowledge for named entity recognition,S661876,R166180,Fine-grained Entity types,R166193,Name,"We explore the use of Wikipedia as external knowledge to improve named entity recognition (NER). Our method retrieves the corresponding Wikipedia entry for each candidate word sequence and extracts a category label from the first sentence of the entry, which can be thought of as a definition part. These category labels are used as features in a CRF-based NE tagger. We demonstrate using the CoNLL 2003 dataset that the Wikipedia category labels extracted by such a simple method actually improve the accuracy of NER.",TRUE,noun
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S652047,R163408,Concept types,R163412,Organism,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,noun
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S656896,R164522,Entity types,R164443,Organism,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an Fscore of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,noun
R145261,Natural Language Processing,R146670,ParsCit: an Open-source CRF Reference String Parsing Package,S629357,R146672,model,R157003,ParsCit,"We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",TRUE,noun
R145261,Natural Language Processing,R163499,Overview of the Pathway Curation (PC) task of BioNLP Shared Task 2013,S652841,R163501,Event / Relation Types,R163538,Pathway,"We present the Pathway Curation (PC) task, a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved Fscore, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. The PC task continues as an open challenge with data, resources and tools available from http://2013.bionlp-st.org/",TRUE,noun
R145261,Natural Language Processing,R163499,Overview of the Pathway Curation (PC) task of BioNLP Shared Task 2013,S661063,R165834,Event types,R163538,Pathway,"We present the Pathway Curation (PC) task, a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved Fscore, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. The PC task continues as an open challenge with data, resources and tools available from http://2013.bionlp-st.org/",TRUE,noun
R145261,Natural Language Processing,R165926,Transforming Wikipedia into Named Entity Training Data,S661471,R165928,Coarse-grained Entity types,R162711,Person,"Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia’s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Comparing to MUC, CONLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs.",TRUE,noun
R145261,Natural Language Processing,R165926,Transforming Wikipedia into Named Entity Training Data,S661417,R165928,Fine-grained Entity types,R162711,Person,"Statistical named entity recognisers require costly hand-labelled training data and, as a result, most existing corpora are small. We exploit Wikipedia to create a massive corpus of named entity annotated text. We transform Wikipedia’s links into named entity annotations by classifying the target articles into common entity types (e.g. person, organisation and location). Comparing to MUC, CONLL and BBN corpora, Wikipedia generally performs better than other cross-corpus train/test pairs.",TRUE,noun
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S653699,R163704,Concept types,R163712,Phenotype,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S660340,R165701,Entity types,R163712,Phenotype,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun
R145261,Natural Language Processing,R162349,BioCreAtIvE Task 1A: gene mention finding evaluation,S647547,R162350,Evaluation metrics,R141527,Precision,"Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. ""Finding mentions"" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.",TRUE,noun
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S648169,R162459,Evaluation metrics,R141527,Precision,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthew’s correlation coefficient (MCC) had a score of 0.88.",TRUE,noun
R145261,Natural Language Processing,R162546,Overview of the BioCreative VI Precision Medicine Track: mining protein interactions and mutations for precision medicine,S648585,R162553,Evaluation metrics,R141527,Precision,"Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein–protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. 
Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.",TRUE,noun
R145261,Natural Language Processing,R69282,SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications,S583693,R69283,Concept types,R145752,Process,"We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.",TRUE,noun
R145261,Natural Language Processing,R162349,BioCreAtIvE Task 1A: gene mention finding evaluation,S662845,R166331,Coarse-grained Entity type,R164660,Protein,"Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. ""Finding mentions"" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.",TRUE,noun
R145261,Natural Language Processing,R141057,Overview of the Epigenetics and Post-translational Modifications (EPI) task of BioNLP Shared Task 2011,S651227,R141059,Concept types,R148579,Protein,"This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.",TRUE,noun
R145261,Natural Language Processing,R162426,BioCreative V BioC track overview: collaborative biocurator assistant task for BioGRID,S648121,R162428,Concept types,R147576,protein,"BioC is a simple XML format for text, annotations and relations, and was developed to achieve interoperability for biomedical text processing. Following the success of BioC in BioCreative IV, the BioCreative V BioC track addressed a collaborative task to build an assistant system for BioGRID curation. In this paper, we describe the framework of the collaborative BioC task and discuss our findings based on the user survey. This track consisted of eight subtasks including gene/protein/organism named entity recognition, protein–protein/genetic interaction passage identification and annotation visualization. Using BioC as their data-sharing and communication medium, nine teams, world-wide, participated and contributed either new methods or improvements of existing tools to address different subtasks of the BioC track. Results from different teams were shared in BioC and made available to other teams as they addressed different subtasks of the track. In the end, all submitted runs were merged using a machine learning classifier to produce an optimized output. The biocurator assistant system was evaluated by four BioGRID curators in terms of practical usability. The curators’ feedback was overall positive and highlighted the user-friendly design and the convenient gene/protein curation tool based on text mining. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-1-bioc/",TRUE,noun
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S648161,R162459,Concept types,R147576,protein,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthew’s correlation coefficient (MCC) had a score of 0.88.",TRUE,noun
R145261,Natural Language Processing,R141057,Overview of the Epigenetics and Post-translational Modifications (EPI) task of BioNLP Shared Task 2011,S656559,R164428,Entity types,R148579,Protein,"This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.",TRUE,noun
R145261,Natural Language Processing,R164531,BioNLP Shared Task 2013 – An overview of the Genic Regulation Network Task,S656989,R164533,Entity types,R148579,Protein,The goal of the Genic Regulation Network task (GRN) is to extract a regulation network that links and integrates a variety of molecular interactions between genes and proteins of the well-studied model bacterium Bacillus subtilis. It is an extension of the BI task of BioNLP-ST’11. The corpus is composed of sentences selected from publicly available PubMed scientific articles,TRUE,noun
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S662803,R166359,Number of test data mentions,R166417,Protein,"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage . The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. 
The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S662816,R166394,Number of test data mentions,R166421,Proteins,"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage . The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. 
The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S662815,R166394,Number of training data mentions,R166413,Proteins,"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage . The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. 
The curators evaluated in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task, were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun
R145261,Natural Language Processing,R182418,SPECTER: Document-level Representation Learning using Citation-informed Transformers,S705865,R182420,Has evaluation,L476027,Recommendation,"Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.",TRUE,noun
R145261,Natural Language Processing,R145757,SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers,S583713,R145759,Relation types,R145761,Result,"This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",TRUE,noun
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S686951,R172093,Relation types,R172082,Substrate,"Considering recent progress in NLP, deep learning techniques and biomedical language models there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-genes/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2,3 million records. Teams obtained very competitive results, with predictions reaching fmeasures of over 0.92 for some relation types (antagonist) and fmeasures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. 
A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to understand better adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems large manually annotated corpora are needed. Those corpora should include high-quality manually labled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biologic al database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non normalizable mentions (GENE-N). Teams that participated at the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), were relationships between these two types of entities had to be labeled according to public available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33 pages annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemicalprotein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear and additional rules were added wherever necessary to better cover the practical annotation scenario and for being more complete. The manual annotation task basically consisted of labeling or marking manually through a customized BRAT webinterface the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kind of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein"" (chemical → gene/protein direction) were annotated, and not vice versa. To establish a easy to understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical – biology information. 
We revised DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”,...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”) and others, partially overlapping between them (e.g. “Binder” and “Ligand”), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim of covering most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF and PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set so that team predictions could also be obtained for these records. Table 1 shows a su",TRUE,noun
R145261,Natural Language Processing,R69282,SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications,S583694,R69283,Concept types,R145753,Task,"We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.",TRUE,noun
R145261,Natural Language Processing,R69288,"Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction",S583773,R69289,Concept types,R145753,Task,"We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.",TRUE,noun
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S588005,R146855,Concept types,R146861,Task,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,noun
R145261,Natural Language Processing,R146872,"Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction",S591775,R146874,Concept types,R147516,Task,"While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.",TRUE,noun
R145261,Natural Language Processing,R163747,CrossNER: Evaluating Cross-Domain Named Entity Recognition,S653952,R163784,Entity types,R163779,Task,"Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.",TRUE,noun
R145261,Natural Language Processing,R146081,Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers,S585005,R146083,Concept types,R146086,Technique,"We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article’s",TRUE,noun
R145261,Natural Language Processing,R147657,Concept-based analysis of scientific literature,S592382,R147659,Concept types,R147660,Technique,"This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques.",TRUE,noun
R145261,Natural Language Processing,R171917,Web services-based text-mining demonstrates broad impacts for interoperability and process simplification,S686446,R171919,Data domains,R171923,Toxicogenomics,"The Critical Assessment of Information Extraction systems in Biology (BioCreAtIvE) challenge evaluation tasks collectively represent a community-wide effort to evaluate a variety of text-mining and information extraction systems applied to the biological domain. The BioCreative IV Workshop included five independent subject areas, including Track 3, which focused on named-entity recognition (NER) for the Comparative Toxicogenomics Database (CTD; http://ctdbase.org). Previously, CTD had organized document ranking and NER-related tasks for the BioCreative Workshop 2012; a key finding of that effort was that interoperability and integration complexity were major impediments to the direct application of the systems to CTD's text-mining pipeline. This underscored a prevailing problem with software integration efforts. Major interoperability-related issues included lack of process modularity, operating system incompatibility, tool configuration complexity and lack of standardization of high-level inter-process communications. One approach to potentially mitigate interoperability and general integration issues is the use of Web services to abstract implementation details; rather than integrating NER tools directly, HTTP-based calls from CTD's asynchronous, batch-oriented text-mining pipeline could be made to remote NER Web services for recognition of specific biological terms using BioC (an emerging family of XML formats) for inter-process communications. To test this concept, participating groups developed Representational State Transfer/BioC-compliant Web services tailored to CTD's NER requirements. Participants were provided with a comprehensive set of training materials. 
CTD evaluated results obtained from the remote Web service-based URLs against a test data set of 510 manually curated scientific articles. Twelve groups participated in the challenge. Recall, precision, balanced F-scores and response times were calculated. Top balanced F-scores for gene, chemical and disease NER were 61, 74 and 51%, respectively. Response times ranged from fractions-of-a-second to over a minute per article. We present a description of the challenge and summary of results, demonstrating how curation groups can effectively use interoperable NER technologies to simplify text-mining pipeline implementation. Database URL: http://ctdbase.org/",TRUE,noun
R145261,Natural Language Processing,R182418,SPECTER: Document-level Representation Learning using Citation-informed Transformers,S705830,R182420,Method,R182421,Transformer,"Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.",TRUE,noun
R145261,Natural Language Processing,R172139,BioCreative VII–Task 3: Automatic Extraction of Medication Names in Tweets,S687090,R172140,Data coverage,R172136,Tweets,"We present the BioCreative VII Task 3 which focuses on drug names extraction from tweets. Recognized to provide unique insights into population health, detecting health related tweets is notoriously challenging for natural language processing tools. Tweets are written about any and all topics, most of them not related to health. Additionally, they are written with little regard for proper grammar, are inherently colloquial, and are almost never proof-read. Given a tweet, task 3 consists of detecting if the tweet has a mention of a drug name and, if so, extracting the span of the drug mention. We made available 182,049 tweets publicly posted by 212 Twitter users with all drugs mentions manually annotated. This corpus exhibits the natural and strongly imbalanced distribution of positive tweets, with only 442 tweets (0.2%) mentioning a drug. This task was an opportunity for participants to evaluate methods robust to class-imbalance beyond the simple lexical match. A total of 65 teams registered, and 16 teams submitted a system run. We summarize the corpus and the tools created for the challenge, which is freely available at https://biocreative.bioinformatics.udel.edu/tasks/biocreativevii/track-3/. We analyze the methods and the results of the competing systems with a focus on learning from class-imbalanced data. Keywords—social media; pharmacovigilance; named entity recognition; drug name extraction; class-imbalance.",TRUE,noun
R145261,Natural Language Processing,R172139,BioCreative VII–Task 3: Automatic Extraction of Medication Names in Tweets,S687091,R172140,data source,R46698,Twitter,"We present the BioCreative VII Task 3 which focuses on drug names extraction from tweets. Recognized to provide unique insights into population health, detecting health related tweets is notoriously challenging for natural language processing tools. Tweets are written about any and all topics, most of them not related to health. Additionally, they are written with little regard for proper grammar, are inherently colloquial, and are almost never proof-read. Given a tweet, task 3 consists of detecting if the tweet has a mention of a drug name and, if so, extracting the span of the drug mention. We made available 182,049 tweets publicly posted by 212 Twitter users with all drugs mentions manually annotated. This corpus exhibits the natural and strongly imbalanced distribution of positive tweets, with only 442 tweets (0.2%) mentioning a drug. This task was an opportunity for participants to evaluate methods robust to class-imbalance beyond the simple lexical match. A total of 65 teams registered, and 16 teams submitted a system run. We summarize the corpus and the tools created for the challenge, which is freely available at https://biocreative.bioinformatics.udel.edu/tasks/biocreativevii/track-3/. We analyze the methods and the results of the competing systems with a focus on learning from class-imbalanced data. Keywords—social media; pharmacovigilance; named entity recognition; drug name extraction; class-imbalance.",TRUE,noun
R145261,Natural Language Processing,R156119,Question Answering Benchmarks for Wikidata,S646112,R156120,Knowledge Base,R161783,Wikidata,"Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.",TRUE,noun
R145261,Natural Language Processing,R163109,WiNER: A Wikipedia Annotated Corpus for Named Entity Recognition,S650344,R163111,Dataset name,R163122,WiNER,"We revisit the idea of mining Wikipedia in order to generate named-entity annotations. We propose a new methodology that we applied to English Wikipedia to build WiNER, a large, high quality, annotated corpus. We evaluate its usefulness on 6 NER tasks, comparing 4 popular state-of-the art approaches. We show that LSTM-CRF is the approach that benefits the most from our corpus. We report impressive gains with this model when using a small portion of WiNER on top of the CONLL training material. Last, we propose a simple but efficient method for exploiting the full range of WiNER, leading to further improvements.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662502,R166336,Coarse-grained Entity types,R166337,Yeast,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662536,R166336,Number of development documents,R166356,Yeast,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662514,R166336,Number of identifiers,R166344,Yeast,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662517,R166336,Number of mentions,R166347,Yeast,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662533,R166336,Number of test documents,R166353,Yeast,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662530,R166336,Number of training documents,R166350,Yeast,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662503,R166336,Coarse-grained Entity types,R166338,Fly,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662537,R166336,Number of development documents,R166357,Fly,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662516,R166336,Number of identifiers,R166346,Fly,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662519,R166336,Number of mentions,R166349,Fly,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662534,R166336,Number of test documents,R166354,Fly,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662531,R166336,Number of training documents,R166351,Fly,"Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,noun
R145261,Natural Language Processing,R146081,Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers,S585001,R146083,Concept types,R146084,Focus,"We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article’s",TRUE,noun
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686693,R172006,data source,R140296,PubMed,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun
R145261,Natural Language Processing,R162546,Overview of the BioCreative VI Precision Medicine Track: mining protein interactions and mutations for precision medicine,S686867,R172064,data source,R140296,PubMed,"Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein–protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. 
Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.",TRUE,noun
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S687039,R172126,data source,R140296,PubMed,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions in total from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,noun
R145261,Natural Language Processing,R162349,BioCreAtIvE Task 1A: gene mention finding evaluation,S647546,R162350,Evaluation metrics,R141528,Recall,"Abstract Background The biological research literature is a major repository of knowledge. As the amount of literature increases, it will get harder to find the information of interest on a particular topic. There has been an increasing amount of work on text mining this literature, but comparing this work is hard because of a lack of standards for making comparisons. To address this, we worked with colleagues at the Protein Design Group, CNB-CSIC, Madrid to develop BioCreAtIvE (Critical Assessment for Information Extraction in Biology), an open common evaluation of systems on a number of biological text mining tasks. We report here on task 1A, which deals with finding mentions of genes and related entities in text. ""Finding mentions"" is a basic task, which can be used as a building block for other text mining tasks. The task makes use of data and evaluation software provided by the (US) National Center for Biotechnology Information (NCBI). Results 15 teams took part in task 1A. A number of teams achieved scores over 80% F-measure (balanced precision and recall). The teams that tried to use their task 1A systems to help on other BioCreAtIvE tasks reported mixed results. Conclusion The 80% plus F-measure results are good, but still somewhat lag the best scores achieved in some other domains such as newswire, due in part to the complexity and length of gene names, compared to person or organization names in newswire.",TRUE,noun
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S648170,R162459,Evaluation metrics,R141528,Recall,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthews correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.",TRUE,noun
R145261,Natural Language Processing,R162546,Overview of the BioCreative VI Precision Medicine Track: mining protein interactions and mutations for precision medicine,S648586,R162553,Evaluation metrics,R141528,Recall,"Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein–protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. 
Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.",TRUE,noun
R145261,Natural Language Processing,R146872,"Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction",S591778,R146874,Concept types,R147519,Score,"While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.",TRUE,noun
R112130,Networking and Internet Architecture,R176011,A Stochastic Approach of Dependency Evaluation for IoT Devices,S696086,R176013,Type of considered dependencies,L468051,Service,"Internet of things (IoT) is an emerging technique that offers advanced connectivity of devices, systems, services, and human beings. With the rapid development of hardware and network technologies, the IoT can refer to a wide variety and large number of devices, resulting in complex relationships among IoT devices. The dependencies among IoT devices, which reflect their relationships, are with reference value for the design, development and management of IoT devices. This paper proposes a stochastic model based approach for evaluating the dependencies of IoT devices. A random walk model is proposed to describe the relationships of IoT devices, and its corresponding Markov chain is obtained for dependency analysis. A framework as well as schemes and algorithms for dependency evaluation in real-world IoT are designed based on traffic measurement. Simulation experiments based on real-life data extracted from smart home environments are conducted to illustrate the efficacy of the approach.",TRUE,noun
R137,Numerical Analysis/Scientific Computing,R38485,"Out of one, many: Exploiting intrinsic motions to explore protein structure spaces",S126266,R38487,Has method,R38491,SoPriM-NMA,"Reconstructing the energy landscape of a protein holds the key to characterizing its structural dynamics and function [1]. While the disparate spatio-temporal scales spanned by the slow dynamics challenge reconstruction in wet and dry laboratories, computational efforts have had recent success on proteins where a wealth of experimentally-known structures can be exploited to extract modes of motion. In [2], the authors propose the SoPriM method that extracts principal components (PCs) and utilizes them as variables of the structure space of interest. Stochastic optimization is employed to sample the structure space and its associated energy landscape in the defined variable space. We refer to this algorithm as SoPriM-PCA and compare it here to SoPriM-NMA, which investigates whether the landscape can be reconstructed with knowledge of modes of motion (normal modes) extracted from one single known structure. Some representative results are shown in Figure 1, where structures obtained by SoPriM-PCA and those obtained by SoPriM-NMA for the H-Ras enzyme are compared via color-coded projections onto the top two variables utilized by each algorithm. The results show that precious information can be obtained on the energy landscape even when one structural model is available. The presented work opens up interesting avenues of research on structure-based inference of dynamics. Acknowledgment: This work is supported in part by NSF Grant No. 1421001 to AS and NSF Grant No. 1440581 to AS and EP. Computations were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA (URL: http://orc.gmu.edu).",TRUE,noun
R137,Numerical Analysis/Scientific Computing,R38485,"Out of one, many: Exploiting intrinsic motions to explore protein structure spaces",S126265,R38487,Has method,R38488,SoPriM-PCA,"Reconstructing the energy landscape of a protein holds the key to characterizing its structural dynamics and function [1]. While the disparate spatio-temporal scales spanned by the slow dynamics challenge reconstruction in wet and dry laboratories, computational efforts have had recent success on proteins where a wealth of experimentally-known structures can be exploited to extract modes of motion. In [2], the authors propose the SoPriM method that extracts principal components (PCs) and utilizes them as variables of the structure space of interest. Stochastic optimization is employed to sample the structure space and its associated energy landscape in the defined variable space. We refer to this algorithm as SoPriM-PCA and compare it here to SoPriM-NMA, which investigates whether the landscape can be reconstructed with knowledge of modes of motion (normal modes) extracted from one single known structure. Some representative results are shown in Figure 1, where structures obtained by SoPriM-PCA and those obtained by SoPriM-NMA for the H-Ras enzyme are compared via color-coded projections onto the top two variables utilized by each algorithm. The results show that precious information can be obtained on the energy landscape even when one structural model is available. The presented work opens up interesting avenues of research on structure-based inference of dynamics. Acknowledgment: This work is supported in part by NSF Grant No. 1421001 to AS and NSF Grant No. 1440581 to AS and EP. Computations were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA (URL: http://orc.gmu.edu).",TRUE,noun
R96,Nutritional Epidemiology,R75685,Dietary patterns and cardiometabolic risk factors among adolescents: systematic review and meta-analysis,S346299,R75687,Age group,R75697,Adolescents,"Abstract This study systematised and synthesised the results of observational studies that were aimed at supporting the association between dietary patterns and cardiometabolic risk (CMR) factors among adolescents. Relevant scientific articles were searched in PUBMED, EMBASE, SCIENCE DIRECT, LILACS, WEB OF SCIENCE and SCOPUS. Observational studies that included the measurement of any CMR factor in healthy adolescents and dietary patterns were included. The search strategy retained nineteen articles for qualitative analysis. Among retained articles, the effects of dietary pattern on the means of BMI (n 18), waist circumference (WC) (n 9), systolic blood pressure (n 7), diastolic blood pressure (n 6), blood glucose (n 5) and lipid profile (n 5) were examined. Systematised evidence showed that an unhealthy dietary pattern appears to be associated with poor mean values of CMR factors among adolescents. However, evidence of a protective effect of healthier dietary patterns in this group remains unclear. Considering the number of studies with available information, a meta-analysis of anthropometric measures showed that dietary patterns characterised by the highest intake of unhealthy foods resulted in a higher mean BMI (0·57 kg/m²; 95 % CI 0·51, 0·63) and WC (0·57 cm; 95 % CI 0·47, 0·67) compared with low intake of unhealthy foods. Controversially, patterns characterised by a low intake of healthy foods were associated with a lower mean BMI (−0·41 kg/m²; 95 % CI −0·46,−0·36) and WC (−0·43 cm; 95 % CI −0·52,−0·33). An unhealthy dietary pattern may influence markers of CMR among adolescents, but considering the small number and limitations of the studies included, further studies are warranted to strengthen the evidence of this relation.",TRUE,noun
R96,Nutritional Epidemiology,R75682,Association between dietary patterns and overweight risk among Malaysian adults: evidence from nationally representative surveys,S346300,R75684,Age group,R75698,Adults,"Abstract Objective: To investigate the association between dietary patterns (DP) and overweight risk in the Malaysian Adult Nutrition Surveys (MANS) of 2003 and 2014. Design: DP were derived from the MANS FFQ using principal component analysis. The cross-sectional association of the derived DP with prevalence of overweight was analysed. Setting: Malaysia. Participants: Nationally representative sample of Malaysian adults from MANS (2003, n 6928; 2014, n 3000). Results: Three major DP were identified for both years. These were ‘Traditional’ (fish, eggs, local cakes), ‘Western’ (fast foods, meat, carbonated beverages) and ‘Mixed’ (ready-to-eat cereals, bread, vegetables). A fourth DP was generated in 2003, ‘Flatbread & Beverages’ (flatbread, creamer, malted beverages), and 2014, ‘Noodles & Meat’ (noodles, meat, eggs). These DP accounted for 25·6 and 26·6 % of DP variations in 2003 and 2014, respectively. For both years, Traditional DP was significantly associated with rural households, lower income, men and Malay ethnicity, while Western DP was associated with younger age and higher income. Mixed DP was positively associated with women and higher income. None of the DP showed positive association with overweight risk, except for reduced adjusted odds of overweight with adherence to Traditional DP in 2003. Conclusions: Overweight could not be attributed to adherence to a single dietary pattern among Malaysian adults. This may be due to the constantly morphing dietary landscape in Malaysia, especially in urban areas, given the ease of availability and relative affordability of multi-ethnic and international foods. Timely surveys are recommended to monitor implications of these changes.",TRUE,noun
R172,Oceanography,R138370,NITROGEN SOURCES FOR NEW PRODUCTION IN THE NE INDIAN OCEAN,S548729,R138372,Sampling depth covered,L385977,Surface,"Productivity measurements were carried out during spring 2007 in the northeastern (NE) Indian Ocean, where light availability is controlled by clouds and surface productivity by nutrient and light availability. New productivity is found to be higher than regenerated productivity at most locations, consistent with the earlier findings from the region. A comparison of the present results with the earlier findings reveals that the region contributes significantly in the sequestration of CO2 from the atmosphere, particularly during spring. Diatom-dominated plankton communities are more efficient than those dominated by other organisms in the uptake of CO2 and its export to the deep. Earlier studies on plankton composition suggest that higher new productivity at most locations could also be due to the dominance of diatoms in the region.",TRUE,noun
R172,Oceanography,R109396,No nitrogen fixation in the Bay of Bengal?,S549171,R138402,Coastal/open ocean,L386348,Open,"Abstract. The Bay of Bengal (BoB) has long stood as a biogeochemical enigma with subsurface waters containing extremely low, but persistent, concentrations of oxygen in the nanomolar range which – for some, yet unconstrained reason – are prevented from becoming anoxic. One reason for this may be the low productivity of the BoB waters due to nutrient limitation, and the resulting lack of respiration of organic material at intermediate waters. Thus, the parameters determining primary production are key to understanding what prevents the BoB from developing anoxia. Primary productivity in the sunlit surface layers of tropical oceans is mostly limited by the supply of reactive nitrogen through upwelling, riverine flux, atmospheric deposition, and biological dinitrogen (N2) fixation. In the BoB, a stable stratification limits nutrient supply via upwelling in the open waters, and riverine or atmospheric fluxes have been shown to support only less than one quarter of the nitrogen for primary production. This leaves a large uncertainty for most of the BoB's nitrogen input, suggesting a potential role of N2 fixation in those waters. Here, we present a survey of N2 fixation and carbon fixation in the BoB during the winter monsoon season. We detected a community of N2 fixers comparable to other OMZ regions, with only a few cyanobacterial clades and a broad diversity of non-phototrophic N2 fixers present throughout the water column (samples collected between 10 m and 560 m water depth). While similar communities of N2 fixers were shown to actively fix N2 in other OMZs, N2 fixation rates were below the detection limit in our samples covering the water column between the deep chlorophyll maximum and the OMZ. Consistent with this, no N2 fixation signal was visible in δ15N signatures. We suggest that the absence of N2 fixation may be a consequence of a micronutrient limitation or of an O2 sensitivity of the OMZ diazotrophs in the BoB. To explore how the onset of N2 fixation by cyanobacteria compared to non-phototrophic N2 fixers would impact on OMZ O2 concentrations, a simple model exercise was carried out. We observed that both, photic zone-based and OMZ-based N2 fixation are very sensitive to even minimal changes in water column stratification, with stronger mixing increasing organic matter production and export, which would exhaust remaining O2 traces in the BoB.",TRUE,noun
R272,"Operations Research, Systems Engineering and Industrial Engineering",R139484,Production scheduling in a knitted fabric dyeing and finishing process,S556506,R139486,Positioning in the logistics chain,L391289,Production,"Abstract Developing detailed production schedules for dyeing and finishing operations is a very difficult task that has received relatively little attention in the literature. In this paper, a scheduling procedure is presented for a knitted fabric dyeing and finishing plant that is essentially a flexible job shop with sequence-dependent setups. An existing job shop scheduling algorithm is modified to take into account the complexities of the case plant. The resulting approach based on family scheduling is tested on problems generated with case plant characteristics.",TRUE,noun
R272,"Operations Research, Systems Engineering and Industrial Engineering",R139487,Scheduling with multi-attribute set-up times on unrelated parallel machines,S556513,R139489,Positioning in the logistics chain,L391296,Production,"This paper studies a problem in the knitting process of the textile industry. In such a production system, each job has a number of attributes and each attribute has one or more levels. Because there is at least one different attribute level between two adjacent jobs, it is necessary to make a set-up adjustment whenever there is a switch to a different job. The problem can be formulated as a scheduling problem with multi-attribute set-up times on unrelated parallel machines. The objective of the problem is to assign jobs to different machines to minimise the makespan. A constructive heuristic is developed to obtain a qualified solution. To improve the solution further, a meta-heuristic that uses a genetic algorithm with a new crossover operator and three local searches are proposed. The computational experiments show that the proposed constructive heuristic outperforms two existed heuristics and the current scheduling method used by the case textile plant.",TRUE,noun
R272,"Operations Research, Systems Engineering and Industrial Engineering",R139522,Multisystem Optimization for an Integrated Production Scheduling with Resource Saving Problem in Textile Printing and Dyeing,S556537,R139525,Positioning in the logistics chain,L391316,Production,"Resource saving has become an integral aspect of manufacturing in industry 4.0. This paper proposes a multisystem optimization (MSO) algorithm, inspired by implicit parallelism of heuristic methods, to solve an integrated production scheduling with resource saving problem in textile printing and dyeing. First, a real-world integrated production scheduling with resource saving is formulated as a multisystem optimization problem. Then, the MSO algorithm is proposed to solve multisystem optimization problems that consist of several coupled subsystems, and each of the subsystems may contain multiple objectives and multiple constraints. The proposed MSO algorithm is composed of within-subsystem evolution and cross-subsystem migration operators, and the former is to optimize each subsystem by excellent evolution operators and the later is to complete information sharing between multiple subsystems, to accelerate the global optimization of the whole system. Performance is tested on a set of multisystem benchmark functions and compared with improved NSGA-II and multiobjective multifactorial evolutionary algorithm (MO-MFEA). Simulation results show that the MSO algorithm is better than compared algorithms for the benchmark functions studied in this paper. Finally, the MSO algorithm is successfully applied to the proposed integrated production scheduling with resource saving problem, and the results show that MSO is a promising algorithm for the studied problem.",TRUE,noun
R129,Organic Chemistry,R138577,Solvent-Free Chelation-Assisted Catalytic C-C Bond Cleavage of Unstrained Ketone by Rhodium(I) Complexes under Microwave Irradiation,S550511,R138579,Additive,L387412,amine,A highly efficient C-C bond cleavage of unstrained aliphatic ketones bearing β-hydrogens with olefins was achieved using a chelation-assisted catalytic system consisting of (Ph 3 P) 3 RhCl and 2-amino-3-picoline by microwave irradiation under solvent-free conditions. The addition of cyclohexylamine catalyst accelerated the reaction rate dramatically under microwave irradiation compared with the classical heating method.,TRUE,noun
R129,Organic Chemistry,R137073,"Selective, Nickel-Catalyzed Hydrogenolysis of Aryl Ethers",S541564,R137075,Product,L381377,Arenes,"A catalyst that cleaves aryl-oxygen bonds but not carbon-carbon bonds may help improve lignin processing. Selective hydrogenolysis of the aromatic carbon-oxygen (C-O) bonds in aryl ethers is an unsolved synthetic problem important for the generation of fuels and chemical feedstocks from biomass and for the liquefaction of coal. Currently, the hydrogenolysis of aromatic C-O bonds requires heterogeneous catalysts that operate at high temperature and pressure and lead to a mixture of products from competing hydrogenolysis of aliphatic C-O bonds and hydrogenation of the arene. Here, we report hydrogenolyses of aromatic C-O bonds in alkyl aryl and diaryl ethers that form exclusively arenes and alcohols. This process is catalyzed by a soluble nickel carbene complex under just 1 bar of hydrogen at temperatures of 80 to 120°C; the relative reactivity of ether substrates scale as Ar-OAr>>Ar-OMe>ArCH2-OMe (Ar, Aryl; Me, Methyl). Hydrogenolysis of lignin model compounds highlights the potential of this approach for the conversion of refractory aryl ether biopolymers to hydrocarbons.",TRUE,noun
R129,Organic Chemistry,R154455,Hydrodeoxygenation of Guaiacol over Carbon-Supported Metal Catalysts,S618372,R154457,Product,R154454,benzene,"Catalytic bio‐oil upgrading to produce renewable fuels has attracted increasing attention in response to the decreasing oil reserves and the increased fuel demand worldwide. Herein, the catalytic hydrodeoxygenation (HDO) of guaiacol with carbon‐supported non‐sulfided metal catalysts was investigated. Catalytic tests were performed at 4.0 MPa and temperatures ranging from 623 to 673 K. Both Ru/C and Mo/C catalysts showed promising catalytic performance in HDO. The selectivity to benzene was 69.5 and 83.5 % at 653 K over Ru/C and 10Mo/C catalysts, respectively. Phenol, with a selectivity as high as 76.5 %, was observed mainly on 1Mo/C. However, the reaction pathway over both catalysts is different. Over the Ru/C catalyst, the OCH3 bond was cleaved to form the primary intermediate catechol, whereas only traces of catechol were detected over Mo/C catalysts. In addition, two types of active sites were detected over Mo samples after reduction in H2 at 973 K. Catalytic studies showed that the demethoxylation of guaiacol is performed over residual MoOx sites with high selectivity to phenol whereas the consecutive HDO of phenol is performed over molybdenum carbide species, which is widely available only on the 10Mo/C sample. Different deactivation patterns were also observed over Ru/C and Mo/C catalysts.",TRUE,noun
R129,Organic Chemistry,R137065,Biaryl Construction via Ni-Catalyzed C−O Activation of Phenolic Carboxylates,S541518,R137067,Product,L381341,Biaryl,"Biaryl scaffolds were constructed via Ni-catalyzed aryl C-O activation by avoiding cleavage of the more reactive acyl C-O bond of aryl carboxylates. Now aryl esters, in general, can be successfully employed in cross-coupling reactions for the first time. The substrate scope and synthetic utility of the chemistry were demonstrated by the syntheses of more than 40 biaryls and by constructing complex organic molecules. Water was observed to play an important role in facilitating this transformation.",TRUE,noun
R129,Organic Chemistry,R137068,Visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acridinium photocatalysts at room temperature,S541551,R137071,Type of transformation,R137072,Cleavage,"Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.",TRUE,noun
R129,Organic Chemistry,R137076,Selective C–O Bond Cleavage of Lignin Systems and Polymers Enabled by Sequential Palladium-Catalyzed Aerobic Oxidation and Visible-Light Photoredox Catalysis,S541616,R137085,Type of transformation,R137072,Cleavage,"Lignin, which is a highly cross-linked and irregular biopolymer, is nature’s most abundant source of aromatic compounds and constitutes an attractive renewable resource for the production of aromatic commodity chemicals. Herein, we demonstrate a practical and operationally simple two-step degradation approach involving Pd-catalyzed aerobic oxidation and visible-light photoredox-catalyzed reductive fragmentation for the chemoselective cleavage of the β-O-4 linkage—the predominant linkage in lignin—for the generation of lower-molecular-weight aromatic building blocks. The developed strategy affords the β-O-4 bond cleaved products with high chemoselectivity and in high yields, is amenable to continuous flow processing, operates at ambient temperature and pressure, and is moisture- and oxygen-tolerant.",TRUE,noun
R129,Organic Chemistry,R137059,Nickel-Catalyzed Cross-Coupling of Aryl Methyl Ethers with Aryl Boronic Esters,S541473,R137061,Type of transformation,R137058,Coupling,The Ni(0)-catalyzed cross-coupling of alkenyl methyl ethers with boronic esters is described. Several types of alkenyl methyl ethers can be coupled with a wide range of boronic esters to give the stilbene derivatives.,TRUE,noun
R129,Organic Chemistry,R137062,Cross-Coupling Reactions of Aryl Pivalates with Boronic Acids,S541490,R137064,Type of transformation,R137058,Coupling,"The first cross-coupling of acylated phenol derivatives has been achieved. In the presence of an air-stable Ni(II) complex, readily accessible aryl pivalates participate in the Suzuki-Miyaura coupling with arylboronic acids. The process is tolerant of considerable variation in each of the cross-coupling components. In addition, a one-pot acylation/cross-coupling sequence has been developed. The potential to utilize an aryl pivalate as a directing group has also been demonstrated, along with the ability to sequentially cross-couple an aryl bromide followed by an aryl pivalate, using palladium and nickel catalysis, respectively.",TRUE,noun
R129,Organic Chemistry,R137065,Biaryl Construction via Ni-Catalyzed C−O Activation of Phenolic Carboxylates,S541523,R137067,Type of transformation,R137058,Coupling,"Biaryl scaffolds were constructed via Ni-catalyzed aryl C-O activation by avoiding cleavage of the more reactive acyl C-O bond of aryl carboxylates. Now aryl esters, in general, can be successfully employed in cross-coupling reactions for the first time. The substrate scope and synthetic utility of the chemistry were demonstrated by the syntheses of more than 40 biaryls and by constructing complex organic molecules. Water was observed to play an important role in facilitating this transformation.",TRUE,noun
R129,Organic Chemistry,R110941,Microwave-Assisted Cobinamide Synthesis,S505348,R110943,Solvent,L364897,Ethanol,"We present a new method for the preparation of cobinamide (CN)2Cbi, a vitamin B12 precursor, that should allow its broader utility. Treatment of vitamin B12 with only NaCN and heating in a microwave reactor affords (CN)2Cbi as the sole product. The purification procedure was greatly simplified, allowing for easy isolation of the product in 94% yield. The use of microwave heating proved beneficial also for (CN)2Cbi(c-lactone) synthesis. Treatment of (CN)2Cbi with triethanolamine led to (CN)2Cbi(c-lactam).",TRUE,noun
R129,Organic Chemistry,R154399,Selective catalytic conversion of guaiacol to phenols over a molybdenum carbide catalyst,S618205,R154401,substrate,R154406,guaiacol,An activated carbon supported α-molybdenum carbide catalyst (α-MoC1−x/AC) showed remarkable activity in the selective deoxygenation of guaiacol to substituted mono-phenols in low carbon number alcohol solvents.,TRUE,noun
R129,Organic Chemistry,R154440,Anatase TiO2 Activated by Gold Nanoparticles for Selective Hydrodeoxygenation of Guaiacol to Phenolics,S618321,R154443,substrate,R154406,guaiacol,"Gold nanoparticles on a number of supporting materials, including anatase TiO2 (TiO2-A, in 40 nm and 45 μm), rutile TiO2 (TiO2-R), ZrO2, Al2O3, SiO2 , and activated carbon, were evaluated for hydrodeoxygenation of guaiacol in 6.5 MPa initial H2 pressure at 300 °C. The presence of gold nanoparticles on the supports did not show distinguishable performance compared to that of the supports alone in the conversion level and in the product distribution, except for that on a TiO2-A-40 nm. The lack of marked catalytic activity on supports other than TiO2-A-40 nm suggests that Au nanoparticles are not catalytically active on these supports. Most strikingly, the gold nanoparticles on the least-active TiO2-A-40 nm support stood out as the best catalyst exhibiting high activity with excellent stability and remarkable selectivity to phenolics from guaiacol hydrodeoxygenation. The conversion of guaiacol (∼43.1%) over gold on the TiO2-A-40 nm was about 33 times that (1.3%) over the TiO2-A-40 nm alone. The selectivity o...",TRUE,noun
R129,Organic Chemistry,R154455,Hydrodeoxygenation of Guaiacol over Carbon-Supported Metal Catalysts,S618368,R154457,substrate,R154406,guaiacol,"Catalytic bio‐oil upgrading to produce renewable fuels has attracted increasing attention in response to the decreasing oil reserves and the increased fuel demand worldwide. Herein, the catalytic hydrodeoxygenation (HDO) of guaiacol with carbon‐supported non‐sulfided metal catalysts was investigated. Catalytic tests were performed at 4.0 MPa and temperatures ranging from 623 to 673 K. Both Ru/C and Mo/C catalysts showed promising catalytic performance in HDO. The selectivity to benzene was 69.5 and 83.5 % at 653 K over Ru/C and 10Mo/C catalysts, respectively. Phenol, with a selectivity as high as 76.5 %, was observed mainly on 1Mo/C. However, the reaction pathway over both catalysts is different. Over the Ru/C catalyst, the OCH3 bond was cleaved to form the primary intermediate catechol, whereas only traces of catechol were detected over Mo/C catalysts. In addition, two types of active sites were detected over Mo samples after reduction in H2 at 973 K. Catalytic studies showed that the demethoxylation of guaiacol is performed over residual MoOx sites with high selectivity to phenol whereas the consecutive HDO of phenol is performed over molybdenum carbide species, which is widely available only on the 10Mo/C sample. Different deactivation patterns were also observed over Ru/C and Mo/C catalysts.",TRUE,noun
R129,Organic Chemistry,R154468,"Atmospheric Hydrodeoxygenation of Guaiacol over Alumina-, Zirconia-, and Silica-Supported Nickel Phosphide Catalysts",S618413,R154470,substrate,R154406,guaiacol,"This study investigated atmospheric hydrodeoxygenation (HDO) of guaiacol over Ni2P-supported catalysts. Alumina, zirconia, and silica served as the supports of Ni2P catalysts. The physicochemical properties of these catalysts were surveyed by N2 physisorption, X-ray diffraction (XRD), CO chemisorption, H2 temperature-programmed reduction (H2-TPR), H2 temperature-programmed desorption (H2-TPD), and NH3 temperature-programmed desorption (NH3-TPD). The catalytic performance of these catalysts was tested in a continuous fixed-bed system. This paper proposes a plausible network of atmospheric guaiacol HDO, containing demethoxylation (DMO), demethylation (DME), direct deoxygenation (DDO), hydrogenation (HYD), transalkylation, and methylation. Pseudo-first-order kinetics analysis shows that the intrinsic activity declined in the following order: Ni2P/ZrO2 > Ni2P/Al2O3 > Ni2P/SiO2. Product selectivity at zero guaiacol conversion indicates that Ni2P/SiO2 promotes DMO and DDO routes, whereas Ni2P/ZrO2 and Ni2P/Al2O...",TRUE,noun
R129,Organic Chemistry,R111101,"Preparation of Dicyano- and Methylcobinamide from Vitamin B12a",S505865,R111103,Precursor of cobinamide,R111109,Hydroxycobalamin,"Treatment of vitamin B 12a 1 (hydroxycobalamin hydrochloride, aquocobalamin) with NaBH 4 and ZnCl 2 leads to the selective cleavage of the nucleotide loop and gives dicyanocobinamide 2a in good yield. Methylcobinamide 4 was prepared from 2 via aquocyanocobinamide 3. The glutathione-mediated methylation of 3 in a pH 3.5 buffer solution proceeded with Mel, but not with MeOTs.",TRUE,noun
R129,Organic Chemistry,R137076,Selective C–O Bond Cleavage of Lignin Systems and Polymers Enabled by Sequential Palladium-Catalyzed Aerobic Oxidation and Visible-Light Photoredox Catalysis,S541598,R137084,substrate,L381398,Lignin,"Lignin, which is a highly cross-linked and irregular biopolymer, is nature’s most abundant source of aromatic compounds and constitutes an attractive renewable resource for the production of aromatic commodity chemicals. Herein, we demonstrate a practical and operationally simple two-step degradation approach involving Pd-catalyzed aerobic oxidation and visible-light photoredox-catalyzed reductive fragmentation for the chemoselective cleavage of the β-O-4 linkage—the predominant linkage in lignin—for the generation of lower-molecular-weight aromatic building blocks. The developed strategy affords the β-O-4 bond cleaved products with high chemoselectivity and in high yields, is amenable to continuous flow processing, operates at ambient temperature and pressure, and is moisture- and oxygen-tolerant.",TRUE,noun
R129,Organic Chemistry,R111072,One-step synthesis of α/β cyano-aqua cobinamides from vitamin B12 with Zn(II) or Cu(II) salts in methanol,S505779,R111074,Solvent,L365122,Methanol,This short communication describes the screening of various metal salts for the preparation of cyano-aqua cobinamides from vitamin B12 in methanol. ZnCl 2 and Cu(NO 3 ) 2 ·3H 2 O have been identified as most active for this purpose and represent useful alternatives to the widely applied Ce(III) method that requires excess cyanide.,TRUE,noun
R129,Organic Chemistry,R110941,Microwave-Assisted Cobinamide Synthesis,S505409,R110943,Major reactant,L364927,NaCN,"We present a new method for the preparation of cobinamide (CN)2Cbi, a vitamin B12 precursor, that should allow its broader utility. Treatment of vitamin B12 with only NaCN and heating in a microwave reactor affords (CN)2Cbi as the sole product. The purification procedure was greatly simplified, allowing for easy isolation of the product in 94% yield. The use of microwave heating proved beneficial also for (CN)2Cbi(c-lactone) synthesis. Treatment of (CN)2Cbi with triethanolamine led to (CN)2Cbi(c-lactam).",TRUE,noun
R129,Organic Chemistry,R137068,Visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acridinium photocatalysts at room temperature,S541545,R137071,Product,L381362,Phenol,"Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.",TRUE,noun
R129,Organic Chemistry,R138423,Oxidative Depolymerization of Lignin in Ionic Liquids,S549349,R138425,Product,L386490,Phenol,"Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO(3))(2) in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF(3)SO(3)] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 degrees C, 84x10(5) Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.",TRUE,noun
R129,Organic Chemistry,R154426,Catalysis Meets Nonthermal Separation for the Production of (Alkyl)phenols and Hydrocarbons from Pyrolysis Oil,S618274,R154429,Product,R154424,phenol,"A simple and efficient hydrodeoxygenation strategy is described to selectively generate and separate high-value alkylphenols from pyrolysis bio-oil, produced directly from lignocellulosic biomass. The overall process is efficient and only requires low pressures of hydrogen gas (5 bar). Initially, an investigation using model compounds indicates that MoCx /C is a promising catalyst for targeted hydrodeoxygenation, enabling selective retention of the desired Ar-OH substituents. By applying this procedure to pyrolysis bio-oil, the primary products (phenol/4-alkylphenols and hydrocarbons) are easily separable from each other by short-path column chromatography, serving as potential valuable feedstocks for industry. The strategy requires no prior fractionation of the lignocellulosic biomass, no further synthetic steps, and no input of additional (e.g., petrochemical) platform molecules.",TRUE,noun
R129,Organic Chemistry,R154440,Anatase TiO2 Activated by Gold Nanoparticles for Selective Hydrodeoxygenation of Guaiacol to Phenolics,S618327,R154443,Product,R154424,phenol,"Gold nanoparticles on a number of supporting materials, including anatase TiO2 (TiO2-A, in 40 nm and 45 μm), rutile TiO2 (TiO2-R), ZrO2, Al2O3, SiO2 , and activated carbon, were evaluated for hydrodeoxygenation of guaiacol in 6.5 MPa initial H2 pressure at 300 °C. The presence of gold nanoparticles on the supports did not show distinguishable performance compared to that of the supports alone in the conversion level and in the product distribution, except for that on a TiO2-A-40 nm. The lack of marked catalytic activity on supports other than TiO2-A-40 nm suggests that Au nanoparticles are not catalytically active on these supports. Most strikingly, the gold nanoparticles on the least-active TiO2-A-40 nm support stood out as the best catalyst exhibiting high activity with excellent stability and remarkable selectivity to phenolics from guaiacol hydrodeoxygenation. The conversion of guaiacol (∼43.1%) over gold on the TiO2-A-40 nm was about 33 times that (1.3%) over the TiO2-A-40 nm alone. The selectivity o...",TRUE,noun
R129,Organic Chemistry,R154455,Hydrodeoxygenation of Guaiacol over Carbon-Supported Metal Catalysts,S618366,R154457,catalyst,R154459,Ru/C,"Catalytic bio‐oil upgrading to produce renewable fuels has attracted increasing attention in response to the decreasing oil reserves and the increased fuel demand worldwide. Herein, the catalytic hydrodeoxygenation (HDO) of guaiacol with carbon‐supported non‐sulfided metal catalysts was investigated. Catalytic tests were performed at 4.0 MPa and temperatures ranging from 623 to 673 K. Both Ru/C and Mo/C catalysts showed promising catalytic performance in HDO. The selectivity to benzene was 69.5 and 83.5 % at 653 K over Ru/C and 10Mo/C catalysts, respectively. Phenol, with a selectivity as high as 76.5 %, was observed mainly on 1Mo/C. However, the reaction pathway over both catalysts is different. Over the Ru/C catalyst, the OCH3 bond was cleaved to form the primary intermediate catechol, whereas only traces of catechol were detected over Mo/C catalysts. In addition, two types of active sites were detected over Mo samples after reduction in H2 at 973 K. Catalytic studies showed that the demethoxylation of guaiacol is performed over residual MoOx sites with high selectivity to phenol whereas the consecutive HDO of phenol is performed over molybdenum carbide species, which is widely available only on the 10Mo/C sample. Different deactivation patterns were also observed over Ru/C and Mo/C catalysts.",TRUE,noun
R56,Pathogenic Microbiology,R110394,Resistance Evolution Against Antimicrobial Peptides in Staphylococcus aureus Alters Pharmacodynamics Beyond the MIC,S503944,R110396,antimicobials used in study,L364068,Pexiganan,"Antimicrobial peptides (AMPs) have been proposed as a promising class of new antimicrobials partly because they are less susceptible to bacterial resistance evolution. This is possibly caused by their mode of action but also by their pharmacodynamic characteristics, which differ significantly from conventional antibiotics. Although pharmacodynamics of antibiotic resistant strains have been studied, such data are lacking for AMP resistant strains. Here, we investigated if the pharmacodynamics of the Gram-positive human pathogen Staphylococcous aureus evolve under antimicrobial peptide selection. Interestingly, the Hill coefficient (kappa κ) evolves together with the minimum inhibition concentration (MIC). Except for one genotype, strains harboring mutations in menF and atl, all mutants had higher kappa than the non-selected sensitive controls. Higher κ results in steeper pharmacodynamic curve and, importantly, in a narrower mutant selection window. S. aureus selected for resistance to melittin displayed cross resistant against pexiganan and had as steep pharmacodynamic curves (high κ) as pexiganan-selected lines. By contrast, the pexiganan-sensitive tenecin-selected lines displayed lower κ. Taken together, our data demonstrate that pharmacodynamic parameters are not fixed traits of particular drug/strain interactions but actually evolve under drug treatment. The contribution of factors such as κ and the maximum and minimum growth rates on the dynamics and probability of resistance evolution are open questions that require urgent attention.",TRUE,noun
R56,Pathogenic Microbiology,R110403,Evolution of Staphylococcus aureus under Vancomycin Selective Pressure: the Role of the Small-Colony Variant Phenotype,S504205,R110405,antimicobials used in study,L364230,Vancomycin,"ABSTRACT Staphylococcus aureus small-colony variants (SCVs) often persist despite antibiotic therapy. Against a 108-CFU/ml methicillin-resistant S. aureus (MRSA) (strain COL) population of which 0%, 1%, 10%, 50%, or 100% was an isogenic hemB knockout (Ia48) subpopulation displaying the SCV phenotype, vancomycin achieved maximal reductions of 4.99, 5.39, 4.50, 3.28, and 1.66 log10 CFU/ml over 48 h. Vancomycin at ≥16 mg/liter shifted a population from 50% SCV cells at 0 h to 100% SCV cells at 48 h, which was well characterized by a Hill-type model (R2 > 0.90).",TRUE,noun
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499908,R109550,Data,R109567,p-value,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,noun
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499894,R109550,Material,R109553,rats,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,noun
R130,Physical Chemistry,R135704,Counterion-Mediated Crossing of the Cyanine Limit in Crystals and Fluid Solution: Bond Length Alternation and Spectral Broadening Unveiled by Quantum Chemistry,S536856,R135708,Counterion,L378416,Br-,"Absorption spectra of Cyanine+Br- salts show a remarkable solvent dependence in non-/polar solvents, exhibiting a narrow, sharp band shapes in dichloromethane but broad features in toluene; this change was attributed to ion pair association, breaking the symmetry of the cyanine, similar to the situation in the crystals (P.-A. Bouit et al, J. Am. Chem. Soc. 2010, 132, 4328). Our density functional theory (DFT) based quantum mechanics/molecular mechanics (QM/MM) calculations of the crystals evidence the crucial role of specific asymmetric anion positioning on the symmetry breaking. Molecular dynamics (MD) simulations prove the ion pair association in non-polar solvents. Time-dependent DFT vibronic calculations in toluene show that ion pairing, controlled by steric demands, induces symmetry breaking in the electronic ground state. This largely broadens the spectrum in very reasonable agreement with experiment, while the principal pattern of vibrational modes is retained. The current findings allow to establish a unified picture on symmetry breaking of polymethine dyes in fluid solution.",TRUE,noun
R361,Place and Environment,R110285,Using Persona Descriptions to Inform Library Space Design,S502658,R110287,Has method,R110288,Personas,Practical implications Personas are a practical and meaningful tool for thinking about library space and service design in the development stage. Several examples of library spaces that focus on the needs of specific personas are provided.,TRUE,noun
R138056,Planetary Sciences,R155200,Martian minerals components at Gale crater detected by MRO CRISM hyperspectral images,S620863,R155202,Supplementary Information,R155199,Smectite,"Gale Crater on Mars has the layered structure of deposit covered by the Noachian/Hesperian boundary. Mineral identification and classification at this region can provide important constraints on environment and geological evolution for Mars. Although the Curiosity rover has provided in-situ mineralogical analysis in Gale, it is restricted to small areas. Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) with enhanced spectral resolution can provide more information in spatial and time scale. In this paper, CRISM near-infrared spectral data are used to identify mineral classes and groups at Martian Gale region. By using diagnostic absorptions features analysis in conjunction with spectral angle mapper (SAM), detailed mineral species are identified at Gale region, e.g., kaolinite, chlorites, smectite, jarosite, and northupite. The clay minerals' diversity in Gale Crater suggests the variation of aqueous alteration. The detection of northupite suggests that the Gale region has experienced the climate change from moist condition with mineral dissolution to dryer climate with water evaporation. The presence of ferric sulfate mineral jarosite formed through the oxidation of iron sulfides in acidic environments shows the experience of acidic sulfur-rich condition in Gale history.",TRUE,noun
R138056,Planetary Sciences,R138520,Silica polymorphs in lunar granite: Implications for granite petrogenesis on the Moon,S550155,R138522,Relevance,L387161,Cristobalite,"Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.",TRUE,noun
R138056,Planetary Sciences,R138520,Silica polymorphs in lunar granite: Implications for granite petrogenesis on the Moon,S550179,R138522,Rock type,L387184,Granite,"Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.",TRUE,noun
R138056,Planetary Sciences,R138520,Silica polymorphs in lunar granite: Implications for granite petrogenesis on the Moon,S550152,R138522,Relevance,L387158,Quartz,"Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.",TRUE,noun
R138056,Planetary Sciences,R138520,Silica polymorphs in lunar granite: Implications for granite petrogenesis on the Moon,S550150,R138522,Relevance,L387156,Tridymite,"Abstract Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.",TRUE,noun
R185,Plasma and Beam Physics,R139135,2D spatially resolved O atom density profiles in an atmospheric pressure plasma jet: from the active plasma volume to the effluent,S554531,R139189,Unit_gas_flow_rate,L390278,slm,"Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. The results are interpreted based on measurements of the electron dynamics by phase resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 µs along the gas flow and reaches a plateau of 8 × 1015 cm−3. In the effluent, the density decays exponentially with a decay time of 180 µs (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both, the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow.",TRUE,noun
R185,Plasma and Beam Physics,R145177,Stark Broadening of Hydrogen Lines in a Plasma,S581199,R145226,Comparison to,L406153,Experiment,"The frequency distributions of hydrogen lines broadened by the local fields of both ions and electrons in a plasma are calculated in the classical path approximation. The electron collisions are treated by an impact theory which takes into account the Stark splitting caused by the quasistatic ion fields. The ion field-strength distribution function used includes the effect of electron shielding and ion-ion correlations. The various approximations that were employed are examined for self-consistency and an accuracy of about 10% in the resulting line profiles is expected. Good agreement with experimental Hβ profiles is obtained while there are deviations of factors of two with the usual Holtsmark theory. Asymptotic distributions for the line wings are given for astrophysical applications. Also here the electron effects are generally as important as the ion effects for all values of the electron density and in some cases the electron broadening is larger than the ion broadening. (auth)",TRUE,noun
R185,Plasma and Beam Physics,R145180,Stark Broadening of Neutral Helium Lines in a Plasma,S581208,R145227,Comparison to,L406160,Experiment,"The frequency distributions of spectral lines of nonhydrogenic atoms broadened by local fields of both electrons and ions in a plasma are calculated in the classical path approximation. The electron collisions are treated by an impact theory which takes into account deviations from adiabaticity. For the ion effects, the adiabatic approximation can be used to describe the time-dependent wave functions. The various approximations employed were examined for self-consistency, and an accuracy of about 20% in the resulting line profiles is expected. Good agreement with Wulff's experimental helium line profiles was obtained while there are large deviations from the adiabatic theory, especially for the line shifts. Asymptotic distributions for the line wings are given for astrophysical applications. Here the ion effects can be as important as the electron effects and lead to large asymmetries, but near the line core electrons usually dominate. Numerical results are tabulated for 24 neutral helium lines with principal quantum numbers up to five.",TRUE,noun
R131,Polymer Chemistry,R161568,Current Technologies in Depolymerization Process and the Road Ahead,S645139,R161572,Method,R161573,depolymerization,"Although plastic is considered an indispensable commodity, plastic pollution is a major concern around the world due to its rapid accumulation rate, complexity, and lack of management. Some political policies, such as the Chinese import ban on plastic waste, force us to think about a long-term solution to eliminate plastic wastes. Converting waste plastics into liquid and gaseous fuels is considered a promising technique to eliminate the harm to the environment and decrease the dependence on fossil fuels, and recycling waste plastic by converting it into monomers is another effective solution to the plastic pollution problem. This paper presents the critical situation of plastic pollution, various methods of plastic depolymerization based on different kinds of polymers defined in the Society of the Plastics Industry (SPI) Resin Identification Coding System, and the opportunities and challenges in the future.",TRUE,noun
R131,Polymer Chemistry,R161598,Hydrolysis and Solvolysis as Benign Routes for the End-of-Life Management of Thermoset Polymer Waste,S645206,R161600,Method,R161601,solvolysis,"The production of thermoset polymers is increasing globally owing to their advantageous properties, particularly when applied as composite materials. Though these materials are traditionally used in more durable, longer-lasting applications, ultimately, they become waste at the end of their usable lifetimes. Current recycling practices are not applicable to traditional thermoset waste, owing to their network structures and lack of processability. Recently, researchers have been developing thermoset polymers with the right functionalities to be chemically degraded under relatively benign conditions postuse, providing a route to future management of thermoset waste. This review presents thermosets containing hydrolytically or solvolytically cleavable bonds, such as esters and acetals. Hydrolysis and solvolysis mechanisms are discussed, and various factors that influence the degradation rates are examined. Degradable thermosets with impressive mechanical, thermal, and adhesion behavior are discussed, illustrating that the design of material end of life need not limit material performance. Expected final online publication date for the Annual Review of Chemical and Biomolecular Engineering, Volume 11 is June 8, 2020. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.",TRUE,noun
R131,Polymer Chemistry,R161372,Biocatalytic Degradation Efficiency of Postconsumer Polyethylene Terephthalate Packaging Determined by Their Polymer Microstructures,S644429,R161375,Performed at temperature,R161345,temperature,"Polyethylene terephthalate (PET) is the most important mass‐produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco‐friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 °C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 °C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain‐length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo‐ and endo‐type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo‐type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.",TRUE,noun
R131,Polymer Chemistry,R161372,Biocatalytic Degradation Efficiency of Postconsumer Polyethylene Terephthalate Packaging Determined by Their Polymer Microstructures,S644438,R161375,Enzyme,R161382,fusca,"Polyethylene terephthalate (PET) is the most important mass‐produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco‐friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 °C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 °C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain‐length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo‐ and endo‐type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo‐type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.",TRUE,noun
R11,Science,R28609,Undifferentiated embryonal sarcoma of the liver mimicking acute appendicitis. Case report and review of the literature,S94479,R28610,Sex,R28100,Male,"Abstract Background Undifferentiated embryonal sarcoma (UES) of liver is a rare malignant neoplasm, which affects mostly the pediatric population, accounting for 13% of pediatric hepatic malignancies; a few cases have been reported in adults. Case presentation We report a case of undifferentiated embryonal sarcoma of the liver in a 20-year-old Caucasian male. The patient was referred to us for further investigation after a laparotomy in a district hospital for spontaneous abdominal hemorrhage, which was due to a liver mass. After a thorough evaluation with computed tomography scan and magnetic resonance imaging of the liver and taking into consideration the previous history of the patient, it was decided to surgically explore the patient. Resection of I–IV and VIII hepatic lobe. Patient developed disseminated intravascular coagulation one day after the surgery and died the next day. Conclusion It is a rare, highly malignant hepatic neoplasm, affecting almost exclusively the pediatric population. The prognosis is poor but recent evidence has shown that long-term survival is possible after complete surgical resection with or without postoperative chemotherapy.",TRUE,noun
R11,Science,R31787,DNA methylation and embryogenic competence in leaves and callus of napiergrass (Pennisetum purpureum Schum,S107221,R31788,Comparison,R28515,none,Quantitative and qualitative levels of DNA methylation were evaluated in leaves and callus of Pennisetum purpureum Schum. The level of methylation did not change during leaf differentiation or aging and similar levels of methylation were found in embryogenic and nonembryogenic callus.,TRUE,noun
R11,Science,R28522,RIGHT HEPATOLOBECTOMY FOR PRIMARY MESENCHYMOMA OF THE LIVER*,S93528,R28523,Laboratory findings,L57387,Normal,"SUMMARY A case report of a large primary malignant mesenchymoma of the liver is presented. This tumor was successfully removed with normal liver tissue surrounding the tumor by right hepatolobectomy. The pathologic characteristics and clinical behavior of tumors falling into this general category are",TRUE,noun
R11,Science,R28549,Hepatic sarcomas in adults: a review of 25 cases.,S93729,R28550,Laboratory findings,L57526,Normal,"Twenty-five patients with an apparently primary sarcoma of the liver are reviewed. Presenting complaints were non-specific, but hepatomegaly and abnormal liver function tests were usual. Use of the contraceptive pill (four of 11 women) was identified as a possible risk factor; one patient had previously been exposed to vinyl chloride monomer. Detailed investigation showed that the primary tumour was extrahepatic in nine of the 25 patients. Distinguishing features of the 15 patients with confirmed primary hepatic sarcoma included a lower incidence of multiple hepatic lesions and a shorter time from first symptoms to diagnosis, but the most valuable discriminator was histology. Angiosarcomas and undifferentiated tumours were all of hepatic origin, epithelioid haemangioendotheliomas (EHAE) occurred as primary and secondary lesions and all other differentiated tumours arose outside the liver. The retroperitoneum was the most common site of an occult primary tumour and its careful examination therefore crucial: computed tomography scanning was found least fallible in this respect in the present series. Where resection (or transplantation), the best treatment, was not possible, results of therapy were disappointing, prognosis being considerably worse for patients with primary hepatic tumours. Patients with EHAE had a better overall prognosis regardless of primary site.",TRUE,noun
R11,Science,R28558,Undifferentiated Sarcoma of the Liver in a 21-year-old Woman: Case Report,S93849,R28559,Laboratory findings,L57607,Normal,"A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a firm, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor, found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation.",TRUE,noun
R11,Science,R28560,Undifferentiated (Embryonal) Sarcoma of the Liver,S93874,R28561,Laboratory findings,L57627,Normal,"A 10‐year‐old girl with undifferentiated (embryonal) sarcoma of the liver reported here had abdominal pain, nausea, vomiting and weakness when she was 8 years old. Chemical analyses of the blood and urine were normal. Serum alpha‐fetoprotein was within normal limits. She died of cachexia 1 year and 8 months after the onset of symptoms. Autopsy showed a huge tumor mass in the liver and a few metastatic nodules in the lungs, which were consistent histologically with undifferentiated sarcoma of the liver. To our knowledge, this is the second case report of hepatic undifferentiated sarcoma of children in Japan, the feature being compatible with the description of Stocker and Ishaka.",TRUE,noun
R11,Science,R27149,Real Exchange Rate Volatility and U.S. Bilateral Trade: A VAR Approach,S87320,R27150,Nominal or real exchange rate used,R27142,Real,"This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press.",TRUE,noun
R11,Science,R27156,The Effect of Real Exchange Rate Uncertainty on Exports: Empirical Evidence,S87354,R27157,Nominal or real exchange rate used,R27142,Real,"Unless very specific assumptions are made, theory alone cannot determine the sign of the relation between real exchange rate uncertainty and exports. On the one hand, convexity of the profit function with respect to prices implies that an increase in price uncertainty raises the expected returns in the export sector. On the other, potential asymmetries in the cost of adjusting factors of production (for example, investment irreversibility) and risk aversion tend to make the uncertainty-exports relation negative. This article examines these issues using a simple risk-aversion model. Export equations allowing for uncertainty are then estimated for six developing countries. Contrary to the ambiguity of the theory, the empirical relation is strongly negative. Estimates indicate that a 5 percent increase in the annual standard deviation of the real exchange rate can reduce exports by 2 to 30 percent in the short run. These effects are substantially magnified in the long run.",TRUE,noun
R11,Science,R27209,Estimating the impact of exchange rate volatility on exports: evidence from Asian countries,S87585,R27210,Nominal or real exchange rate used,R27142,Real,"The paper examines the impact of exchange rate volatility on the exports of five Asian countries. The countries are Turkey, South Korea, Malaysia, Indonesia and Pakistan. The impact of a volatility term on exports is examined by using an Engle-Granger residual-based cointegrating technique. The results indicate that the exchange rate volatility reduced real exports for these countries. This might mean that producers in these countries are risk-averse. The producers will prefer to sell in domestic markets rather than foreign markets if the exchange rate volatility increases.",TRUE,noun
R11,Science,R27225,Exchange Rate Uncertainty in Turkey and its Impact on Export Volume,S87647,R27226,Nominal or real exchange rate used,R27142,Real,"This paper investigates the impact of real exchange rate volatility on Turkey’s exports to its most important trading partners using quarterly data for the period 1982 to 2001. Cointegration and error correction modeling approaches are applied, and estimates of the cointegrating relations are obtained using Johansen’s multivariate procedure. Estimates of the short-run dynamics are obtained through the error correction technique. Our results indicate that exchange rate volatility has a significant positive effect on export volume in the long run. This result may indicate that firms operating in a small economy, like Turkey, have little option for dealing with increased exchange rate risk.",TRUE,noun
R11,Science,R29725,On the Relationship Between CO2 Emissions and Economic Growth: The Mauritian Experience,S98639,R29726,Type of data,R8311,time,"This paper analyses the relationship between GDP and carbon dioxide emissions for Mauritius and vice-versa in a historical perspective. Using rigorous econometric analysis, our results suggest that the carbon dioxide emission trajectory is closely related to the GDP time path. We show that emissions elasticity on income has been increasing over time. By estimating the EKC for the period 1975-2009, we were unable to prove the existence of a reasonable turning point and thus no EKC “U” shape was obtained. Our results suggest that Mauritius could not curb its carbon dioxide emissions in the last three decades. Thus, as hypothesized, the cost of degradation associated with GDP grows over time and it suggests that the economic and human activities are having increasingly negative environmental impacts on the country as compared to their economic prosperity.",TRUE,noun
R11,Science,R28549,Hepatic sarcomas in adults: a review of 25 cases.,S93778,R28553,Symptoms and signs,R28525,hepatomegaly,"Twenty-five patients with an apparently primary sarcoma of the liver are reviewed. Presenting complaints were non-specific, but hepatomegaly and abnormal liver function tests were usual. Use of the contraceptive pill (four of 11 women) was identified as a possible risk factor; one patient had previously been exposed to vinyl chloride monomer. Detailed investigation showed that the primary tumour was extrahepatic in nine of the 25 patients. Distinguishing features of the 15 patients with confirmed primary hepatic sarcoma included a lower incidence of multiple hepatic lesions and a shorter time from first symptoms to diagnosis, but the most valuable discriminator was histology. Angiosarcomas and undifferentiated tumours were all of hepatic origin, epithelioid haemangioendotheliomas (EHAE) occurred as primary and secondary lesions and all other differentiated tumours arose outside the liver. The retroperitoneum was the most common site of an occult primary tumour and its careful examination therefore crucial: computed tomography scanning was found least fallible in this respect in the present series. Where resection (or transplantation), the best treatment, was not possible, results of therapy were disappointing, prognosis being considerably worse for patients with primary hepatic tumours. Patients with EHAE had a better overall prognosis regardless of primary site.",TRUE,noun
R11,Science,R25997,Automated labeling in document images,S80515,R26016,Logical Labels,L50886,abstract,"The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.",TRUE,noun
R11,Science,R33802,Assessment of algorithms for high throughput detection of genomic copy number variation in oligonucleotide microarray data,S117211,R33803,Vendor,R33801,Affymetrix,"Abstract Background Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays. Results We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regards to overall suitability for high-throughput 100 K SNP array data analysis, as well as effectiveness of normalization, scaling with various reference sets and feature extraction, as well as true and false positive rates of genomic copy number variant (CNV) detection. Conclusion We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity.",TRUE,noun
R11,Science,R33824,Accuracy of CNV Detection from GWAS Data,S117304,R33825,Vendor,R33801,Affymetrix,"Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites—Birdsuite, Partek, HelixTree, and PennCNV-Affy—in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two “gold standards,” the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a “gold standard” for detection of CNVs remains to be established.",TRUE,noun
R11,Science,R26687,TASC: topology adaptive spatial clustering for sensor networks,S85125,R26688,Role,R26675,Aggregation,"The ability to extract topological regularity out of large randomly deployed sensor networks holds the promise to maximally leverage correlation for data aggregation and also to assist with sensor localization and hierarchy creation. This paper focuses on extracting such regular structures from physical topology through the development of a distributed clustering scheme. The topology adaptive spatial clustering (TASC) algorithm presented here is a distributed algorithm that partitions the network into a set of locally isotropic, non-overlapping clusters without prior knowledge of the number of clusters, cluster size and node coordinates. This is achieved by deriving a set of weights that encode distance measurements, connectivity and density information within the locality of each node. The derived weights form the terrain for holding a coordinated leader election in which each node selects the node closer to the center of mass of its neighborhood to become its leader. The clustering algorithm also employs a dynamic density reachability criterion that groups nodes according to their neighborhood's density properties. Our simulation results show that the proposed algorithm can trace locally isotropic structures in non-isotropic network and cluster the network with respect to local density attributes. We also found out that TASC exhibits consistent behavior in the presence of moderate measurement noise levels.",TRUE,noun
R11,Science,R26691,Distributed Clustering-Based Aggregation Algorithm for Spatial Correlated Sensor Networks,S85149,R26692,Role,R26675,Aggregation,"In wireless sensor networks, it is already noted that nearby sensor nodes monitoring an environmental feature typically register similar values. This kind of data redundancy due to the spatial correlation between sensor observations inspires the research of in-network data aggregation. In this paper, an α-local spatial clustering algorithm for sensor networks is proposed. By measuring the spatial correlation between data sampled by different sensors, the algorithm constructs a dominating set as the sensor network backbone used to realize the data aggregation based on the information description/summarization performance of the dominators. In order to evaluate the performance of the algorithm, a pattern recognition scenario over environmental data is presented. The evaluation shows that the resulting network achieved by our algorithm can provide environmental information at higher accuracy compared to other algorithms.",TRUE,noun
R11,Science,R26724,Load-balanced clustering algorithm with distributed self-organization for wireless sensor networks,S85385,R26725,Role,R26675,Aggregation,"Wireless sensor networks (WSNs) are composed of a large number of inexpensive power-constrained wireless sensor nodes, which detect and monitor physical parameters around them through self-organization. Utilizing clustering algorithms to form a hierarchical network topology is a common method of implementing network management and data aggregation in WSNs. Assuming that the residual energy of nodes follows the random distribution, we propose a load-balanced clustering algorithm for WSNs on the basis of their distance and density distribution, making it essentially different from the previous clustering algorithms. Simulated tests indicate that the new algorithm can build a more balanced clustering structure and enhance the network life cycle.",TRUE,noun
R11,Science,R27113,Kinetics of acetylcholinesterase immobilized on polyethylene tubing,S87198,R27114,"Group that reacts (with activated matrix)",R27069,amine,"Acetylcholinesterase was covalently attached to the inner surface of polyethylene tubing. Initial oxidation generated surface carboxylic groups which, on reaction with thionyl chloride, produced acid chloride groups; these were caused to react with excess ethylenediamine. The amine groups on the surface were linked to glutaraldehyde, and acetylcholinesterase was then attached to the surface. Various kinetic tests showed the catalysis of the hydrolysis of acetylthiocholine iodide to be diffusion controlled. The apparent Michaelis constants were strongly dependent on flow rate and were much larger than the value for the free enzyme. Rate measurements over the temperature range 6–42 °C showed changes in activation energies consistent with diffusion control.",TRUE,noun
R11,Science,R26214,Decomposition of a Combined Inventory and Time Constrained Ship Routing Problem,S82708,R26362,Products,R26361,Ammonia,"In contrast to vehicle routing problems, little work has been done in ship routing and scheduling, although large benefits may be expected from improving this scheduling process. We will present a real ship planning problem, which is a combined inventory management problem and a routing problem with time windows. A fleet of ships transports a single product (ammonia) between production and consumption harbors. The quantities loaded and discharged are determined by the production rates of the harbors, possible stock levels, and the actual ship visiting the harbor. We describe the real problem and the underlying mathematical model. To decompose this model, we discuss some model adjustments. Then, the problem can be solved by a Dantzig-Wolfe decomposition approach including both ship routing subproblems and inventory management subproblems. The overall problem is solved by branch-and-bound. Our computational results indicate that the proposed method works for the real planning problem.",TRUE,noun
R11,Science,R158044,Software Cinema-Video-based Requirements Engineering,S690153,R172935,Application ,R172992,application,"The dialogue between end-user and developer presents several challenges in requirements development. One issue is the gap between the conceptual models of end-users and formal specification/analysis models of developers. This paper presents a novel technique for the video analysis of scenarios, relating the use of video-based requirements to process models of software development. It uses a knowledge model-an RDF graph-based on a semiotic interpretation of film language, which allows mapping conceptual into formal models. It can be queried with RDQL, a query language for RDF. The technique has been implemented with a tool which lets the analyst annotate objects as well as spatial or temporal relationships in the video, to represent the conceptual model. The video can be arranged in a scenario graph effectively representing a multi-path video. It can be viewed in linear time order to facilitate the review of individual scenarios by end-users. Each multi-path scene from the conceptual model is mapped to a UML use case in the formal model. A UML sequence diagram can also be generated from the annotations, which shows the direct mapping of film language to UML. This sequence diagram can be edited by the analyst, refining the conceptual model to reflect deeper understanding of the application domain. The use of the software cinema technique is demonstrated with several prototypical applications. One example is a loan application scenario for a financial services consulting firm which acted as an end-user",TRUE,noun
R11,Science,R25732,Fast Algorithms for Mining Association Rules,S78134,R25733,Algorithm name,L48925,AprioriHybrid,"We consider the problem of discovering association rules between items in a large database of sales transactions. We present two new algorithms for solving this problem that are fundamentally different from the known algorithms. Empirical evaluation shows that these algorithms outperform the known algorithms by factors ranging from three for small problems to more than an order of magnitude for large problems. We also show how the best features of the two proposed algorithms can be combined into a hybrid algorithm, called AprioriHybrid. Scale-up experiments show that AprioriHybrid scales linearly with the number of transactions. AprioriHybrid also has excellent scale-up properties with respect to the transaction size and the number of items in the database.",TRUE,noun
R11,Science,R30726,Relationship between sports drinks and dental erosion in 304 university athletes in Columbus,S102574,R30727,Study population,L61587,Athletes,"Acidic soft drinks, including sports drinks, have been implicated in dental erosion with limited supporting data in scarce erosion studies worldwide. The purpose of this study was to determine the prevalence of dental erosion in a sample of athletes at a large Midwestern state university in the USA, and to evaluate whether regular consumption of sports drinks was associated with dental erosion. A cross-sectional, observational study was done using a convenience sample of 304 athletes, selected irrespective of sports drinks usage. The Lussi Index was used in a blinded clinical examination to grade the frequency and severity of erosion of all tooth surfaces excluding third molars and incisal surfaces of anterior teeth. A self-administered questionnaire was used to gather details on sports drink usage, lifestyle, health problems, dietary and oral health habits. Intraoral color slides were taken of all teeth with erosion. Sports drinks usage was found in 91.8% athletes and the total prevalence of erosion was 36.5%. Nonparametric tests and stepwise regression analysis using history variables showed no association between dental erosion and the use of sports drinks, quantity and frequency of consumption, years of usage and nonsport usage of sports drinks. The most significant predictor of erosion was found to be not belonging to the African race (p < 0.0001). The results of this study reveal no relationship between consumption of sports drinks and dental erosion.",TRUE,noun
R11,Science,R30611,Robust Facial Features Localization on Rotation Arbitrary Multi-View face in Complex Background,S102189,R30636,Challenges,R30613,Background,"Focused on facial features localization on multi-view faces arbitrarily rotated in plane, a novel detection algorithm based on improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. After the searching ranges of the facial features are determined, the crossing detection method which uses the brow-eye and nose-mouth features and the improved SVM detectors trained by large scale multi-view facial features examples is adopted to find the candidate eye, nose and mouth regions. Based on the fact that the window region with higher value in the SVM discriminant function is relatively closer to the object, and the same object tends to be repeatedly detected by near windows, the candidate eye, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness to the facial features localization with expression and arbitrary face pose in complex background.",TRUE,noun
R11,Science,R50113,Falcon 2.0: An Entity and Relation Linking Tool over Wikidata,S153520,R50114,contains,R50094,Background,"The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.",TRUE,noun
R11,Science,R26498,A synergistic chlorhexidine/chitosan combination for improved antiplaque strategies,S83341,R26499,Advantages,L52645,Bioadhesive,"BACKGROUND The minor efficacy of chlorhexidine (CHX) on other cariogenic bacteria than mutans streptococci such as Streptococcus sanguinis may contribute to uneffective antiplaque strategies. METHODS AND RESULTS In addition to CHX (0.1%) as positive control and saline as negative control, two chitosan derivatives (0.2%) and their CHX combinations were applied to planktonic and attached sanguinis streptococci for 2 min. In a preclinical biofilm model, the bacteria suspended in human sterile saliva were allowed to attach to human enamel slides for 60 min under flow conditions mimicking human salivation. The efficacy of the test agents on streptococci was screened by the following parameters: vitality status, colony-forming units (CFU)/ml and cell density on enamel. The first combination reduced the bacterial vitality to approximately 0% and yielded a strong CFU reduction of 2-3 log(10) units, much stronger than CHX alone. Furthermore, the first chitosan derivative showed a significant decrease of the surface coverage with these treated streptococci after attachment to enamel. CONCLUSIONS Based on these results, a new CHX formulation would be beneficial unifying the bioadhesive properties of chitosan with the antibacterial activity of CHX synergistically resulting in a superior antiplaque effect than CHX alone.",TRUE,noun
R11,Science,R31364,Artificial neural networks to infer biomass and product concentration during the production of penicillin G acylase from Bacillus megaterium,S105195,R31365,Systems applied,R31356,Bioreactor,"BACKGROUND: Production of microbial enzymes in bioreactors is a complex process including such phenomena as metabolic networks and mass transport resistances. The use of neural networks (NNs) to infer the state of bioreactors may be an interesting option that may handle the nonlinear dynamics of biomass growth and protein production. RESULTS: Feedforward multilayer perceptron (MLP) NNs were used for identification of the cultivation phase of Bacillus megaterium to produce the enzyme penicillin G acylase (EC. 3.5.1.11). The following variables were used as input to the net: run time and carbon dioxide concentration in the exhausted gas. The NN output associates a numerical value to the metabolic state of the cultivation, close to 0 during the lag phase, close to 1 during the exponential phase and approximately 2 for the stationary phase. This is a non-conventional approach for pattern recognition. During the exponential phase, another MLP was used to infer cellular concentration. Time, carbon dioxide concentration and stirrer speed form an integrated net input vector. Cellular concentrations provided by the NN were used in a hybrid approach to estimate product concentrations of the enzyme. The model employed a first-order approximation. CONCLUSION: Results showed that the algorithm was able to infer accurate values of cellular and product concentrations up to the end of the exponential growth phase, where an industrial run should stop. Copyright © 2008 Society of Chemical Industry",TRUE,noun
R11,Science,R33824,Accuracy of CNV Detection from GWAS Data,S117299,R33825,Algorithm,R33804,Birdsuite,"Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites—Birdsuite, Partek, HelixTree, and PennCNV-Affy—in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two “gold standards,” the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs, especially common ones, need substantial improvement, and a “gold standard” for detection of CNVs remains to be established.",TRUE,noun
R11,Science,R25981,Document image segmentation and text area ordering,S80417,R26009,Logical Labels,L50802,body,"A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator, making it possible to get good results on various documents. The total system is fast and compact.",TRUE,noun
R11,Science,R28307,Schedule Design and Container Routing in Liner Shipping,S92681,R28308,Remarkable factor,R28306,bonus,"A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. The proposed solution method was applied to optimize the Asia–Europe–Oceania liner shipping services of a global liner company.",TRUE,noun
R11,Science,R31908,Cost estimate for biosynfuel production via biosyncrude gasification,S107891,R31909,Plant type,L64726,BtL,"Production of synthetic fuels from lignocellulose like wood or straw involves complex technology. Therefore, a large BTL (biomass to liquid) plant for biosynfuel production is more economic than many small facilities. A reasonable BTL-plant capacity is ≥1 Mt/a biosynfuel, similar to the already existing commercial CTL and GTL (coal to liquid, gas to liquid) plants of SASOL and SHELL, corresponding to at least 10% of the capacity of a modern oil refinery. BTL-plant cost estimates are therefore based on reported experience with CTL and GTL plants. Direct supply of large BTL plants with low bulk density biomass by trucks is limited by high transport costs and intolerable local traffic density. Biomass densification by liquefaction in a fast pyrolysis process generates a compact bioslurry or biopaste, also denoted as biosyncrude as produced by the bioliq® process. The densified biosyncrude intermediate can now be cheaply transported from many local facilities in silo wagons by electric rail over long distances to a large and more economic central biosynfuel plant. In addition to the capital expenditure (capex) for the large and complex central biosynfuel plant, a comparable investment effort is required for the construction of several dozen regional pyrolysis plants with simpler technology. Investment costs estimated for fast pyrolysis plants reported in the literature have been complemented by own studies for plants with ca. 100 MWth biomass input. The breakdown of BTL synfuel manufacturing costs of ca. 1 €/kg in central EU shows that about half of the costs are caused by the biofeedstock, including transport. This helps to generate new income for farmers. The other half is caused by technical costs, which are about proportional to the total capital investment (TCI) for the pyrolysis and biosynfuel production plants. Labor is a minor contribution in the relatively large facilities. © 2009 Society of Chemical Industry and John Wiley & Sons, Ltd",TRUE,noun
R11,Science,R30805,Bi-objective stochastic programming models for determining depot locations in disaster relief operations,S104128,R31089,Decisions First-stage,R30910,budget,"This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.",TRUE,noun
R11,Science,R151254,"CAMPUS EMERGENCY NOTIFICATION SYSTEMS: AN EXAMINATION OF FACTORS AFFECTING COMPLIANCE WITH ALERTS",S626520,R156076,Focus Group,L431219,Campus,"The increasing number of campus-related emergency incidents, in combination with the requirements imposed by the Clery Act, has prompted college campuses to develop emergency notification systems to inform community members of extreme events that may affect them. Merely deploying emergency notification systems on college campuses, however, does not guarantee that these systems will be effective; student compliance plays a very important role in establishing such effectiveness. Immediate compliance with alerts, as opposed to delayed compliance or noncompliance, is a key factor in improving student safety on campuses. This paper investigates the critical antecedents that motivate students to comply immediately with messages from campus emergency notification systems. Drawing on Etzioni's compliance theory, a model is developed. Using a scenario-based survey method, the model is tested in five types of events--snowstorm, active shooter, building fire, health-related, and robbery--and with more than 800 college students from the Northern region of the United States. The results from this study suggest that subjective norm and information quality trust are, in general, the most important factors that promote immediate compliance. This research contributes to the literature on compliance, emergency notification systems, and emergency response policies.",TRUE,noun
R11,Science,R151288,"Factors impacting the intention to use emergency notification services in campus emergencies: an empirical investigation",S616830,R153901,Focus Group,L425393,Campus,"Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.",TRUE,noun
R11,Science,R25981,Document image segmentation and text area ordering,S80418,R26009,Logical Labels,L50803,caption,"A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator, making it possible to get good results on various documents. The total system is fast and compact.",TRUE,noun
R11,Science,R32233,Fungicidal Activity of Artemisia herba alba Asso (Asteraceae),S109611,R32234,Main components (P5%),R32231,Carvone,"The antifungal activity of Artemisia herba alba was found to be associated with two major volatile compounds isolated from the fresh leaves of the plant. Carvone and piperitone were isolated and identified by GC/MS, GC/IR, and NMR spectroscopy. Antifungal activity was measured against Penicillium citrinum (ATCC 10499) and Mucora rouxii (ATCC 24905). The antifungal activity (IC50) of the purified compounds was estimated to be 5 μ g/ml, 2 μ g/ml against Penicillium citrinum and 7 μ g/ml, 1.5 μ g/ml against Mucora rouxii carvone and piperitone, respectively.",TRUE,noun
R11,Science,R27505,An investigation of cointegration and causality between energy consumption and economic growth,S89155,R27506,Methodology,R27493,Cointegration,"This paper reexamines the causality between energy consumption and economic growth with both bivariate and multivariate models by applying the recently developed methods of cointegration and Hsiao`s version of the Granger causality to transformed U.S. data for the period 1947-1990. The Phillips-Perron (PP) tests reveal that the original series are not stationary and, therefore, a first differencing is performed to secure stationarity. The study finds no causal linkages between energy consumption and economic growth. Energy and gross national product (GNP) each live a life of its own. The results of this article are consistent with some of the past studies that find no relationship between energy and GNP but are contrary to some other studies that find GNP unidirectionally causes energy consumption. Both the bivariate and trivariate models produce the similar results. We also find that there is no causal relationship between energy consumption and industrial production. The United States is basically a service-oriented economy and changes in energy consumption can cause little or no changes in GNP. In other words, an implementation of energy conservation policy may not impair economic growth. 27 refs., 5 tabs.",TRUE,noun
R11,Science,R27518,Causality between energy consumption and economic growth in India: an application of cointegration and error-correction modeling,S89206,R27519,Methodology,R27493,Cointegration,"Applying the Johansen cointegration test, this study finds that energy consumption, economic growth, capital and labour are cointegrated. However, this study detects no causality from energy consumption to economic growth using Hsiao's version of the Granger causality method with the aid of cointegration and error correction modelling. Interestingly, it is discerned that causality runs from economic growth to energy consumption both in the short run and in the long run and causality flows from capital to economic growth in the short run.",TRUE,noun
R11,Science,R29907,"A panel estimation of the relationship between trade liberalization, economic growth and CO2 emissions in BRICS countries",S99231,R29908,Methodology,R27493,Cointegration,"In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emission, growth economics and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, growth economics and trade liberalization by applying Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of studied countries. Our estimation shows that the inflection point or optimal point real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. 
Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels",TRUE,noun
R11,Science,R34354,Fatal Clostridium difficile enteritis after total abdominal colectomy,S119583,R34355,Intestinal operation,R34346,Colectomy,"A 71-year-old man who had undergone an ileorectal anastomosis some years earlier, developed fulminant fatal Clostridium difficile pseudomembranous enteritis and proctitis after a prostatectomy. This case and three reports of C. difficile involvement of the small bowel in adults emphasize that the small intestine can be affected. No case like ours, of enteritis after colectomy from C. difficile, has hitherto been reported.",TRUE,noun
R11,Science,R34374,Clostridium difficile small bowel enteritis occurring after total colectomy,S119683,R34375,Intestinal operation,R34346,Colectomy,Clostridium difficile infection is usually associated with antibiotic therapy and is almost always limited to the colonic mucosa. Small bowel enteritis is rare: only 9 cases have been previously cited in the literature. This report describes a case of C. difficile small bowel enteritis that occurred in a patient after total colectomy and reviews the 9 previously reported cases of C. difficile enteritis.,TRUE,noun
R11,Science,R34394,Fulminant small bowel enteritis: a rare complication of Clostridium difficile-associated disease,S119826,R34395,Intestinal operation,R34346,Colectomy,"To the Editor: A 54-year-old male was admitted to a community hospital with a 3-month history of diarrhea up to 8 times a day associated with bloody bowel motions and weight loss of 6 kg. He had no past medical history or family history of note. A clinical diagnosis of colitis was made and the patient underwent a limited colonoscopy which demonstrated continuous mucosal inflammation and ulceration that was most marked in the rectum. The clinical and endoscopic findings were suggestive of acute ulcerative colitis (UC), which was subsequently supported by histopathology. The patient was managed with bowel rest and intravenous steroids. However, he developed toxic megacolon on day 4 of his admission and underwent a total colectomy with end ileostomy. On the third postoperative day the patient developed a pyrexia of 39°C, a septic screen was performed, and the central venous line (CVP) was changed with the tip culturing methicillin-resistant Staphylococcus aureus (MRSA). Intravenous gentamycin was commenced and discontinued after 5 days, with the patient remaining afebrile and stable. On the tenth postoperative day the patient became tachycardic (pulse 110/min), diaphoretic (temperature of 39.4°C), hypotensive (diastolic of 60 mm Hg), and with a high volume nasogastric aspirates noted (2000 mL). A diagnosis of septic shock was considered although the etiology was unclear. The patient was resuscitated with intravenous fluids and transferred to the regional surgical unit for Intensive Care Unit monitoring and management. A computed tomography (CT) of the abdomen showed a marked inflammatory process with bowel wall thickening along the entire small bowel with possible intramural air, raising the suggestion of ischemic bowel (Fig. 1). 
However, on clinical assessment the patient elicited no signs of peritonism, his vitals were stable, he was not acidotic (pH 7.40), urine output was adequate, and his blood pressure was being maintained without inotropic support. Furthermore, his ileostomy appeared healthy and well perfused, although a high volume (2500 mL in the previous 18 hours), malodorous output was noted. A sample of the stoma output was sent for microbiological analysis. Given that the patient was not exhibiting evidence of peritonitis with normal vital signs, a conservative policy of fluid resuscitation was pursued with plans for exploratory laparotomy if he disimproved. Ileostomy output sent for microbiology assessment was positive for Clostridium difficile toxin A and B utilizing culture and enzyme immunoassays (EIA). Intravenous vancomycin, metronidazole, and rifampicin via a nasogastric tube were commenced in conjunction with bowel rest and total parenteral nutrition. The ileostomy output reduced markedly within 2 days and the patient’s clinical condition improved. Follow-up culture of the ileostomy output was negative for C. difficile toxins. The patient was discharged in good health on full oral diet 12 days following transfer. Review of histopathology relating to the resected colon and subsequent endoscopic assessment of the retained rectum confirmed the initial diagnosis of UC, rather than a primary diagnosis of pseudomembranous colitis. Clostridium difficile is the leading cause of nosocomial diarrhea associated with antibiotic therapy and is almost always limited to the colonic mucosa.1 Small bowel enteritis secondary to C. difficile is exceedingly rare, with only 21 previous cases cited in the literature.2,3 Of this cohort, 18 patients had a surgical procedure at some timepoint prior to the development of C. difficile enteritis, while the remaining 3 patients had no surgical procedure prior to the infection. 
The time span between surgery and the development of enteritis ranged from 4 days to 31 years. Antibiotic therapy predisposed to the development of C. difficile enteritis in 20 of the cases. A majority of the patients (n = 11) had a history of inflammatory bowel disease (IBD), with 8 having UC similar to our patient and the remaining 3 patients having a history of Crohn’s disease. The etiology of small bowel enteritis remains unclear. C. difficile has been successfully isolated from the small bowel in both autopsy specimens and from jejunal aspirate of patients with chronic diarrhea, suggesting that the small bowel may act as a reservoir for C. difficile.4 This would suggest that C. difficile could become pathogenic in the small bowel following a disruption in the small bowel flora in the setting of antibiotic therapy. This would be supported by the observation that the majority of cases reported occurred within 90 days of surgery with attendant disruption of bowel function. The prevalence of C. difficile-associated disease (CDAD) in patients with IBD is increasing. Issa et al5 examined the impact of CDAD in a cohort of patients with IBD. They found that more than half of the patients with a positive culture for C. difficile were admitted and 20% required a colectomy. They reported that maintenance immunomodulator use and colonic involvement were independent risk factors for C. difficile infection in patients with IBD. The rising incidence of C. difficile in patients with IBD coupled with the use of increasingly potent immunomodulatory therapies means that clinicians must have a high index of suspicion.",TRUE,noun
R11,Science,R153003,Mobile text alerts are an effective way of communicating emergency information to adolescents: Results from focus groups with 12- to 18-year-olds,S626776,R156108,paper: Theory / Construct / Model,L431443,Compliance,Mobile phone text messages can be used to disseminate information and advice to the public in disasters. We sought to identify factors influencing how adolescents would respond to receiving emergency text messages. Qualitative interviews were conducted with participants aged 12–18 years. Participants discussed scenarios relating to flooding and the discovery of an unexploded World War Two bomb and were shown example alerts that might be sent out in these circumstances. Intended compliance with the alerts was high. Participants noted that compliance would be more likely if: they were familiar with the system; the messages were sent by a trusted source; messages were reserved for serious incidents; multiple messages were sent; messages were kept short and formal.,TRUE,noun
R11,Science,R25991,Logical structure analysis of book document images using contents information,S80483,R26014,Logical Labels,L50858,content,"Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.",TRUE,noun
R11,Science,R50503,Estimating the Information Gap between Textual and Visual Representations,S154356,R50505,contains,R50491,Contribution,"Photos, drawings, figures, etc. supplement textual information in various kinds of media, for example, in web news or scientific publications. In this respect, the intended effect of an image can be quite different, e.g., providing additional information, focusing on certain details of surrounding text, or simply being a general illustration of a topic. As a consequence, the semantic correlation between information of different modalities can vary noticeably, too. Moreover, cross-modal interrelations are often hard to describe in a precise way. The variety of possible interrelations of textual and graphical information and the question, how they can be described and automatically estimated have not been addressed yet by previous work. In this paper, we present several contributions to close this gap. First, we introduce two measures to describe cross-modal interrelations: cross-modal mutual information (CMI) and semantic correlation (SC). Second, a novel approach relying on deep learning is suggested to estimate CMI and SC of textual and visual information. Third, three diverse datasets are leveraged to learn an appropriate deep neural network model for the demanding task. The system has been evaluated on a challenging test set and the experimental results demonstrate the feasibility of the approach.",TRUE,noun
R11,Science,R25855,Crystal-Facet Effect of γ-Al2O3 on Supporting CrOx for Catalytic Semihydrogenation of Acetylene,S79222,R25856,catalysts,L49829,CrOx/(110)γ-Al2O3,"With the successful preparation of γ-alumina with high-energy external surfaces such as {111} facets, the crystal-facet effect of γ-Al2O3 on surface-loaded CrOx has been explored for semihydrogenation of acetylene. Our results indeed demonstrate that the harmonious interaction of CrOx with traditional γ-Al2O3, the external surfaces of which are typically low-energy{110} facets, has caused a highly efficient performance for semihydrogenation of acetylene over CrOx/(110)γ-Al2O3 catalyst, whereas the activity of the CrOx/(111)γ-Al2O3 catalyst for acetylene hydrogenation is suppressed dramatically due to the limited formation of active Cr species, restrained by the high-energy {111} facets of γ-Al2O3. Furthermore, the use of inexpensive CrOx as the active component for semihydrogenation of acetylene is an economically friendly alternative relative to commercial precious Pd catalysts. This work sheds light on a strategy for exploiting the crystal-facet effect of the supports to purposefully tailor the catalyti...",TRUE,noun
R11,Science,R29637,Environmental Kuznets curve for CO2 in Canada,S98442,R29638,Power of income,R29362,Cubic,"The environmental Kuznets curve hypothesis is a theory by which the relationship between per capita GDP and per capita pollutant emissions has an inverted U shape. This implies that, past a certain point, economic growth may actually be profitable for environmental quality. Most studies on this subject are based on estimating fully parametric quadratic or cubic regression models. While this is not technically wrong, such an approach somewhat lacks flexibility since it may fail to detect the true shape of the relationship if it happens not to be of the specified form. We use semiparametric and flexible nonlinear parametric modelling methods in an attempt to provide more robust inferences. We find little evidence in favour of the environmental Kuznets curve hypothesis. Our main results could be interpreted as indicating that the oil shock of the 1970s has had an important impact on progress towards less polluting technology and production.",TRUE,noun
R11,Science,R29711,A panel data heterogeneous Bayesian estimation of environmental Kuznets curves for CO2emissions,S98595,R29712,Power of income,R29362,Cubic,"This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated to monotonic income–CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.",TRUE,noun
R11,Science,R30042,GREENHOUSE GASES EMISSIONS AND ECONOMIC GROWTH – EVIDENCE SUBSTANTIATING THE PRESENCE OF ENVIRONMENTAL KUZNETS CURVE IN THE EU,S99596,R30043,Power of income,R29362,Cubic,"AbstractThe paper considers the relationship between greenhouse gas emissions (GHG) as the main variable of climate change and gross domestic product (GDP), using the environmental Kuznets curve (EKC) technique. At early stages of economic growth, EKC indicates the increase of pollution related to the growing use of resources. However, when a certain level of income per capita is reached, the trend reverses and at a higher stage of development, further economic growth leads to improvement of the environment. According to the researchers, this implies that the environmental impact indicator is an inverted U-shaped function of income per capita. In this paper, the cubic equation is used to empirically check the validity of the EKC relationship for European countries. The analysis is based on the survey of EU-27, Norway and Switzerland in the period of 1995–2010. The data is taken from the Eurostat database. To gain some insights into the environmental trends in each country, the article highlights the speci...",TRUE,noun
R11,Science,R50025,Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network,S153322,R50027,contains,R50023,Data,"Diabetes Mellitus (DM) is a chronic, progressive and life-threatening disease. The ocular manifestations of DM, Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME), are the leading causes of blindness in the adult population throughout the world. Early diagnosis of DR and DM through screening tests and successive treatments can reduce the threat to visual acuity. In this context, we propose an encoder decoder based semantic segmentation network SOP-Net (Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network) for simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms). The proposed semantic segmentation framework is capable of providing segmentation results at pixel-level with good localization of object boundaries. SOP-Net has been trained and tested on IDRiD dataset which is publicly available with pixel level annotations of retinal pathologies. The network achieved average accuracies of 98.98%, 90.46%, 96.79%, and 96.70% for segmentation of hard exudates, soft exudates, hemorrhages, and microaneurysms. The proposed methodology has the capability to be used in developing a diagnostic system for organizing large scale ophthalmic screening programs.",TRUE,noun
R11,Science,R50113,Falcon 2.0: An Entity and Relation Linking Tool over Wikidata,S153529,R50114,contains,R50103,Data,"The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.",TRUE,noun
R11,Science,R50227,OER Recommendations to Support Career Development,S153759,R50228,contains,R50214,Data,"This Work in Progress Research paper departs from the recent, turbulent changes in global societies, forcing many citizens to re-skill themselves to (re)gain employment. Learners therefore need to be equipped with skills to be autonomous and strategic about their own skill development. Subsequently, high-quality, on-line, personalized educational content and services are also essential to serve this high demand for learning content. Open Educational Resources (OERs) have high potential to contribute to the mitigation of these problems, as they are available in a wide range of learning and occupational contexts globally. However, their applicability has been limited, due to low metadata quality and complex quality control. These issues resulted in a lack of personalised OER functions, like recommendation and search. Therefore, we suggest a novel, personalised OER recommendation method to match skill development targets with open learning content. This is done by: 1) using an OER quality prediction model based on metadata, OER properties, and content; 2) supporting learners to set individual skill targets based on actual labour market information, and 3) building a personalized OER recommender to help learners to master their skill targets. Accordingly, we built a prototype focusing on Data Science related jobs, and evaluated this prototype with 23 data scientists in different expertise levels. Pilot participants used our prototype for at least 30 minutes and commented on each of the recommended OERs. As a result, more than 400 recommendations were generated and 80.9% of the recommendations were reported as useful.",TRUE,noun
R11,Science,R50289,Message Passing for Hyper-Relational Knowledge Graphs,S153877,R50290,contains,R50277,Data,"Hyper-relational knowledge graphs (KGs) (e.g., Wikidata) enable associating additional key-value pairs along with the main triple to disambiguate, or restrict the validity of a fact. In this work, we propose a message passing based graph encoder - StarE capable of modeling such hyper-relational KGs. Unlike existing approaches, StarE can encode an arbitrary number of additional information (qualifiers) along with the main triple while keeping the semantic roles of qualifiers and triples intact. We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws and thus develop a new Wikidata-based dataset - WD50K. Our experiments demonstrate that StarE based LP model outperforms existing approaches across multiple benchmarks. We also confirm that leveraging qualifiers is vital for link prediction with gains up to 25 MRR points compared to triple-based representations.",TRUE,noun
R11,Science,R28946,Identifying Software Project Risks: An International Delphi Study,S95577,R28950,Research method & question set,R28945,Delphi,"Advocates of software risk management claim that by identifying and analyzing threats to success (i.e., risks) action can be taken to reduce the chance of failure of a project. The first step in the risk management process is to identify the risk itself, so that appropriate countermeasures can be taken. One problem in this task, however, is that no validated lists are available to help the project manager understand the nature and types of risks typically faced in a software project. This paper represents a first step toward alleviating this problem by developing an authoritative list of common risk factors. We deploy a rigorous data collection method called a ""ranking-type"" Delphi survey to produce a rank-order list of risk factors. This data collection method is designed to elicit and organize opinions of a panel of experts through iterative, controlled feedback. Three simultaneous surveys were conducted in three different settings: Hong Kong, Finland, and the United States. This was done to broaden our view of the types of risks, rather than relying on the view of a single culture-an aspect that has been ignored in past risk management research. In forming the three panels, we recruited experienced project managers in each country. The paper presents the obtained risk factor list, compares it with other published risk factor lists for completeness and variation, and analyzes common features and differences in risk factor rankings in the three countries. We conclude by discussing implications of our findings for both research and improving risk management practice.",TRUE,noun
R11,Science,R28965,Prioritizing Clinical Information System Project Risk Factors: A Delphi Study,S95624,R28966,Research method & question set,R28945,Delphi,"Identifying the risks associated with the implementation of clinical information systems (CIS) in health care organizations can be a major challenge for managers, clinicians, and IT specialists, as there are numerous ways in which they can be described and categorized. Risks vary in nature, severity, and consequence, so it is important that those considered to be high-level risks be identified, understood, and managed. This study addresses this issue by first reviewing the extant literature on IT/CIS project risks, and second conducting a Delphi survey among 21 experts highly involved in CIS projects in Canada. In addition to providing a comprehensive list of risk factors and their relative importance, this study is helpful in unifying the literature on IT implementation and health informatics. Our risk factor-oriented research actually confirmed many of the factors found to be important in both these streams.",TRUE,noun
R11,Science,R30745,Facility location in humanitarian relief,S103851,R30964,Uncertainty on the first-stage,R30951,Demand,"In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model.",TRUE,noun
R11,Science,R30800,Stochastic network design for disaster preparedness,S104049,R31053,Uncertainty on the first-stage,R30951,Demand,"This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale–Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale–Hoffman inequalities and then proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method.",TRUE,noun
R11,Science,R30811,Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty,S104077,R31066,Uncertainty on the first-stage,R30951,Demand,"This paper proposes a new two-stage optimization method for emergency supplies allocation problem with multisupplier, multiaffected area, multirelief, and multivehicle. The triplet of supply, demand, and the availability of path is unknown prior to the extraordinary event and is descriptive with fuzzy random variable. Considering the fairness, timeliness, and economical efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of proposed model aim to minimize the proportion of demand nonsatisfied and response time of emergency reliefs and the total cost of the whole process. When the demand and the availability of path are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed to its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method.",TRUE,noun
R11,Science,R151266,"Social Media and Disasters: A Functional Framework for Social Media Use in Disaster Planning, Response, and Research",S626585,R156082,paper:publised_in,L431278,Disasters,"A comprehensive review of online, official, and scientific literature was carried out in 2012-13 to develop a framework of disaster social media. This framework can be used to facilitate the creation of disaster social media tools, the formulation of disaster social media implementation processes, and the scientific study of disaster social media effects. Disaster social media users in the framework include communities, government, individuals, organisations, and media outlets. Fifteen distinct disaster social media uses were identified, ranging from preparing and receiving disaster preparedness information and warnings and signalling and detecting disasters prior to an event to (re)connecting community members following a disaster. The framework illustrates that a variety of entities may utilise and produce disaster social media content. Consequently, disaster social media use can be conceptualised as occurring at a number of levels, even within the same disaster. Suggestions are provided on how the proposed framework can inform future disaster social media development and research.",TRUE,noun
R11,Science,R151242,Design of a Resilient Information System for Disaster Response,S626480,R156070,Emergency Type,L431185,earthquake,"The devastating 2011 Great East Japan Earthquake made people aware of the importance of Information and Communication Technology (ICT) for sustaining life during and soon after a disaster. The difficulty in recovering information systems, because of the failure of ICT, hindered all recovery processes. The paper explores ways to make information systems resilient in disaster situations. Resilience is defined as quickly regaining essential capabilities to perform critical post disaster missions and to smoothly return to fully stable operations thereafter. From case studies and the literature, we propose that a frugal IS design that allows creative responses will make information systems resilient in disaster situations. A three-stage model based on a chronological sequence was employed in structuring the proposed design principles.",TRUE,noun
R11,Science,R27777,Computer games for the math achievement of diverse students,S90463,R27778,Educational context,R27724,Elementary,"Introduction As a way to improve student academic performance, educators have begun paying special attention to computer games (Gee, 2005; Oblinger, 2006). Reflecting the interests of the educators, studies have been conducted to explore the effects of computer games on student achievement. However, there has been no consensus on the effects of computer games: Some studies support computer games as educational resources to promote students' learning (Annetta, Mangrum, Holmes, Collazo, & Cheng, 2009; Vogel et al., 2006). Other studies have found no significant effects on the students' performance in school, especially in math achievement of elementary school students (Ke, 2008). Researchers have also been interested in the differential effects of computer games between gender groups. While several studies have reported various gender differences in the preferences of computer games (Agosto, 2004; Kinzie & Joseph, 2008), a few studies have indicated no significant differential effect of computer games between genders and asserted generic benefits for both genders (Vogel et al., 2006). To date, the studies examining computer games and gender interaction are far from conclusive. Moreover, there is a lack of empirical studies examining the differential effects of computer games on the academic performance of diverse learners. These learners included linguistic minority students who speak languages other than English. Recent trends in the K-12 population feature the increasing enrollment of linguistic minority students, whose population reached almost four million (NCES, 2004). These students have been a grieve concern for American educators because of their reported low performance. In response, this study empirically examined the effects of math computer games on the math performance of 4th-graders with focused attention on differential effects for gender and linguistic groups. To achieve greater generalizability of the study findings, the study utilized a US nationally representative database--the 2005 National Assessment of Educational Progress (NAEP). The following research questions guided the current study: 1. Are computer games in math classes associated with the 4th-grade students' math performance? 2. How does the relationship differ by linguistic group? 3. How does the association vary by gender? 4. Is there an interaction effect of computer games on linguistic and gender groups? In other words, how does the effect of computer games on linguistic groups vary by gender group? Literature review Academic performance and computer games According DeBell and Chapman (2004), of 58,273,000 students of nursery and K-12 school age in the USA, 56% of students played computer games. Along with the popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game would turn out to be good for learning when the game is built to incorporate learning principles. Some researchers have also supported the potential of games for affective domains of learning and fostering a positive attitude towards learning (Ke, 2008; Ke & Grabowski, 2007; Vogel et al., 2006). For example, based on the study conducted on 1,274 1st- and 2nd-graders, Rosas et al. (2003) found a positive effect of educational games on the motivation of students. Although there is overall support for the idea that games have a positive effect on affective aspects of learning, there have been mixed research results regarding the role of games in promoting cognitive gains and academic achievement. In the meta-analysis, Vogel et al. (2006) examined 32 empirical studies and concluded that the inclusion of games for students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. …",TRUE,noun
R11,Science,R27804,Outdoor natural science learning with an RFID-supported immersive ubiquitous learning environment,S90598,R27805,Educational context,R27724,Elementary,"Despite their successful use in many conscientious studies involving outdoor learning applications, mobile learning systems still have certain limitations. For instance, because students cannot obtain real-time, contextaware content in outdoor locations such as historical sites, endangered animal habitats, and geological landscapes, they are unable to search, collect, share, and edit information by using information technology. To address such concerns, this work proposes an environment of ubiquitous learning with educational resources (EULER) based on radio frequency identification (RFID), augmented reality (AR), the Internet, ubiquitous computing, embedded systems, and database technologies. EULER helps teachers deliver lessons on site and cultivate student competency in adopting information technology to improve learning. To evaluate its effectiveness, we used the proposed EULER for natural science learning at the Guandu Nature Park in Taiwan. The participants were elementary school teachers and students. The analytical results revealed that the proposed EULER improves student learning. Moreover, the largely positive feedback from a post-study survey confirms the effectiveness of EULER in supporting outdoor learning and its ability to attract the interest of students.",TRUE,noun
R11,Science,R29841,"An Econometric Analysis for CO2 Emissions, Energy Consumption, Economic Growth, Foreign Trade and Urbanization of Japan",S99017,R29842,Shape of EKC,R29590,emissions,"This paper examines the dynamic causal relationship between carbon dioxide emissions, energy consumption, economic growth, foreign trade and urbanization using time series data for the period of 1960-2009. Short-run unidirectional causalities are found from energy consumption and trade openness to carbon dioxide emissions, from trade openness to energy consumption, from carbon dioxide emissions to economic growth, and from economic growth to trade openness. The test results also support the evidence of existence of long-run relationship among the variables in the form of Equation (1) which also conform the results of bounds and Johansen conintegration tests. It is found that over time higher energy consumption in Japan gives rise to more carbon dioxide emissions as a result the environment will be polluted more. But in respect of economic growth, trade openness and urbanization the environmental quality is found to be normal good in the long-run.",TRUE,noun
R11,Science,R30260,The environmental Kuznets curve at different levels of economic development: a counterfactual quantile regression analysis for CO2 emissions,S100262,R30261,Shape of EKC,R29590,emissions,"This paper applies the quantile fixed effects technique in exploring the CO2 environmental Kuznets curve within two groups of economic development (OECD and Non-OECD countries) and six geographical regions West, East Europe, Latin America, East Asia, West Asia and Africa. A comparison of the findings resulting from the use of this technique with those of conventional fixed effects method reveals that the latter may depict a flawed summary of the prevailing income-emissions nexus depending on the conditional quantile examined. We also extend the Machado and Mata decomposition method to the Kuznets curve framework to explore the most important explanations for the CO2 emissions gap between OECD and Non-OECD countries. We find a statistically significant OECD-Non-OECD emissions gap and this contracts as we ascent the emissions distribution. The decomposition further reveals that there are non-income related factors working against the Non-OECD group's greening. We tentatively conclude that deliberate and systematic mitigation of current CO2 emissions in the Non-OECD group is required. JEL Classification: Q56, Q58.",TRUE,noun
R11,Science,R30611,Robust Facial Features Localization on Rotation Arbitrary Multi-View face in Complex Background,S101999,R30612,Challenges,R30581,expression,"Focused on facial features localization on multi-view face arbitrarily rotated in plane, a novel detection algorithm based improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. After the searching ranges of the facial features are determined, the crossing detection method which uses the brow-eye and nose-mouth features and the improved SVM detectors trained by large scale multi-view facial features examples is adopted to find the candidate eye, nose and mouth regions,. Based on the fact that the window region with higher value in the SVM discriminant function is relatively closer to the object, and the same object tends to be repeatedly detected by near windows, the candidate eyes, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness to the facial features localization with expression and arbitrary face pose in complex background.",TRUE,noun
R11,Science,R28988,Comprehensive database for facial expression analysis,S95766,R28989,Variations,L58643,expression,"Within the past decade, significant effort has occurred in developing methods of facial expression analysis. Because most investigators have used relatively limited data sets, the generalizability of these various methods remains unknown. We describe the problem space for facial expression analysis, which includes level of description, transitions among expressions, eliciting conditions, reliability and validity of training and test data, individual differences in subjects, head orientation and scene complexity image characteristics, and relation to non-verbal behavior. We then present the CMU-Pittsburgh AU-Coded Face Expression Image Database, which currently includes 2105 digitized image sequences from 182 adult subjects of varying ethnicity, performing multiple tokens of most primary FACS action units. This database is the most comprehensive testbed to date for comparative studies of facial expression analysis.",TRUE,noun
R11,Science,R28996,A high-resolution 3D dynamic facial expression database,S95836,R28997,Variations,L58701,expression,"Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.",TRUE,noun
R11,Science,R28998,"Annotated facial landmarks in the wild: A large-scale, real- world database for facial landmark localization",S95850,R28999,Variations,L58712,expression,"Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.",TRUE,noun
R11,Science,R29000,Localizing Parts of Faces Using a Consensus of Exemplars,S95869,R29001,Variations,L58728,expression,"We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.",TRUE,noun
R11,Science,R29008,The First Facial Landmark Tracking in-the-Wild Challenge: Benchmark and Results,S95943,R29009,Variations,L58790,expression,"Detection and tracking of faces in image sequences is among the most well studied problems in the intersection of statistical machine learning and computer vision. Often, tracking and detection methodologies use a rigid representation to describe the facial region 1, hence they can neither capture nor exploit the non-rigid facial deformations, which are crucial for countless of applications (e.g., facial expression analysis, facial motion capture, high-performance face recognition etc.). Usually, the non-rigid deformations are captured by locating and tracking the position of a set of fiducial facial landmarks (e.g., eyes, nose, mouth etc.). Recently, we witnessed a burst of research in automatic facial landmark localisation in static imagery. This is partly attributed to the availability of large amount of annotated data, many of which have been provided by the first facial landmark localisation challenge (also known as 300-W challenge). Even though now well established benchmarks exist for facial landmark localisation in static imagery, to the best of our knowledge, there is no established benchmark for assessing the performance of facial landmark tracking methodologies, containing an adequate number of annotated face videos. In conjunction with ICCV'2015 we run the first competition/challenge on facial landmark tracking in long-term videos. In this paper, we present the first benchmark for long-term facial landmark tracking, containing currently over 110 annotated videos, and we summarise the results of the competition.",TRUE,noun
R11,Science,R29010,Robust Face Landmark Estimation under Occlusion,S95961,R29011,Variations,L58805,expression,"Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.",TRUE,noun
R11,Science,R31723,Design patterns and fault-proneness a study of commercial C# software,S106759,R31724,Quality attribute,L63892,Faults,"In this paper, we document a study of design patterns in commercial, proprietary software and determine whether design pattern participants (i.e. the constituent classes of a pattern) had a greater propensity for faults than non-participants. We studied a commercial software system for a 24 month period and identified design pattern participants by inspecting the design documentation and source code; we also extracted fault data for the same period to determine whether those participant classes were more fault-prone than non-participant classes. Results showed that design pattern participant classes were marginally more fault-prone than non-participant classes, The Adaptor, Method and Singleton patterns were found to be the most fault-prone of thirteen patterns explored. However, the primary reason for this fault-proneness was the propensity of design classes to be changed more often than non-design pattern classes.",TRUE,noun
R11,Science,R151256,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011",S626526,R156077,Emergency Type,L431224,Flood,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,noun
R11,Science,R153575,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011.",S616713,R153885,Emergency Type,L425292,Flood,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,noun
R11,Science,R151302,"Digitally enabled disaster response: the emergence of social media as boundary objects in a flooding disaster",S626680,R156098,Emergency Type,L431357,flooding,"In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 – one of the worst flooding disasters in more than 50 years that left the country severely impaired – this paper provides an in‐depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross‐boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.",TRUE,noun
R11,Science,R32427,"Unemployment Persistency, Over-education and the Employment Chances of the Less Educated",S110098,R32428,Population under analysis,R27732,General,"The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, are the main reasons for the poor employment chances of the less educated in periods with low general demand for labour.",TRUE,noun
R11,Science,R32451,The Social and Political Consequences of Overeducation,S110218,R32452,Population under analysis,R27732,General,"This study employs national survey data to estimate the extent of overeducation in the U.S. labor force and its impact on a variety of worker attitudes. Estimates are made of the extent of overeducation and its distribution among different categories of workers, according to sex, race, age, and class background. The effects of overeducation are examined in four areas of worker attitudes: job satisfaction, political leftism, political alienation, and stratification ideology. Evidence is found of significant effects of overeducation on job satisfaction and several aspects of stratification ideology. The magnitude of these effects is small, however, and they are concentrated almost exclusively among very highly overeducated workers. No evidence is found of generalized political effects of overeducation, either in the form of increased political leftism or in the form of increased political alienation. These findings fail to support the common prediction of major political repercussions of overeducation and suggest the likelihood of alternative forms of adaptation among overeducated workers.",TRUE,noun
R11,Science,R32475,«Recruitment of Overeducated Personnel: Insider-Outsider Effects on Fair Employee Selection Practice,S110351,R32476,Population under analysis,R27732,General,"Abstract Fair employment policies constrain employee selection: specifically, applicants’ professional experience can be a substitute for formal education. However, reflecting firm-specific job requirements, this substitution rule applies less strictly to applicants from outside the firm. Further, setting low educational job requirements decreases the risk of disparate impact charges. Using data from a large US public employer, we show that successful outsider candidates exhibit higher levels of formal education than insiders. Also, this gap in educational attainments between outsiders and insiders widens with lower advertised degree requirements. More generally, we find strong insider–outsider effects on hiring decisions.",TRUE,noun
R11,Science,R32505,Gender Differences in Overeducation: A Test of the Theory of Differential Overqualification,S110521,R32506,Population under analysis,R27732,General,"There is little question that substantial labor market differences exist between men and women. Among the most researched difference is the male-female wage gap. Many different theories are used to explain why men earn more than women. One possible reason is based on the limited geographic mobility of married women (Robert Frank, 1978). Family mobility is a joint decision in which the needs of the husband and wife are balanced to maximize family welfare. Job-motivated relocations are generally made to benefit the primary earner in the family. This leads to a constrained job search for the secondary earner, as he or she must search for a job in a limited geographic area. Since the husband is still the primary wage earner in many families, the job search of the wife may suffer. Individuals who are tied to a certain area are labeled ""tied-stayers,"" while secondary earners who move for the benefit of the family are labeled ""tied-movers"" (Jacob Mincer, 1978). The wages of a tied-stayer or tied-mover may not be substantially lower if the family lives in or moves to a large city. If a large labor market has more vacancies, the wife may locate a wage offer near the maximum she would find with a nationwide job search. However, being a tied-stayer or tied-mover can lower the wife's wage if the family lives in or moves to a small community. A small labor market will reduce the likelihood of her finding a job that utilizes her skills. As a result she may accept a job for which she is overqualified and thus earn a lower wage.' This hypothesized relationship between the likelihood of being overqualified and SMSA size is termed ""differential overqualification."" Frank (1978) and Haim Ofek and Yesook Merrill (1994) provide support for the theory of differential overqualification by finding that the male-female wage gap is greater in smaller SMSA's. While the results are consistent with the existence of differential overqualification, they may also result from other situations as well. Firms in small labor markets may use their monopsony power to keep wages down.2 Local demand shocks are found to be a major source of wage variation both across and within local labor markets (Robert Topel, 1986). Since large labor markets are generally more diversified, a demand shock can have a substantial impact on immobile workers in small labor markets. Another reason for examining differential overqualification involves the assumption that there are more vacancies in large labor markets. While there is little doubt that more vacancies exist in large labor markets, there are also likely to be more people searching for jobs in large labor markets. If the greater number of vacancies is offset by the larger number of searchers, it is unclear whether women will be more likely to be overqualified in small labor markets. Instead of relying on wages to determine if differential overqualification exists, we consider an explicit form of overqualification based on education.",TRUE,noun
R11,Science,R32529,"Overeducation, Undereducation and the British Labour Market",S110657,R32530,Population under analysis,R27732,General,"This paper addresses the issue of overeducation and undereducation using for the first time a British dataset which contains explicit information on the level of required education to enter a job across the generality of occupations. Three key issues within the overeducation literature are addressed. First, what determines the existence of over and undereducation and to what extent are over and undereducation substitutes for experience, tenure and training? Second, to what extent are over and undereducation temporary or permanent phenomena? Third, what are the returns to over and undereducation and do certain stylized facts discovered for the US and a number of European countries hold for Britain?",TRUE,noun
R11,Science,R27753,International Evaluation of a Localized Geography Educational Software,S90358,R27754,Topic,R313,Geography,"A report on the implementation and evaluation of an intelligent learning system; the multimedia geography tutor and game software titled Lainos World SM was localized into English, French, Spanish, German, Portuguese, Russian and Simplified Chinese. Thereafter, multilingual online surveys were setup to which High school students were globally invited via mails to schools, targeted adverts and recruitment on Facebook, Google, etc. 1125 respondents from selected nations completed both the initial and final surveys. The effect of the software on students’ geographical knowledge was analyzed through pre and post achievement test scores. In general, the mean score were higher after exposure to the educational software for fifteen days and it was established that the score differences were statistically significant. This positive effect and other qualitative data show that the localized software from students’ perspective is a widely acceptable and effective educational tool for learning geography in an interactive and gaming environment..",TRUE,noun
R11,Science,R25706,Gephi: An Open Source Software for Exploring and Manipulating Networks,S77898,R25707,System,L48755,Gephi,"Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.",TRUE,noun
R11,Science,R32455,Measuring Over-education,S110232,R32456,Population under analysis,R32454,Graduates,"Previous work on over-education has assumed homogeneity of workers and jobs. Relaxing these assumptions, we find that over-educated workers have lower education credentials than matched graduates. Among the over-educated graduates we distinguish between the apparently over-educated workers, who have similar unobserved skills as matched graduates, and the genuinely over-educated workers, who have a much lower skill endowment. Over-education is associated with a pay penalty of 5%-11% for apparently over-educated workers compared with matched graduates and of 22%-26% for the genuinely over-educated. Over-education originates from the lack of skills of graduates. This should be taken into consideration in the current debate on the future of higher education in the UK. Copyright The London School of Economics and Political Science 2003.",TRUE,noun
R11,Science,R32457,«Overeducation and the Skills of UK Graduates»,S110245,R32458,Population under analysis,R32454,Graduates,"Summary. During the early 1990s the proportion of a cohort entering higher education in the UK doubled over a short period of time. The paper investigates the effect of the expansion on graduates’ early labour market attainment, focusing on overeducation. We define overeducation by combining occupation codes and a self‐reported measure for the appropriateness of the match between qualification and the job. We therefore define three groups of graduates: matched, apparently overeducated and genuinely overeducated. This measure is well correlated with alternative definitions of overeducation. Comparing pre‐ and post‐expansion cohorts of graduates, we find with this measure that the proportion of overeducated graduates has doubled, even though overeducation wage penalties have remained stable. We do not find that type of institution affects the probability of genuine overeducation. Apparently overeducated graduates are mostly indistinguishable from matched graduates, whereas genuinely overeducated graduates principally lack non‐academic skills and suffer a large wage penalty. Individual unobserved heterogeneity differs between the three groups of graduates but controlling for it does not alter these conclusions.",TRUE,noun
R11,Science,R25985,Knowledge-based derivation of document logical structure,S80446,R26011,Logical Labels,L50827,graph,"The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.",TRUE,noun
R11,Science,R34704,A Min-Min Max-Min selective algorithm for grid task scheduling,S121328,R34705,Tools used for simulation,R34703,Gridsim,"Today, the high cost of supercomputers in the one hand and the need for large-scale computational resources on the other hand, has led to use network of computational resources known as Grid. Numerous research groups in universities, research labs, and industries around the world are now working on a type of Grid called Computational Grids that enable aggregation of distributed resources for solving large-scale data intensive problems in science, engineering, and commerce. Several institutions and universities have started research and teaching programs on Grid computing as part of their parallel and distributed computing curriculum. To better use tremendous capabilities of this distributed system, effective and efficient scheduling algorithms are needed. In this paper, we introduce a new scheduling algorithm based on two conventional scheduling algorithms, Min-Min and Max-Min, to use their cons and at the same time, cover their pros. It selects between the two algorithms based on standard deviation of the expected completion time of tasks on resources. We evaluate our scheduling heuristic, the Selective algorithm, within a grid simulator called GridSim. We also compared our approach to its two basic heuristics. The experimental results show that the new heuristic can lead to significant performance gain for a variety of scenarios.",TRUE,noun
R11,Science,R153502,"THIS IS NOT A DRILL: Mobile Telephony, Information Verification, and Expressive Communication During Hawaii’s False Missile Alert",S626811,R156111,Country,L431475,Hawaii,"On Saturday, 13 January 2018, residents of Hawaii received a chilling message through their smartphones. It read, in all caps, BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL. The message was mistakenly sent, but many residents lived in a threatened state of mind for the 38 minutes it took before a retraction was made. This study is based on a survey of 418 people who experienced the alert, recollecting their immediate responses, including how they attempted to verify the alert and how they used their mobile devices and social media for expressive interactions during the alert period. With the ongoing testing in the United States of nationwide Wireless Emergency Alerts, along with similar expansions of these systems in other countries, the event in Hawaii serves to illuminate how people understand and respond to mobile-based alerts. It shows the extreme speed that information—including misinformation—can flow in an emergency, and, for many, expressive communication affects people’s reactions.",TRUE,noun
R11,Science,R25991,Logical structure analysis of book document images using contents information,S80487,R26014,Logical Labels,L50862,head-foot,"Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.",TRUE,noun
R11,Science,R25991,Logical structure analysis of book document images using contents information,S80482,R26014,Logical Labels,L50857,headline,"Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.",TRUE,noun
R11,Science,R33824,Accuracy of CNV Detection from GWAS Data,S117301,R33825,Algorithm,R33822,HelixTree,"Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites—Birdsuite, Partek, HelixTree, and PennCNV-Affy—in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two “gold standards,” the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a “gold standard” for detection of CNVs remains to be established.",TRUE,noun
R11,Science,R27764,"Mobile game-based learning in secondary education: engagement, motivation and learning in a mobile city game",S90407,R27765,Topic,R410,History,"Using mobile games in education combines situated and active learning with fun in a potentially excellent manner. The effects of a mobile city game called Frequency 1550, which was developed by The Waag Society to help pupils in their first year of secondary education playfully acquire historical knowledge of medieval Amsterdam, were investigated in terms of pupil engagement in the game, historical knowledge, and motivation for History in general and the topic of the Middle Ages in particular. A quasi-experimental design was used with 458 pupils from 20 classes from five schools. The pupils in 10 of the classes played the mobile history game whereas the pupils in the other 10 classes received a regular, project-based lesson series. The results showed those pupils who played the game to be engaged and to gain significantly more knowledge about medieval Amsterdam than those pupils who received regular project-based instruction. No significant differences were found between the two groups with respect to motivation for History or the Middle Ages. The impact of location-based technology and game-based learning on pupil knowledge and motivation are discussed along with suggestions for future research.",TRUE,noun
R11,Science,R26586,"Distributed clustering in ad-hoc sensor networks: a hybrid, energy-efficient approach",S85011,R26670,Clustering Process CH Election,R26630,Hybrid,"Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.",TRUE,noun
R11,Science,R32197,Chemical Composition of the Essential Oil ofArtemisia herba-albaAsso Grown in Algeria,S109503,R32198,Isolation Procedure,R32192,hydrodistillation,"Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and β-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.",TRUE,noun
R11,Science,R32215,Chemical composition of Algerian Artemisia herba-alba essential oils isolated by microwave and hydrodistillation,S109557,R32216,Isolation Procedure,R32192,hydrodistillation,"Abstract Isolation of the essential oil from Artemisia herba-alba collected in the North Sahara desert has been conducted by hydrodistillation (HD) and a microwave distillation process (MD). The chemical composition of the two oils was investigated by GC and GC/MS. In total, 94 constituents were identified. The main components were camphor (49.3 and 48.1% in HD and MD oils, respectively), 1,8-cineole (13.4–12.4%), borneol (7.3–7.1%), pinocarvone (5.6–5.5%), camphene (4.9–4.5%) and chrysanthenone (3.2–3.3%). In comparison with HD, MD allows one to obtain an oil in a very short time, with similar yields, comparable qualities and a substantial savings of energy.",TRUE,noun
R11,Science,R32277,APPLICATION OF ESSENTIAL OIL OF ARTEMISIA HERBA ALBA AS GREEN CORROSION INHIBITOR FOR STEEL IN 0.5 M H2SO4,S109713,R32278,Isolation Procedure,R32192,hydrodistillation,Essential oil from Artemisia herba alba (Art) was hydrodistilled and tested as corrosion inhibitor of steel in 0.5 M H2SO4 using weight loss measurements and electrochemical polarization methods. Results gathered show that this natural oil reduced the corrosion rate by the cathodic action. Its inhibition efficiency attains the maximum (74%) at 1 g/L. The inhibition efficiency of Arm oil increases with the rise of temperature. The adsorption isotherm of natural product on the steel has been determined. A. herba alba essential oil was obtained by hydrodistillation and its chemical composition oil was investigated by capillary GC and GC/MS. The major components were chrysanthenone (30.6%) and camphor (24.4%).,TRUE,noun
R11,Science,R32369,The essential oil from Artemisia herba-alba Asso cultivated in Arid Land (South Tunisia),S109929,R32370,Isolation Procedure,R32192,hydrodistillation,"Abstract Seedlings of Artemisia herba-alba Asso collected from Kirchaou area were transplanted in an experimental garden near the Institut des Régions Arides of Médenine (Tunisia). During three years, the aerials parts were harvested (three levels of cutting, 25%, 50% and 75% of the plant), at full blossom and during the vegetative stage. The essential oil was isolated by hydrodistillation and its chemical composition was determined by GC(RI) and 13C-NMR. With respect to the quantity of vegetable material and the yield of hydrodistillation, it appears that the best results were obtained for plants cut at 50% of their height and during the full blossom. The chemical composition of the essential oil was dominated by β-thujone, α-thujone, 1,8-cineole, camphor and trans-sabinyl acetate, irrespective of the level of cutting and the period of harvest. It remains similar to that of plants growing wild in the same area.",TRUE,noun
R11,Science,R32407,Chemical Variability ofArtemisia herba-albaAsso Growing Wild in Semi-arid and Arid Land (Tunisia),S110021,R32408,Isolation Procedure,R32192,hydrodistillation,"Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (α-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confirmed the tremendous chemical variability of A. herba-alba.",TRUE,noun
R11,Science,R32413,Chemical composition and biological activities of a new essential oil chemotype of Tunisian Artemisia herba alba Asso,S110052,R32414,Isolation Procedure,R32192,hydrodistillation,"The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air dried leaves and flowers of Aha were extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis -chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the α-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the β-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil was evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western of Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.",TRUE,noun
R11,Science,R32422,Chemical constituents and antioxidant activity of the essential oil from aerial parts of Artemisia herba-alba grown in Tunisian semi-arid region,S110072,R32423,Isolation Procedure,R32192,hydrodistillation,"Essential oils and their components are becoming increasingly popular as naturally occurring antioxidant agents. In this work, the composition of essential oil in Artemisia herba-alba from southwest Tunisia, obtained by hydrodistillation was determined by GC/MS. Eighteen compounds were identified with the main constituents namely, α-thujone (24.88%), germacrene D (14.48%), camphor (10.81%), 1,8-cineole (8.91%) and β-thujone (8.32%). The oil was screened for its antioxidant activity with 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, β-carotene bleaching and reducing power assays. The essential oil of A. herba-alba exhibited a good antioxidant activity with all assays with dose dependent manner and can be attributed to its presence in the oil. Key words: Artemisia herba alba, essential oil, chemical composition, antioxidant activity.",TRUE,noun
R11,Science,R28973,"Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition",S95980,R29015,Methods,L58819,HyperFace,"We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks.",TRUE,noun
R11,Science,R27238,Middleware for Robotics: A Survey,S87702,R27239,Standards and technologies used,R27236,Ice,"The field of robotics relies heavily on various technologies such as mechatronics, computing systems, and wireless communication. Given the fast growing technological progress in these fields, robots can offer a wide range of applications. However real world integration and application development for such a distributed system composed of many robotic modules and networked robotic devices is very difficult. Therefore, middleware services provide a novel approach offering many possibilities and drastically enhancing the application development for robots. This paper surveys the current state of middleware approaches in this domain. It discusses middleware challenges in these systems and presents some representative middleware solutions specifically designed for robots. The selection of the studied methods tries to cover most of the middleware platforms, objectives and approaches that have been proposed by researchers in this field.",TRUE,noun
R11,Science,R33815,Comparative analyses of seven algorithms for copy number variant identification from single nucleotide polymorphism arrays,S117256,R33816,Vendor,R9303,Illumina,"Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.",TRUE,noun
R11,Science,R33819,The Effect of Algorithms on Copy Number Variant Detection,S117281,R33820,Vendor,R9303,Illumina,"Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.",TRUE,noun
R11,Science,R33827,Assessment of copy number variation using the Illumina Infinium 1M SNP-array: A comparison of methodological approaches in the Spanish Bladder Cancer/EPICURO study,S117327,R33828,Vendor,R9303,Illumina,"High‐throughput single nucleotide polymorphism (SNP)‐array technologies allow to investigate copy number variants (CNVs) in genome‐wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation‐dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19–0.28). In contrast, specificity was very high (0.97–0.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome‐wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1–10, 2011. © 2011 Wiley‐Liss, Inc.",TRUE,noun
R11,Science,R28984,From few to many: illumination cone models for face recognition under variable lighting and pose,S95737,R28985,Variations,L58620,illumination,"We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.",TRUE,noun
R11,Science,R28984,From few to many: illumination cone models for face recognition under variable lighting and pose,S95741,R28985,Video (v)/image (i),L58624,Image,"We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.",TRUE,noun
R11,Science,R28994,Overview of the Face Recognition Grand Challenge,S95822,R28995,Video (v)/image (i),L58690,Image,"Over the last couple of years, face recognition researchers have been developing new techniques. These developments are being fueled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. Such advances hold the promise of reducing the error rate in face recognition systems by an order of magnitude over Face Recognition Vendor Test (FRVT) 2002 results. The face recognition grand challenge (FRGC) is designed to achieve this performance goal by presenting to researchers a six-experiment challenge problem along with data corpus of 50,000 images. The data consists of 3D scans and high resolution still imagery taken under controlled and uncontrolled conditions. This paper describes the challenge problem, data corpus, and presents baseline performance and preliminary results on natural statistics of facial imagery.",TRUE,noun
R11,Science,R28998,"Annotated facial landmarks in the wild: A large-scale, real- world database for facial landmark localization",S95856,R28999,Video (v)/image (i),L58718,Image,"Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.",TRUE,noun
R11,Science,R29000,Localizing Parts of Faces Using a Consensus of Exemplars,S95875,R29001,Video (v)/image (i),L58734,Image,"We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.",TRUE,noun
R11,Science,R29004,"Face detection, pose estimation, and landmark localization in the wild",S95909,R29005,Video (v)/image (i),L58762,Image,"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).",TRUE,noun
R11,Science,R29006,A Semi-automatic Methodology for Facial Landmark Annotation,S95928,R29007,Video (v)/image (i),L58778,Image,"Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why, the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated landmarks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/resources/facial-point-annotations/. Finally, we present experiments which verify the accuracy of produced annotations.",TRUE,noun
R11,Science,R29217,The Core Critical Success Factors in Implementation of Enterprise Resource Planning Systems,S96982,R29218,Period,R6302,Implementation,"The Implementation of Enterprise Resource Planning ERP systems require huge investments while ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or take longer than it was initially planned, while previous studies show that the aim of rapid implementation of such projects has not been successful and the failure of the fundamental goals in these projects have imposed huge amounts of costs on investors. Some of the major consequences are the reduction in demand for such products and the introduction of further skepticism to the managers and investors of ERP systems. In this regard, it is important to understand the factors determining success or failure of ERP implementation. The aim of this paper is to study the critical success factors CSFs in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors that are called ""core critical success factors"" are extracted from 62 published papers using the content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.",TRUE,noun
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115784,R33522,Critical success factors,R33510,incentives,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun
R11,Science,R29198,ERP implementation: a compilation and analysis of critical success factors,S96837,R29199,Types of literature reviews,R29110,Inductive,"Purpose – To explore the current literature base of critical success factors (CSFs) of ERP implementations, prepare a compilation, and identify any gaps that might exist.Design/methodology/approach – Hundreds of journals were searched using key terms identified in a preliminary literature review. Successive rounds of article abstract reviews resulted in 45 articles being selected for the compilation. CSF constructs were then identified using content analysis methodology and an inductive coding technique. A subsequent critical analysis identified gaps in the literature base.Findings – The most significant finding is the lack of research that has focused on the identification of CSFs from the perspectives of key stakeholders. Additionally, there appears to be much variance with respect to what exactly is encompassed by change management, one of the most widely cited CSFs, and little detail of specific implementation tactics.Research limitations/implications – There is a need to focus future research efforts...",TRUE,noun
R11,Science,R70539,"Development of an infection screening system for entry inspection at airport quarantine stations using ear temperature, heart and respiration rates",S335763,R70570,Infection,L242576,Influenza,"After the outbreak of severe acute respiratory syndrome (SARS) in 2003, many international airport quarantine stations conducted fever-based screening to identify infected passengers using infrared thermography for preventing global pandemics. Due to environmental factors affecting measurement of facial skin temperature with thermography, some previous studies revealed the limits of authenticity in detecting infectious symptoms. In order to implement more strict entry screening in the epidemic seasons of emerging infectious diseases, we developed an infection screening system for airport quarantines using multi-parameter vital signs. This system can automatically detect infected individuals within several tens of seconds by a neural-network-based discriminant function using measured vital signs, i.e., heart rate obtained by a reflective photo sensor, respiration rate determined by a 10-GHz non-contact respiration radar, and the ear temperature monitored by a thermography. In this paper, to reduce the environmental effects on thermography measurement, we adopted the ear temperature as a new screening indicator instead of facial skin. We tested the system on 13 influenza patients and 33 normal subjects. The sensitivity of the infection screening system in detecting influenza were 92.3%, which was higher than the sensitivity reported in our previous paper (88.0%) with average facial skin temperature.",TRUE,noun
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335771,R70571,Infection,L242583,Influenza,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,noun
R11,Science,R33447,Linking Success Factors to Financial Performance,S115631,R33448,Critical success factors,R33440,internationalization,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improve the financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun
R11,Science,R30651,Prevalence of erosive tooth wear and associated risk factors in 2-7-year-old German kindergarten children,S102266,R30652,Study population,R30647,Kindergarten,"OBJECTIVES The aims of this study were to (1) investigate prevalence and severity of erosive tooth wear among kindergarten children and (2) determine the relationship between dental erosion and dietary intake, oral hygiene behaviour, systemic diseases and salivary concentration of calcium and phosphate. MATERIALS AND METHODS A sample of 463 children (2-7 years old) from 21 kindergartens were examined under standardized conditions by a calibrated examiner. Dental erosion of primary and permanent teeth was recorded using a scoring system based on O'Sullivan Index [Eur J Paediatr Dent 2 (2000) 69]. Data on the rate and frequency of dietary intake, systemic diseases and oral hygiene behaviour were obtained from a questionnaire completed by the parents. Unstimulated saliva samples of 355 children were analysed for calcium and phosphate concentration by colorimetric assessment. Descriptive statistics and multiple regression analysis were applied to the data. RESULTS Prevalence of erosion amounted to 32% and increased with increasing age of the children. Dentine erosion affecting at least one tooth could be observed in 13.2% of the children. The most affected teeth were the primary maxillary first and second incisors (15.5-25%) followed by the canines (10.5-12%) and molars (1-5%). Erosions on primary mandibular teeth were as follows: incisors: 1.5-3%, canines: 5.5-6% and molars: 3.5-5%. Erosions of the primary first and second molars were mostly seen on the occlusal surfaces (75.9%) involving enamel or enamel-dentine but not the pulp. In primary first and second incisors and canines, erosive lesions were often located incisally (51.2%) or affected multiple surfaces (28.9%). None of the permanent incisors (n = 93) or first molars (n=139) showed signs of erosion. Dietary factors, oral hygiene behaviour, systemic diseases and salivary calcium and phosphate concentration were not associated with the presence of erosion. CONCLUSIONS Erosive tooth wear of primary teeth was frequently seen in primary dentition. As several children showed progressive erosion into dentine or exhibited severe erosion affecting many teeth, preventive and therapeutic measures are recommended.",TRUE,noun
R11,Science,R25985,Knowledge-based derivation of document logical structure,S80437,R26011,Key Idea,L50818,knowledge-based,"The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.",TRUE,noun
R11,Science,R25997,Automated labeling in document images,S80297,R25998,Performance metric,L50702,labeling,"The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINER database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.",TRUE,noun
R11,Science,R34599,L-diversity: privacy beyond k-anonymity,S120568,R34600,Anonymistion algorithm/method,R34598,l-diversity,"Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k-1 other records with respect to certain ""identifying"" attributes. In this paper we show with two simple attacks that a k-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called l-diversity. In addition to building a formal foundation for l-diversity, we show in an experimental evaluation that l-diversity is practical and can be implemented efficiently.",TRUE,noun
R11,Science,R25906,"LEPTO dipstick, a dipstick assay for detection of Leptospira-specific immunoglobulin M antibodies in human sera.",S79634,R25907,Application,L50168,Leptospirosis,"We studied a dipstick assay for the detection of Leptospira-specific immunoglobulin M (IgM) antibodies in human serum samples. A high degree of concordance was observed between the results of the dipstick assay and an IgM enzyme-linked immunosorbent assay (ELISA). Application of the dipstick assay for the detection of acute leptospirosis enabled the accurate identification, early in the disease, of a high proportion of the cases of leptospirosis. Analysis of a second serum sample is recommended, in order to determine seroconversion or increased staining intensity. All serum samples from the patients who were confirmed to be positive for leptospirosis by either a positive microscopic agglutination test or a positive culture but were found to be negative by the dipstick assay were also judged to be negative by the IgM ELISA or revealed borderline titers by the IgM ELISA. Some cross-reactivity was observed for sera from patients with diseases other than leptospirosis, and this should be taken into account in the interpretation of test results. The dipstick assay is easy to perform, can be performed quickly, and requires no electricity or special equipment, and the assay components, a dipstick and a staining reagent, can be stored for a prolonged period without a loss of reactivity, even at elevated temperatures.",TRUE,noun
R11,Science,R25989,Computer understanding of document structure,S80474,R26013,Application Domain,L50851,letters,"We describe a system which is capable of learning the presentation of document logical structure, exemplary as shown for business letters. Presenting a set of instances to the system, it clusters them into structural concepts and induces a concept hierarchy. This concept hierarchy is taken as a reference for classifying future input. The article introduces the sequence of learning steps and describes how the resulting concept hierarchy is applied to logical labeling, and reports the results.",TRUE,noun
R11,Science,R30617,Accurate eye center location and tracking using isophote curvature,S102032,R30618,Challenges,R30580,Lighting,"The ubiquitous application of eye tracking is precluded by the requirement of dedicated and expensive hardware, such as infrared high definition cameras. Therefore, systems based solely on appearance (i.e. not involving active infrared illumination) are being proposed in literature. However, although these systems are able to successfully locate eyes, their accuracy is significantly lower than commercial eye tracking devices. Our aim is to perform very accurate eye center location and tracking, using a simple Web cam. By means of a novel relevance mechanism, the proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve rotational invariance and to keep low computational costs. In this paper we test our approach for accurate eye location and robustness to changes in illumination and pose, using the BioID and the Yale Face B databases, respectively. We demonstrate that our system can achieve a considerable improvement in accuracy over state of the art techniques.",TRUE,noun
R11,Science,R30745,Facility location in humanitarian relief,S103550,R30917,Decisions First-stage,R30824,Locations,"In this study, we consider facility location decisions for a humanitarian relief chain responding to quick-onset disasters. In particular, we develop a model that determines the number and locations of distribution centres in a relief network and the amount of relief supplies to be stocked at each distribution centre to meet the needs of people affected by the disasters. Our model, which is a variant of the maximal covering location model, integrates facility location and inventory decisions, considers multiple item types, and captures budgetary constraints and capacity restrictions. We conduct computational experiments to illustrate how the proposed model works on a realistic problem. Results show the effects of pre- and post-disaster relief funding on relief system's performance, specifically on response time and the proportion of demand satisfied. Finally, we discuss the managerial implications of the proposed model.",TRUE,noun
R11,Science,R30800,Stochastic network design for disaster preparedness,S103789,R30945,Decisions First-stage,R30824,Locations,"This article introduces a risk-averse stochastic modeling approach for a pre-disaster relief network design problem under uncertain demand and transportation capacities. The sizes and locations of the response facilities and the inventory levels of relief supplies at each facility are determined while guaranteeing a certain level of network reliability. A probabilistic constraint on the existence of a feasible flow is introduced to ensure that the demand for relief supplies across the network is satisfied with a specified high probability. Responsiveness is also accounted for by defining multiple regions in the network and introducing local probabilistic constraints on satisfying demand within each region. These local constraints ensure that each region is self-sufficient in terms of providing for its own needs with a large probability. In particular, the Gale–Hoffman inequalities are used to represent the conditions on the existence of a feasible network flow. The solution method rests on two pillars. A preprocessing algorithm is used to eliminate redundant Gale–Hoffman inequalities and then proposed models are formulated as computationally efficient mixed-integer linear programs by utilizing a method based on combinatorial patterns. Computational results for a case study and randomly generated problem instances demonstrate the effectiveness of the models and the solution method.",TRUE,noun
R11,Science,R30805,Bi-objective stochastic programming models for determining depot locations in disaster relief operations,S104129,R31089,Decisions First-stage,R30824,Locations,"This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.",TRUE,noun
R11,Science,R31699,Design Patterns in Software Maintenance: An Experiment Replication at University of Alabama,S106378,R31700,Quality attribute,L63547,Maintainability,"Design patterns are widely used within the software engineer community. Researchers claim that design patterns improve software quality. In this paper, we describe two experiments, using graduate student participants, to study whether design patterns improve the software quality, specifically maintainability and understandability. We replicated a controlled experiment to compare the maintainability of two implementations of an application, one using a design pattern and the other using a simpler alternative. The maintenance tasks in this replication experiment required the participants to answer questions about a Java program and then modify that program. Prior to the replication, we performed a preliminary exercise to investigate whether design patterns improve the understandability of software designs. We gave the participants the graphical design of the systems that would be used in the replication study. The participant received either the version of the design containing the design pattern or the version containing the simpler alternative. We asked the participants a series of questions to see how well they understood the given design. The results of two experiments revealed that the design patterns did not improve either the maintainability or the understandability of the software. We found that there was no significant correlation between the maintainability and the understandability of the software even though the participants had received the design of the systems before they performed the maintenance tasks.",TRUE,noun
R11,Science,R31705,Impact of the visitor pattern on program comprehension and maintenance,S106481,R31706,Quality attribute,L63641,Maintainability,"In the software engineering literature, many works claim that the use of design patterns improves the comprehensibility of programs and, more generally, their maintainability. Yet, little work attempted to study the impact of design patterns on the developers' tasks of program comprehension and modification. We design and perform an experiment to collect data on the impact of the Visitor pattern on comprehension and modification tasks with class diagrams. We use an eye-tracker to register saccades and fixations, the latter representing the focus of the developers' attention. Collected data show that the Visitor pattern plays a role in maintenance tasks: class diagrams with its canonical representation requires less efforts from developers.",TRUE,noun
R11,Science,R26248,Omya Hustadmarmor optimizes its supply chain for delivering calcium carbonate slurry to European paper manufacturers,S82096,R26249,mode,R26198,Maritime,"The Norwegian company Omya Hustadmarmor supplies calcium carbonate slurry to European paper manufacturers from a single processing plant, using chemical tank ships of various sizes to transport its products. Transportation costs are lower for large ships than for small ships, but their use increases planning complexity and creates problems in production. In 2001, the company faced overwhelming operational challenges and sought operations-research-based planning support. The CEO, Sturla Steinsvik, contacted Møre Research Molde, which conducted a project that led to the development of a decision-support system (DSS) for maritime inventory routing. The core of the DSS is an optimization model that is solved through a metaheuristic-based algorithm. The system helps planners to make stronger, faster decisions and has increased predictability and flexibility throughout the supply chain. It has saved production and transportation costs close to US$7 million a year. We project additional direct savings of nearly US$4 million a year as the company adds even larger ships to the fleet as a result of the project. In addition, the company has avoided investments of US$35 million by increasing capacity utilization. Finally, the project has had a positive environmental effect by reducing overall oil consumption by more than 10 percent.",TRUE,noun
R11,Science,R27790,New Directions for Traditional Lessons”: Can Handheld Game Consoles Enhance Mental Mathematics Skills?,S90531,R27791,Topic,R156,Mathematics,"This paper reports on a pilot study that compared the use of commercial off-the-shelf (COTS) handheld game consoles (HGCs) with traditional teaching methods to develop the automaticity of mathematical calculations and self-concept towards mathematics for year 4 students in two metropolitan schools. One class conducted daily sessions using the HGCs and the Dr Kawashima’s Brain Training software to enhance their mental maths skills while the comparison class engaged in mental maths lessons using more traditional classroom approaches. Students were assessed using standardised tests at the beginning and completion of the term and findings indicated that students who undertook the Brain Training pilot study using the HGCs showed significant improvement in both the speed and accuracy of their mathematical calculations and selfconcept compared to students in the control school. An exploration of the intervention, discussion of methodology and the implications of the use of HGCs in the primary classroom are presented.",TRUE,noun
R11,Science,R27792,A Study on Exploiting Commercial Digital Games into School Context,S90542,R27793,Topic,R156,Mathematics,"Digital game-based learning is a research field within the context of technology-enhanced learning that has attracted significant research interest. Commercial off-the-shelf digital games have the potential to provide concrete learning experiences and allow for drawing links between abstract concepts and real-world situations. The aim of this paper is to provide evidence for the effect of a general-purpose commercial digital game (namely, the “Sims 2-Open for Business”) on the achievement of standard curriculum Mathematics educational objectives as well as general educational objectives as defined by standard taxonomies. Furthermore, students’ opinions about their participation in the proposed game-supported educational scenario and potential changes in their attitudes toward math teaching and learning in junior high school are investigated. The results of the conducted research showed that: (i) students engaged in the game-supported educational activities achieved the same results with those who did not, with regard to the subject matter educational objectives, (ii) digital gamesupported educational activities resulted in better achievement of the general educational objectives, and (iii) no significant differences were observed with regard to students’ attitudes towards math teaching and learning.",TRUE,noun
R11,Science,R34666,User-Priority guided min min scheduling algorithm for load balancing in cloud computing,S121174,R34667,Tools used for simulation,R28636,Matlab,"Cloud computing is emerging as a new paradigm of large-scale distributed computing. In order to utilize the power of cloud computing completely, we need an efficient task scheduling algorithm. The traditional Min-Min algorithm is a simple, efficient algorithm that produces a better schedule that minimizes the total completion time of tasks than other algorithms in the literature [7]. However the biggest drawback of it is load imbalanced, which is one of the central issues for cloud providers. In this paper, an improved load balanced algorithm is introduced on the ground of Min-Min algorithm in order to reduce the makespan and increase the resource utilization (LBIMM). At the same time, Cloud providers offer computer resources to users on a pay-per-use base. In order to accommodate the demands of different users, they may offer different levels of quality for services. Then the cost per resource unit depends on the services selected by the user. In return, the user receives guarantees regarding the provided resources. To observe the promised guarantees, user-priority was considered in our proposed PA-LBIMM so that user's demand could be satisfied more completely. At last, the introduced algorithm is simulated using Matlab toolbox. The simulation results show that the improved algorithm can lead to significant performance gain and achieve over 20% improvement on both VIP user satisfaction and resource utilization ratio.",TRUE,noun
R11,Science,R50090,Translating the Concept of Goal Setting into Practice: What ‘else’ Does It Require than a Goal Setting Tool?: ,S153490,R50092,contains,R50083,Methods,"This conceptual paper reviews the current status of goal setting in the area of technology enhanced learning and education. Besides a brief literature review, three current projects on goal setting are discussed. The paper shows that the main barriers for goal setting applications in education are not related to the technology, the available data or analytical methods, but rather the human factor. The most important bottlenecks are the lack of students goal setting skills and abilities, and the current curriculum design, which, especially in the observed higher education institutions, provides little support for goal setting interventions.",TRUE,noun
R11,Science,R50390,A Technology-enhanced Smart Learning Environment based on the Combination of Knowledge Graphs and Learning Paths: ,S154100,R50391,contains,R50383,Methods,"In our position paper on a technology-enhanced smart learning environment, we propose the innovative combination of a knowledge graph representing what one has to learn and a learning path defining in which order things are going to be learned. In this way, we aim to identify students’ weak spots or knowledge gaps in order to individually assist them in reaching their goals. Based on the performance of different learning paths, one might further identify the characteristics of a learning system that leads to successful students. In addition, by studying assessments and the different ways a particular problem can be solved, new methods for a multi-dimensional classification of assessments can be developed. The theoretical findings on learning paths in combination with the classification of assessments will inform the design and development of a smart learning environment. By combining a knowledge graph with different learning paths and the corresponding practical assessments we enable the creation of a smart learning tool. While the proposed approach can be applied to different educational domains and should lead to more effective learning environments fostering deep learning in schools as well as in professional settings, in this paper we focus on the domain of mathematics in primary and high schools as the main use case.",TRUE,noun
R11,Science,R52278,Measuring the predictability of life outcomes with a scientific mass collaboration,S160092,R52279,contains,R52276,Methods,"How predictable are life trajectories? We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences.",TRUE,noun
R11,Science,R34392,Treatment of metronidazole-refractory Clostridium difficile enteritis with vancomycin,S119796,R34393,Treatment,R34340,metronidazole,"BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated.",TRUE,noun
R11,Science,R50113,Falcon 2.0: An Entity and Relation Linking Tool over Wikidata,S153528,R50114,contains,R50102,Model,"The Natural Language Processing (NLP) community has significantly contributed to the solutions for entity and relation recognition from a natural language text, and possibly linking them to proper matches in Knowledge Graphs (KGs). Considering Wikidata as the background KG, there are still limited tools to link knowledge within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint entity and relation linking tool over Wikidata. It receives a short natural language text in the English language and outputs a ranked list of entities and relations annotated with the proper candidates in Wikidata. The candidates are represented by their Internationalized Resource Identifier (IRI) in Wikidata. Falcon 2.0 resorts to the English language model for the recognition task (e.g., N-Gram tiling and N-Gram splitting), and then an optimization approach for the linking task. We have empirically studied the performance of Falcon 2.0 on Wikidata and concluded that it outperforms all the existing baselines. Falcon 2.0 is open source and can be reused by the community; all the required instructions of Falcon 2.0 are well-documented at our GitHub repository (https://github.com/SDM-TIB/falcon2.0). We also demonstrate an online API, which can be run without any technical expertise. Falcon 2.0 and its background knowledge bases are available as resources at https://labs.tib.eu/falcon/falcon2/.",TRUE,noun
R11,Science,R50180,multimodal speech emotion recognition using audio and text,S153681,R50181,contains,R50178,Model,"Speech emotion recognition is a challenging task, and extensive reliance has been placed on models that use audio features in building well-performing classifiers. In this paper, we propose a novel deep dual recurrent encoder model that utilizes text data and audio signals simultaneously to obtain a better understanding of speech data. As emotional dialogue is composed of sound and spoken content, our model encodes the information from audio and text sequences using dual recurrent neural networks (RNNs) and then combines the information from these sources to predict the emotion class. This architecture analyzes speech data from the signal level to the language level, and it thus utilizes the information within the data more comprehensively than models that focus on audio features. Extensive experiments are conducted to investigate the efficacy and properties of the proposed model. Our proposed model outperforms previous state-of-the-art methods in assigning data to one of four emotion categories (i.e., angry, happy, sad and neutral) when the model is applied to the IEMOCAP dataset, as reflected by accuracies ranging from 68.8% to 71.8%.",TRUE,noun
R11,Science,R50227,OER Recommendations to Support Career Development,S153762,R50228,contains,R50217,Model,"This Work in Progress Research paper departs from the recent, turbulent changes in global societies, forcing many citizens to re-skill themselves to (re)gain employment. Learners therefore need to be equipped with skills to be autonomous and strategic about their own skill development. Subsequently, high-quality, on-line, personalized educational content and services are also essential to serve this high demand for learning content. Open Educational Resources (OERs) have high potential to contribute to the mitigation of these problems, as they are available in a wide range of learning and occupational contexts globally. However, their applicability has been limited, due to low metadata quality and complex quality control. These issues resulted in a lack of personalised OER functions, like recommendation and search. Therefore, we suggest a novel, personalised OER recommendation method to match skill development targets with open learning content. This is done by: 1) using an OER quality prediction model based on metadata, OER properties, and content; 2) supporting learners to set individual skill targets based on actual labour market information, and 3) building a personalized OER recommender to help learners to master their skill targets. Accordingly, we built a prototype focusing on Data Science related jobs, and evaluated this prototype with 23 data scientists in different expertise levels. Pilot participants used our prototype for at least 30 minutes and commented on each of the recommended OERs. As a result, more than 400 recommendations were generated and 80.9% of the recommendations were reported as useful.",TRUE,noun
R11,Science,R33001,Prognos- tic and biologic significance of chromosomal imbalances assessed by comparative genomic hybridization in multiple myeloma,S114303,R33002,Disease,R33000,Myeloma,"Cytogenetic abnormalities, evaluated either by karyotype or by fluorescence in situ hybridization (FISH), are considered the most important prognostic factor in multiple myeloma (MM). However, there is no information about the prognostic impact of genomic changes detected by comparative genomic hybridization (CGH). We have analyzed the frequency and prognostic impact of genetic changes as detected by CGH and evaluated the relationship between these chromosomal imbalances and IGH translocation, analyzed by FISH, in 74 patients with newly diagnosed MM. Genomic changes were identified in 51 (69%) of the 74 MM patients. The most recurrent abnormalities among the cases with genomic changes were gains on chromosome regions 1q (45%), 5q (24%), 9q (24%), 11q (22%), 15q (22%), 3q (16%), and 7q (14%), while losses mainly involved chromosomes 13 (39%), 16q (18%), 6q (10%), and 8p (10%). Remarkably, the 6 patients with gains on 11q had IGH translocations. Multivariate analysis selected chromosomal losses, 11q gains, age, and type of treatment (conventional chemotherapy vs autologous transplantation) as independent parameters for predicting survival. Genomic losses retained the prognostic value irrespective of treatment approach. According to these results, losses of chromosomal material evaluated by CGH represent a powerful prognostic factor in MM patients.",TRUE,noun
R11,Science,R33815,Comparative analyses of seven algorithms for copy number variant identification from single nucleotide polymorphism arrays,S117253,R33816,Algorithm,R33813,Nexus,"Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.",TRUE,noun
R11,Science,R27808,The application of an occupational therapy nutrition education programme for children who are obese,S90623,R27809,Topic,R91,Nutrition,"The aim of this study was to evaluate an occupational therapy nutrition education programme for children who are obese with the use of two interactive games. A quasi-experimental study was carried out at a municipal school in Fortaleza, Brazil. A convenient sample of 200 children ages 8-10 years old participated in the study. Data collection comprised a semi-structured interview, direct and structured observation, and focus group, comparing two interactive games based on the food pyramid (video game and board game) used individually and then combined. Both play activities were efficient in the mediation of nutritional concepts, with a preference for the board game. In the learning strategies, intrinsic motivation and metacognition were analysed. The attention strategy was most applied at the video game. We concluded that both games promoted the learning of nutritional concepts. We confirmed the effectiveness of the simultaneous application of interactive games in an interdisciplinary health environment. It is recommended that a larger sample should be used in evaluating the effectiveness of play and video games in teaching healthy nutrition to children in a school setting.",TRUE,noun
R11,Science,R29000,Localizing Parts of Faces Using a Consensus of Exemplars,S95871,R29001,Variations,L58730,occlusion,"We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.",TRUE,noun
R11,Science,R29010,Robust Face Landmark Estimation under Occlusion,S95964,R29011,Variations,L58808,occlusion,"Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.",TRUE,noun
R11,Science,R25722,Visualizing ontologies with VOWL,S78070,R25723,Domain,R25700,ontology,"The Visual Notation for OWL Ontologies (VOWL) is a well-specified visual language for the user-oriented representation of ontologies. It defines graphical depictions for most elements of the Web Ontology Language (OWL) that are combined to a force-directed graph layout visualizing the ontology. In contrast to related work, VOWL aims for an intuitive and comprehensive representation that is also understandable to users less familiar with ontologies. This article presents VOWL in detail and describes its implementation in two different tools: ProtegeVOWL and WebVOWL. The first is a plugin for the ontology editor Protege, the second a standalone web application. Both tools demonstrate the applicability of VOWL by means of various ontologies. In addition, the results of three user studies that evaluate the comprehensibility and usability of VOWL are summarized. They are complemented by findings from an interview with experienced ontology users and from testing the visual scope and completeness of VOWL with a benchmark ontology. The evaluations helped to improve VOWL and confirm that it produces comparatively intuitive and comprehensible ontology visualizations.",TRUE,noun
R11,Science,R151214,"Organizational Resilience and Using Information and Communication Technologies to Rebuild Communication Structures",S626377,R156056,Focus Group,L431096,organizations,"This study employs the perspective of organizational resilience to examine how information and communication technologies (ICTs) were used by organizations to aid in their recovery after Hurricane Katrina. In-depth interviews enabled longitudinal analysis of ICT use. Results showed that organizations enacted a variety of resilient behaviors through adaptive ICT use, including information sharing, (re)connection, and resource acquisition. Findings emphasize the transition of ICT use across different stages of recovery, including an anticipated stage. Key findings advance organizational resilience theory with an additional source of resilience, external availability. Implications and contributions to the literature of ICTs in disaster contexts and organizational resilience are discussed.",TRUE,noun
R11,Science,R29380,The environmental Kuznets curve: an empirical analysis,S97677,R29381,Type of data,R29363,Panel,"This paper examines the relationship between per capita income and a wide range of environmental indicators using cross-country panel sets. The manner in which this has been done overcomes several of the weaknesses associated with the estimation of environmental Kuznets curves (EKCs), outlined by Stern et al. (1996). Results suggest that meaningful EKCs exist only for local air pollutants whilst indicators with a more global, or indirect, impact either increase monotonically with income or else have predicted turning points at high per capita income levels with large standard errors – unless they have been subjected to a multilateral policy initiative. Two other findings are also made: that concentration of local pollutants in urban areas peak at a lower per capita income level than total emissions per capita; and that transport-generated local air pollutants peak at a higher per capita income level than total emissions per capita. Given these findings, suggestions are made regarding the necessary future direction of environmental policy.",TRUE,noun
R11,Science,R29502,Environmental Kuznets Curves for CO2: Heterogeneity versus Homogeneity,S97990,R29503,Type of data,R29363,Panel,"We explore the emissions income relationship for CO2 in OECD countries using various modelling strategies.Even for this relatively homogeneous sample, we find that the inverted-U-shaped curve is quite sensitive to the degree of heterogeneity included in the panel estimations.This finding is robust, not only across different model specifications but also across estimation techniques, including the more flexible non-parametric approach.Differences in restrictions applied in panel estimations are therefore responsible for the widely divergent findings for an inverted-U shape for CO2.Our findings suggest that allowing for enough heterogeneity is essential to prevent spurious correlation from reduced-form panel estimations.Moreover, this inverted U for CO2 is likely to exist for many, but not for all, countries.",TRUE,noun
R11,Science,R29564,Carbon emissions in Central and Eastern Europe: environmental Kuznets curve and implications for sustainable development,S98208,R29565,Type of data,R29363,Panel,"This study examines the impact of various factors such as gross domestic product (GDP) per capita, energy use per capita and trade openness on carbon dioxide (CO 2 ) emission per capita in the Central and Eastern European Countries. The extended environmental Kuznets curve (EKC) was employed, utilizing the available panel data from 1980 to 2002 for Bulgaria, Hungary, Romania and Turkey. The results confirm the existence of an EKC for the region such that CO 2 emission per capita decreases over time as the per capita GDP increases. Energy use per capita is a significant factor that causes pollution in the region, indicating that the region produces environmentally unclean energy. The trade openness variable implies that globalization has not facilitated the emission level in the region. The results imply that the region needs environmentally cleaner technologies in energy production to achieve sustainable development. Copyright © 2008 John Wiley & Sons, Ltd and ERP Environment.",TRUE,noun
R11,Science,R29587,Does One Size Fit All? A Reexamination of the Environmental Kuznets Curve Using the Dynamic Panel Data Approach,S98285,R29588,Type of data,R29363,Panel,"This article applies the dynamic panel generalized method of moments technique to reexamine the environmental Kuznets curve (EKC) hypothesis for carbon dioxide (CO_2) emissions and asks two critical questions: ""Does the global data set fit the EKC hypothesis?"" and ""Do different income levels or regions influence the results of the EKC?"" We find evidence of the EKC hypothesis for CO_2 emissions in a global data set, middle-income, and American and European countries, but not in other income levels and regions. Thus, the hypothesis that one size fits all cannot be supported for the EKC, and even more importantly, results, robustness checking, and implications emerge. Copyright 2009 Agricultural and Applied Economics Association",TRUE,noun
R11,Science,R29711,A panel data heterogeneous Bayesian estimation of environmental Kuznets curves for CO2emissions,S98593,R29712,Type of data,R29363,Panel,"This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated to monotonic income–CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.",TRUE,noun
R11,Science,R29751,An Empirical Study on the Environmental Kuznets Curve for China’s Carbon Emissions: Based on Provincial Panel Data,S98719,R29752,Type of data,R29363,Panel,"Abstract Based on the Environmental Kuznets Curve theory, the authors choose provincial panel data of China in 1990–2007 and adopt panel unit root and co-integration testing method to study whether there is Environmental Kuznets Curve for China’s carbon emissions. The research results show that: carbon emissions per capita of the eastern region and the central region of China fit into Environmental Kuznets Curve, but that of the western region does not. On this basis, the authors carry out scenario analysis on the occurrence time of the inflection point of carbon emissions per capita of different regions, and describe a specific time path.",TRUE,noun
R11,Science,R29881,Environmental Kuznets curve: evidences from developed and developing economies,S99156,R29882,Type of data,R29363,Panel,"Previous studies show that the environmental quality and economic growth can be represented by the inverted U curve called Environmental Kuznets Curve (EKC). In this study, we conduct empirical analyses on detecting the existence of EKC using the five common pollutants emissions (i.e. CO2, SO2, BOD, SPM10, and GHG) as proxy for environmental quality. The data spanning from year 1961 to 2009 and cover 40 countries. We seek to investigate if the EKC hypothesis holds in two groups of economies, i.e. developed versus developing economies. Applying panel data approach, our results show that the EKC does not hold in all countries. We also detect the existence of U shape and increasing trend in other cases. The results reveal that CO2 and SPM10 are good data to proxy for environmental pollutant and they can be explained well by GDP. Also, it is observed that the developed countries have higher turning points than the developing countries. Higher economic growth may lead to different impacts on environmental quality in different economies.",TRUE,noun
R11,Science,R29907,"A panel estimation of the relationship between trade liberalization, economic growth and CO2 emissions in BRICS countries",S99228,R29908,Type of data,R29363,Panel,"In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emission, economic growth and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests, which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions, and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point or optimal point of real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels",TRUE,noun
R11,Science,R29973,The environmental Kuznets curve in Asia: the case of sulphur and carbon emissions”,S99375,R29974,Type of data,R29363,Panel,"The present study examines whether the Race to the Bottom and Revised EKC scenarios presented by Dasgupta and others (2002) are, with regard to the analytical framework of the Environmental Kuznets Curve (EKC), applicable in Asia to representative environmental indices, such as sulphur emissions and carbon emissions. To carry out this study, a generalized method of moments (GMM) estimation was made, using panel data of 19 economies for the period 1950-2009. The main findings of the analysis on the validity of EKC indicate that sulphur emissions follow the expected inverted U-shape pattern, while carbon emissions tend to increase in line with per capita income in the observed range. As for the Race to the Bottom and Revised EKC scenarios, the latter was verified in sulphur emissions, as their EKC trajectories represent a linkage of the later development of the economy with the lower level of emissions while the former one was not present in neither sulphur nor carbon emissions.",TRUE,noun
R11,Science,R30016,An Environment Kuznets Curve for GHG Emissions: A Panel Cointegration Analysis,S99497,R30017,Type of data,R29363,Panel,"In this article, we attempt to use panel unit root and panel cointegration tests as well as the fully-modified ordinary least squares (OLS) approach to examine the relationships among carbon dioxide emissions, energy use and gross domestic product for 22 Organization for Economic Cooperation and Development (OECD) countries (Annex II Parties) over the 1971–2000 period. Furthermore, in order to investigate these results for other direct greenhouse gases (GHGs), we have estimated the Environmental Kuznets Curve (EKC) hypothesis by using the total GHG, methane, and nitrous oxide. The empirical results support that energy use still plays an important role in explaining the GHG emissions for OECD countries. In terms of the EKC hypothesis, the results showed that a quadratic relationship was found to exist in the long run. Thus, other countries could learn from developed countries in this regard and try to smooth the EKC curve at relatively less cost.",TRUE,noun
R11,Science,R30082,An empirical examination of environmental Kuznets curve (EKC) in West Africa,S99705,R30083,Type of data,R29363,Panel,"This study aims to examine the relationship between income and environmental degradation in West Africa and ascertain the validity of EKC hypothesis in the region. The study adopted a panel data approach for fifteen West Africa countries for the period 1980-2012. The available results from our estimation procedure confirmed the EKC theory in the region. At early development stages, pollution rises with income and reaching a turning point, pollution dwindles with increasing income; as indicated by the significant inverse relation between income and environmental degradation. Consequently, literacy level and sound institutional arrangement were found to contribute significantly in mitigating the extent of environmental degradation. Among notable recommendation is the need for awareness campaign on environment abatement and adaptation strategies, strengthening of institutions to caution production and dumping pollution emitting commodities and encourage adoption of cleaner technologies.",TRUE,noun
R11,Science,R30175,Emissions and trade in Southeast and East Asian countries: a panel co-integration analysis,S100040,R30176,Type of data,R29363,Panel,"Purpose – The purpose of this paper is to analyse the implication of trade on carbon emissions in a panel of eight highly trading Southeast and East Asian countries, namely, China, Indonesia, South Korea, Malaysia, Hong Kong, The Philippines, Singapore and Thailand. Design/methodology/approach – The analysis relies on the standard quadratic environmental Kuznets curve (EKC) extended to include energy consumption and international trade. A battery of panel unit root and co-integration tests is applied to establish the variables’ stochastic properties and their long-run relations. Then, the specified EKC is estimated using the panel dynamic ordinary least square (OLS) estimation technique. Findings – The panel co-integration statistics verifies the validity of the extended EKC for the countries under study. Estimation of the long-run EKC via the dynamic OLS estimation method reveals the environmentally degrading effects of trade in these countries, especially in ASEAN and plus South Korea and Hong Kong. Pra...",TRUE,noun
R11,Science,R30185,The role of renewable energy consumption and trade: environmental Kuznets curve analysis for Sub-Saharan Africa countries,S100067,R30186,Type of data,R29363,Panel,"type=""main"" xml:lang=""en""> Based on the Environmental Kuznets Curve (EKC) hypothesis, this paper uses panel cointegration techniques to investigate the short- and long-run relationship between CO 2 emissions, gross domestic product (GDP), renewable energy consumption and international trade for a panel of 24 sub-Saharan Africa countries over the period 1980–2010. Short-run Granger causality results reveal that there is a bidirectional causality between emissions and economic growth; bidirectional causality between emissions and real exports; unidirectional causality from real imports to emissions; and unidirectional causality runs from trade (exports or imports) to renewable energy consumption. There is an indirect short-run causality running from emissions to renewable energy and an indirect short-run causality from GDP to renewable energy. In the long-run, the error correction term is statistically significant for emissions, renewable energy consumption and trade. The long-run estimates suggest that the inverted U-shaped EKC hypothesis is not supported for these countries; exports have a positive impact on CO 2 emissions, whereas imports have a negative impact on CO 2 emissions. As a policy recommendation, sub-Saharan Africa countries should expand their trade exchanges particularly with developed countries and try to maximize their benefit from technology transfer occurring when importing capital goods as this may increase their renewable energy consumption and reduce CO 2 emissions.",TRUE,noun
R11,Science,R30280,Estimating the relationship between economic growth and environmental quality for the brics economies - a dynamic panel data approach,S100349,R30281,Type of data,R29363,Panel,"It has been forecasted by many economists that in the next couple of decades the BRICS economies are going to experience an unprecedented economic growth. This massive economic growth would definitely have a detrimental impact on the environment since these economies, like others, would extract their environmental and natural resource to a larger scale in the process of their economic growth. Therefore, maintaining environmental quality while growing has become a major challenge for these economies. However, the proponents of Environmental Kuznets Curve (EKC) Hypothesis - an inverted U shape relationship between income and emission per capita, suggest BRICS economies need not bother too much about environmental quality while growing because growth would eventually take care of the environment once a certain level of per capita income is achieved. In this backdrop, the present study makes an attempt to estimate EKC type relationship, if any, between income and emission in the context of the BRICS countries for the period 1997 to 2011. Therefore, the study first adopts fixed effect (FE) panel data model to control time constant country specific effects, and then uses Generalized Method of Moments (GMM) approach for dynamic panel data to address endogeneity of income variable and dynamism in emission per capita. Apart from income, we also include variables related to financial sector development and energy utilization to explain emission. The fixed effect model shows a significant EKC type relation between income and emission supporting the previous literature. However, GMM estimates for the dynamic panel model show the relationship between income and emission is actually U shaped with the turning point being out of sample. This out of sample turning point indicates that emission has been growing monotonically with growth in income. Factors like, net energy imports and share of industrial output in GDP are found to be significant and having detrimental impact on the environment in the dynamic panel model. However, these variables are found to be insignificant in FE model. Capital account convertibility shows significant and negative impact on the environment irrespective of models used. The monotonically increasing relationship between income and emission suggests the BRICS economies must adopt some efficiency oriented action plan so that they can grow without putting much pressure on the environment. These findings can have important policy implications as BRICS countries are mainly depending on these factors for their growth but at the same time they can cause serious threat to the environment.",TRUE,noun
R11,Science,R30284,"The Relationship between CO2 Emission, Energy Consumption, Urbanization and Trade Openness for Selected CEECs",S100365,R30285,Type of data,R29363,Panel,"This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 for selected Central and Eastern European Countries (CEECs), including, Albania, Bulgaria, Croatia, Czech Republic, Macedonia, Hungary, Poland, Romania, Slovak Republic and Slovenia for the period of 1991–2011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a %1.0863 increase in CO2 emissions. Results for the existence and direction of panel Vector Error Correction Model (VECM) Granger causality method show that there is bidirectional causal relationship between CO2 emissions - real GDP and energy consumption-real GDP as well.",TRUE,noun
R11,Science,R25878,Selective Semihydrogenation of Alkynes Catalyzed by Pd Nanoparticles Immobilized on Heteroatom-Doped Hierarchical Porous Carbon Derived from Bamboo Shoots,S79400,R25879,catalyst,L49973,"Pd/N,O-carbon","Highly dispersed palladium nanoparticles (Pd NPs) immobilized on heteroatom-doped hierarchical porous carbon supports (N,O-carbon) with large specific surface areas are synthesized by a wet chemical reduction method. The N,O-carbon derived from naturally abundant bamboo shoots is fabricated by a tandem hydrothermal-carbonization process without assistance of any templates, chemical activation reagents, or exogenous N or O sources in a simple and ecofriendly manner. The prepared Pd/N,O-carbon catalyst shows extremely high activity and excellent chemoselectivity for semihydrogenation of a broad range of alkynes to versatile and valuable alkenes under ambient conditions. The catalyst can be readily recovered for successive reuse with negligible loss in activity and selectivity, and is also applicable for practical gram-scale reactions.",TRUE,noun
R11,Science,R25888,Formation and Characterization of PdZn Alloy: A Very Selective Catalyst for Alkyne Semihydrogenation,S79493,R25889,catalyst,L50051,Pd/ZnO,"The formation of a PdZn alloy from a 4.3% Pd/ZnO catalyst was characterized by combined in situ high-resolution X-ray diffraction (HRXRD) and X-ray absorption spectroscopy (XAS). Alloy formation started already at around 100 °C, likely at the surface, and reached the bulk with increasing temperature. The structure of the catalyst was close to the bulk value of a 1:1 PdZn alloy with a L1o structure (RPd−Pd = 2.9 A, RPd−Zn = 2.6 A, CNPd−Zn = 8, CNPd−Pd = 4) after reduction at 300 °C and above. The activity of the gas-phase hydrogenation of 1-pentyne decreased with the formation of the PdZn alloy. In contrast to Pd/SiO2, no full hydrogenation occurred over Pd/ZnO. Over time, only slight decomposition of the alloy occurred under reaction conditions.",TRUE,noun
R11,Science,R25783,Pd@C core–shell nanoparticles on carbon nanotubes as highly stable and selective catalysts for hydrogenation of acetylene to ethylene,S78503,R25784,catalysts,L49220,Pd@C/CNT,"Developing highly selective and stable catalysts for acetylene hydrogenation is an imperative task in the chemical industry. Herein, core-shell Pd@carbon nanoparticles supported on carbon nanotubes (Pd@C/CNTs) were synthesized. During the hydrogenation of acetylene, the selectivity of Pd@C/CNTs to ethylene was distinctly improved. Moreover, Pd@C/CNTs showed excellent stability during the hydrogenation reaction.",TRUE,noun
R11,Science,R25816,"Single-Atom Pd1/Graphene Catalyst Achieved by Atomic Layer Deposition: Remarkable Performance in Selective Hydrogenation of 1,3-Butadiene",S78851,R25817,catalysts,L49517,Pd1/graphene,"We reported that atomically dispersed Pd on graphene can be fabricated using the atomic layer deposition technique. Aberration-corrected high-angle annular dark-field scanning transmission electron microscopy and X-ray absorption fine structure spectroscopy both confirmed that isolated Pd single atoms dominantly existed on the graphene support. In selective hydrogenation of 1,3-butadiene, the single-atom Pd1/graphene catalyst showed about 100% butenes selectivity at 95% conversion at a mild reaction condition of about 50 °C, which is likely due to the changes of 1,3-butadiene adsorption mode and enhanced steric effect on the isolated Pd atoms. More importantly, excellent durability against deactivation via either aggregation of metal atoms or carbonaceous deposits during a total 100 h of reaction time on stream was achieved. Therefore, the single-atom catalysts may open up more opportunities to optimize the activity, selectivity, and durability in selective hydrogenation reactions.",TRUE,noun
R11,Science,R33819,The Effect of Algorithms on Copy Number Variant Detection,S117276,R33820,Algorithm,R33806,PennCNV,"Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.",TRUE,noun
R11,Science,R33827,Assessment of copy number variation using the Illumina Infinium 1M SNP-array: A comparison of methodological approaches in the Spanish Bladder Cancer/EPICURO study,S117324,R33828,Algorithm,R33806,PennCNV,"High‐throughput single nucleotide polymorphism (SNP)‐array technologies allow to investigate copy number variants (CNVs) in genome‐wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation‐dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19–0.28). In contrast, specificity was very high (0.97–0.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome‐wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1–10, 2011. © 2011 Wiley‐Liss, Inc.",TRUE,noun
R11,Science,R31715,State Design Pattern Implementation of a DSP processor: A case study of TMS5416C,S106639,R31716,Quality attribute,L63784,Performance,"This paper presents an empirical study of the impact of State Design Pattern Implementation on the memory and execution time of popular fixed-point DSP processor from Texas Instruments; TMS320VC5416. Actually, the object-oriented approach is known to introduce a significant performance penalty compared to classical procedural programming [1]. One can find the studies of the object-oriented penalty on the system performance, in terms of execution time and memory overheads in the literature. Since, to the author's best knowledge the study of the overheads of Design Patterns (DP) in the embedded system programming is not widely published in the literature. The main contribution of the paper is to bring further evidence that embedded system software developers have to consider the memory and the execution time overheads of DPs in their implementations. The results of the experiment show that implementation in C++ with DP increases the memory usage and the execution time but meanwhile these overheads would not prevent embedded system software developers to use DPs.",TRUE,noun
R11,Science,R25880,Palladium nanoparticles supported on mpg-C3N4 as active catalyst for semihydrogenation of phenylacetylene under mild conditions,S79417,R25881,substrate,L49987,phenylacetylene,"Palladium nanoparticles supported on a mesoporous graphitic carbon nitride, Pd@mpg-C3N4, has been developed as an effective, heterogeneous catalyst for the liquid-phase semihydrogenation of phenylacetylene under mild conditions (303 K, atmospheric H2). A total conversion was achieved with high selectivity of styrene (higher than 94%) within 85 minutes. Moreover, the spent catalyst can be easily recovered by filtration and then reused nine times without apparent lose of selectivity. The generality of Pd@mpg-C3N4 catalyst for partial hydrogenation of alkynes was also checked for terminal and internal alkynes with similar performance. The Pd@mpg-C3N4 catalyst was proven to be of industrial interest.",TRUE,noun
R11,Science,R25894,Single atom alloy surface analogs in Pd0.18Cu15 nanoparticles for selective hydrogenation reactions,S79543,R25895,substrate,L50092,phenylacetylene,"We report a novel synthesis of nanoparticle Pd-Cu catalysts, containing only trace amounts of Pd, for selective hydrogenation reactions. Pd-Cu nanoparticles were designed based on model single atom alloy (SAA) surfaces, in which individual, isolated Pd atoms act as sites for hydrogen uptake, dissociation, and spillover onto the surrounding Cu surface. Pd-Cu nanoparticles were prepared by addition of trace amounts of Pd (0.18 atomic (at)%) to Cu nanoparticles supported on Al2O3 by galvanic replacement (GR). The catalytic performance of the resulting materials for the partial hydrogenation of phenylacetylene was investigated at ambient temperature in a batch reactor under a head pressure of hydrogen (6.9 bar). The bimetallic Pd-Cu nanoparticles have over an order of magnitude higher activity for phenylacetylene hydrogenation when compared to their monometallic Cu counterpart, while maintaining a high selectivity to styrene over many hours at high conversion. Greater than 94% selectivity to styrene is observed at all times, which is a marked improvement when compared to monometallic Pd catalysts with the same Pd loading, at the same total conversion. X-ray photoelectron spectroscopy and UV-visible spectroscopy measurements confirm the complete uptake and alloying of Pd with Cu by GR. Scanning tunneling microscopy and thermal desorption spectroscopy of model SAA surfaces confirmed the feasibility of hydrogen spillover onto an otherwise inert Cu surface. These model studies addressed a wide range of Pd concentrations related to the bimetallic nanoparticles.",TRUE,noun
R11,Science,R27113,Kinetics of acetylcholinesterase immobilized on polyethylene tubing,S87202,R27114,Polymer,R27110,Polyethylene," Acetylcholinesterase was covalently attached to the inner surface of polyethylene tubing. Initial oxidation generated surface carboxylic groups which, on reaction with thionyl chloride, produced acid chloride groups; these were caused to react with excess ethylenediamine. The amine groups on the surface were linked to glutaraldehyde, and acetylcholinesterase was then attached to the surface. Various kinetic tests showed the catalysis of the hydrolysis of acetylthiocholine iodide to be diffusion controlled. The apparent Michaelis constants were strongly dependent on flow rate and were much larger than the value for the free enzyme. Rate measurements over the temperature range 6–42 °C showed changes in activation energies consistent with diffusion control. ",TRUE,noun
R11,Science,R30767,A two‐stage procurement model for humanitarian relief supply chains,S104110,R31082,Decisions First-stage,R31080,Procurement,"Purpose – The purpose of this paper is to discuss and to help address the need for quantitative models to support and improve procurement in the context of humanitarian relief efforts.Design/methodology/approach – This research presents a two‐stage stochastic decision model with recourse for procurement in humanitarian relief supply chains, and compares its effectiveness on an illustrative example with respect to a standard solution approach.Findings – Results show the ability of the new model to capture and model both the procurement process and the uncertainty inherent in a disaster relief situation, in support of more efficient and effective procurement plans.Research limitations/implications – The research focus is on sudden onset disasters and it does not differentiate between local and international suppliers. A number of extensions of the base model could be implemented, however, so as to address the specific needs of a given organization and their procurement process.Practical implications – Despi...",TRUE,noun
R11,Science,R25605,Agile Team Perceptions of Productivity Factors,S77279,R25606,Focus,L48340,Productivity,"In this paper, we investigate agile team perceptions of factors impacting their productivity. Within this overall goal, we also investigate which productivity concept was adopted by the agile teams studied. We here conducted two case studies in the industry and analyzed data from two projects that we followed for six months. From the perspective of agile team members, the three most perceived factors impacting on their productivity were appropriate team composition and allocation, external dependencies, and staff turnover. Teams also mentioned pair programming and collocation as agile practices that impact productivity. As a secondary finding, most team members did not share the same understanding of the concept of productivity. While some known factors still impact agile team productivity, new factors emerged from the interviews as potential productivity factors impacting agile teams.",TRUE,noun
R11,Science,R27466,Productivité et salaire des travailleurs âgés,S88985,R27467,Performance indicator (in per-capita terms if not otherwise indicated),R27462,Productivity,"[fre] Nous evaluons de facon conjointe les differences de productivite et de remuneration existant en France entre diverses categories de travailleurs au moyen d'une nouvelle base de donnees qui reunit des informations tant sur les employes que sur leurs employeurs. Completant une methodologie nouvelle proposee au depart par Hellerstein, Neumark et Troske [1999], nous adoptons des hypotheses moins contraignantes et fournissons une methode utilisant le cout du travail pour les employeurs. De facon surprenante, les resultats trouves pour la France sont tres differents de ceux obtenus pour les Etats-Unis, et plus proches des resultats en Norvege : dans le secteur manufacturier, nous constatons que les travailleurs âges sont plus payes par rapport aux travailleurs jeunes que leur difference de productivite ne le laisserait supposer. La robustesse de ces resultats semble confirmee a travers le temps, les secteurs d'activite et les hypotheses retenues. [eng] In this study we analyse the differences between productivity levels and earnings across a range of categories of workers in France, drawing on a new database which brings together data from employers and employees. We take as our starting point the methodology first introduced by Hellerstein, Neumark and Troske [1999], and develop it further by applying less restrictive assumptions and by using a new method which takes into account labour costs incurred by the employers. The results obtained for France are surprisingly different to those for the United States and in fact are closest to the results obtained for Norway. For example, we find that in the manufacturing sector, relatively to younger workers, older workers are paid more than the difference in productivity between the two age groups would suggest. These results appear to be robust over time regardless of the sector studied or the assumptions used.",TRUE,noun
R11,Science,R151228,"Community intelligence and social media services: A rumor theoretic analysis of tweets during social crisis",S626429,R156063,Focus Group,L431141,Public,"Recent extreme events show that Twitter, a micro-blogging service, is emerging as the dominant social reporting tool to spread information on social crises. It is elevating the online public community to the status of first responders who can collectively cope with social crises. However, at the same time, many warnings have been raised about the reliability of community intelligence obtained through social reporting by the amateur online community. Using rumor theory, this paper studies citizen-driven information processing through Twitter services using data from three social crises: the Mumbai terrorist attacks in 2008, the Toyota recall in 2010, and the Seattle cafe shooting incident in 2012. We approach social crises as communal efforts for community intelligence gathering and collective information processing to cope with and adapt to uncertain external situations. We explore two issues: (1) collective social reporting as an information processing mechanism to address crisis problems and gather community intelligence, and (2) the degeneration of social reporting into collective rumor mills. Our analysis reveals that information with no clear source provided was the most important, personal involvement next in importance, and anxiety the least yet still important rumor causing factor on Twitter under social crisis situations.",TRUE,noun
R11,Science,R29711,A panel data heterogeneous Bayesian estimation of environmental Kuznets curves for CO2emissions,S98594,R29712,Power of income,R29372,Quadratic,"This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated to monotonic income–CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.",TRUE,noun
R11,Science,R29907,"A panel estimation of the relationship between trade liberalization, economic growth and CO2 emissions in BRICS countries",S99229,R29908,Power of income,R29372,Quadratic,"In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigation relationship between per capita CO2 emission, growth economics and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, growth economics and trade liberalization by applying Kao panel cointegration test. The evidence indicates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and impact of trade liberalization on emissions growth depends on the level of income Our findings suggest that there is a quadratic relationship between relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of studied countries. Our estimation shows that the inflection point or optimal point real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels",TRUE,noun
R11,Science,R30016,An Environment Kuznets Curve for GHG Emissions: A Panel Cointegration Analysis,S99498,R30017,Power of income,R29372,Quadratic,"In this article, we attempt to use panel unit root and panel cointegration tests as well as the fully-modified ordinary least squares (OLS) approach to examine the relationships among carbon dioxide emissions, energy use and gross domestic product for 22 Organization for Economic Cooperation and Development (OECD) countries (Annex II Parties) over the 1971–2000 period. Furthermore, in order to investigate these results for other direct greenhouse gases (GHGs), we have estimated the Environmental Kuznets Curve (EKC) hypothesis by using the total GHG, methane, and nitrous oxide. The empirical results support that energy use still plays an important role in explaining the GHG emissions for OECD countries. In terms of the EKC hypothesis, the results showed that a quadratic relationship was found to exist in the long run. Thus, other countries could learn from developed countries in this regard and try to smooth the EKC curve at relatively less cost.",TRUE,noun
R11,Science,R30175,Emissions and trade in Southeast and East Asian countries: a panel co-integration analysis,S100041,R30176,Power of income,R29372,Quadratic,"Purpose – The purpose of this paper is to analyse the implication of trade on carbon emissions in a panel of eight highly trading Southeast and East Asian countries, namely, China, Indonesia, South Korea, Malaysia, Hong Kong, The Philippines, Singapore and Thailand. Design/methodology/approach – The analysis relies on the standard quadratic environmental Kuznets curve (EKC) extended to include energy consumption and international trade. A battery of panel unit root and co-integration tests is applied to establish the variables’ stochastic properties and their long-run relations. Then, the specified EKC is estimated using the panel dynamic ordinary least square (OLS) estimation technique. Findings – The panel co-integration statistics verifies the validity of the extended EKC for the countries under study. Estimation of the long-run EKC via the dynamic OLS estimation method reveals the environmentally degrading effects of trade in these countries, especially in ASEAN and plus South Korea and Hong Kong. Pra...",TRUE,noun
R11,Science,R30390,Relationship between economic growth and environmental degradation: is there evidence of an environmental Kuznets curve for Brazil?,S100824,R30391,Power of income,R29372,Quadratic,"This study investigates the relationship between CO2 emissions, economic growth, energy use and electricity production by hydroelectric sources in Brazil. To verify the environmental Kuznets curve (EKC) hypothesis we use time-series data for the period 1971-2011. The autoregressive distributed lag methodology was used to test for cointegration in the long run. Additionally, the vector error correction model Granger causality test was applied to verify the predictive value of independent variables. Empirical results find that there is a quadratic long run relationship between CO2emissions and economic growth, confirming the existence of an EKC for Brazil. Furthermore, energy use shows increasing effects on emissions, while electricity production by hydropower sources has an inverse relationship with environmental degradation. The short run model does not provide evidence for the EKC theory. The differences between the results in the long and short run models can be considered for establishing environmental policies. This suggests that special attention to both variables-energy use and the electricity production by hydroelectric sources- could be an effective way to mitigate CO2 emissions in Brazil",TRUE,noun
R11,Science,R33819,The Effect of Algorithms on Copy Number Variant Detection,S117277,R33820,Algorithm,R33807,QuantiSNP,"Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.",TRUE,noun
R11,Science,R32575,Enhanced ship detection from overhead imagery,S110881,R32576,Satellite sensor,R32567,QuickBird,"In the authors' previous work, a sequence of image-processing algorithms was developed that was suitable for detecting and classifying ships from panchromatic Quickbird electro-optical satellite imagery. Presented in this paper are several new algorithms, which improve the performance and enhance the capabilities of the ship detection software, as well as an overview on how land masking is performed. Specifically, this paper describes the new algorithms for enhanced detection including for the reduction of false detects such as glint and clouds. Improved cloud detection and filtering algorithms are described as well as several texture classification algorithms are used to characterize the background statistics of the ocean texture. These detection algorithms employ both cloud and glint removal techniques, which we describe. Results comparing ship detection with and without these false detect reduction algorithms are provided. These are components of a larger effort to develop a low-cost solution for detecting the presence of ships from readily-available overhead commercial imagery and comparing this information against various open-source ship-registry databases to categorize contacts for follow-on analysis.",TRUE,noun
R11,Science,R32591,Ship detection and recognition in high-resolution satellite images,S110968,R32592,Satellite sensor,R32567,QuickBird,"Nowadays, the availability of high-resolution images taken from satellites, like Quickbird, Orbview, and others, offers the remote sensing community the possibility of monitoring and surveying vast areas of the Earth for different purposes, e.g. monitoring forest regions for ecological reasons. A particular application is the use of satellite images to survey the bottom of the seas around the Iberian peninsula which is flooded with innumerable treasures that are being plundered by specialized ships. In this paper we present a GIS-based application aimed to catalog areas of the sea with archeological interest and to monitor the risk of plundering of ships that stay within such areas during a suspicious period of time.",TRUE,noun
R11,Science,R32709,A new method on inshore ship detection in high-resolution satellite images using shape and context information,S111736,R32710,Satellite sensor,R32567,QuickBird,"In this letter, we present a new method to detect inshore ships using shape and context information. We first propose a new energy function based on an active contour model to segment water and land and minimize it with an iterative global optimization method. The proposed energy performs well on the different intensity distributions between water and land and produces a result that can be well used in shape and context analyses. In the segmented image, ships are detected with successive shape analysis, including shape analysis in the localization of ship head and region growing in computing the width and length of ship. Finally, to locate ships accurately and remove the false alarms, we unify them with a binary linear programming problem by utilizing the context information. Experiments on QuickBird images show the robustness and precision of our method.",TRUE,noun
R11,Science,R32643,Detection and classification of man-made offshore objects in TerraSAR-X and RapidEye imagery: Selected results of the DeMarine-DEKO project,S111282,R32644,Satellite sensor,R32642,RapidEye,"The project DEKO (Detection of artificial objects in sea areas) is integrated in the German DeMarine-Security project and focuses on the detection and classification of ships and offshore artificial objects relying on TerraSAR-X as well as on RapidEye multispectral optical images. The objectives are 1/ the development of reliable detection algorithms and 2/ the definition of effective, customized service concepts. In addition to an earlier publication, we describe in the following paper some selected results of our work. The algorithms for TerraSAR-X have been extended to a processing chain including all needed steps for ship detection and ship signature analysis, with an emphasis on object segmentation. For Rapid Eye imagery, a ship detection algorithm has been developed. Finally, some applications are described: Ship monitoring in the Strait of Dover based on TerraSAR-X StripMap using AIS information for verification, analyzing TerraSAR-X HighResolution scenes of an industrial harbor and finally an example of surveying a wind farm using change detection.",TRUE,noun
R11,Science,R33426,Determination of the success factors in supply chain networks: a Hong Kong‐based manufacturer's perspective,S115601,R33427,Critical success factors,R33423,reputation,"Purpose – The purpose of the paper is to investigate the factors that affect the decision‐making process of Hong Kong‐based manufacturers when they select a third‐party logistics (3PL) service provider and how 3PL service providers manage to retain customer loyalty in times of financial turbulence.Design/methodology/approach – The paper presents a survey‐based study targeting Hong Kong‐based manufacturers currently using 3PL companies. It investigates the relationship between the reasons for using 3PL services and the requirements for selecting a provider, and examines the relationship between customer satisfaction and loyalty. In addition, the relationships among various dimensions – in small to medium‐sized enterprises (SMEs), large enterprises and companies – of contracts of various lengths are investigated.Findings – In general, the reasons for using 3PL services and the requirements for selecting 3PL service providers are positive‐related. The dimension of “reputation” of satisfaction influences “pri...",TRUE,noun
R11,Science,R151135,The design of a dynamic emergency response management information system,S626108,R156016,Emergency Management Phase,L430867,Response,"ABSTRACT This paper systematically develops a set of general and supporting design principles and specifications for a ""Dynamic Emergency Response Management Information System"" (DERMIS) by identifying design premises resulting from the use of the ""Emergency Management Information System and Reference Index"" (EMISARI) and design concepts resulting from a comprehensive literature review. Implicit in crises of varying scopes and proportions are communication and information needs that can be addressed by today's information and communication technologies. However, what is required is organizing the premises and concepts that can be mapped into a set of generic design principles in turn providing a framework for the sensible development of flexible and dynamic Emergency Response Information Systems. A framework is presented for the system design and development that addresses the communication and information needs of first responders as well as the decision making needs of command and control personnel. The framework also incorporates thinking about the value of insights and information from communities of geographically dispersed experts and suggests how that expertise can be brought to bear on crisis decision making. Historic experience is used to suggest nine design premises. These premises are complemented by a series of five design concepts based upon the review of pertinent and applicable research. The result is a set of eight general design principles and three supporting design considerations that are recommended to be woven into the detailed specifications of a DERMIS. The resulting DERMIS design model graphically indicates the heuristic taken by this paper and suggests that the result will be an emergency response system flexible, robust, and dynamic enough to support the communication and information needs of emergency and crisis personnel on all levels. In addition it permits the development of dynamic emergency response information systems with tailored flexibility to support and be integrated across different sizes and types of organizations. This paper provides guidelines for system analysts and designers, system engineers, first responders, communities of experts, emergency command and control personnel, and MIS/IT researchers. SECTIONS 1. Introduction 2. Historical Insights about EMISARI 3. The emergency Response Atmosphere of OEP 4. Resulting Requirements for Emergency Response and Conceptual Design Specifics 4.1 Metaphors 4.2 Roles 4.3 Notifications 4.4 Context Visibility 4.5 Hypertext 5. Generalized Design Principles 6. Supporting Design Considerations 6.1 Resource Databases and Community Collaboration 6.2 Collective Memory 6.3 Online Communities of Experts 7. Conclusions and Final Observations 8. References 1. INTRODUCTION There have been, since 9/11, considerable efforts to propose improvements in the ability to respond to emergencies. However, the vast majority of these efforts have concentrated on infrastructure improvements to aid in mitigation of the impacts of either a man-made or natural disaster. In the area of communication and information systems to support the actual ongoing reaction to a disaster situation, the vast majority of the efforts have focused on the underlying technology to reliably support survivability of the underlying networks and physical facilities (Kunreuther and LernerLam 2002; Mork 2002). The fact that there were major failures of the basic technology and loss of the command center for 48 hours in the 9/11 event has made this an understandable result. The very workable commercial paging and digital mail systems supplied immediately afterwards by commercial firms (Michaels 2001; Vatis 2002) to the emergency response workers demonstrated that the correction of underlying technology is largely a process of setting integration standards and deciding to spend the necessary funds to update antiquated systems. …",TRUE,noun
R11,Science,R151157,"Emergency Response Information System Interoperability: Development of Chemical Incident Response Data Model",S626172,R156027,Emergency Management Phase,L430920,response,"Emergency response requires an efficient information supply chain for the smooth operations of intra- and inter-organizational emergency management processes. However, the breakdown of this information supply chain due to the lack of consistent data standards presents a significant problem. In this paper, we adopt a theory driven novel approach to develop a XML-based data model that prescribes a comprehensive set of data standards (semantics and internal structures) for emergency management to better address the challenges of information interoperability. Actual documents currently being used in mitigating chemical emergencies from a large number of incidents are used in the analysis stage. The data model development is guided by Activity Theory and is validated through a RFC-like process used in standards development. This paper applies the standards to the real case of a chemical incident scenario. Further, it complies with the national leading initiatives in emergency standards (National Information Exchange Model).",TRUE,noun
R11,Science,R151167,"Designing Emergency Response Dispatch Systems for Better Dispatcher Performance",S626278,R156042,Emergency Management Phase,L431011,response,"Emergency response systems are a relatively new and important area of research in the information systems community. While there is a growing body of literature in this research stream, human-computer interaction (HCI) issues concerning the design of emergency response system interfaces have received limited attention. Emergency responders often work in time pressured situations and depend on fast access to key information. One of the problems studied in HCI research is the design of interfaces to improve user information selection and processing performance. Based on cue-summation theory and research findings on parallel processing, associative processing, and hemispheric differences in information processing, this study proposes that information selection of target information in an emergency response dispatch application can be improved by using supplementary cues. Color-coding and sorting are proposed as relevant cues that can improve processing performance by providing prioritization heuristics. An experimental emergency response dispatch application is developed, and user performance is tested under conditions of varying complexity and time pressure. The results suggest that supplementary cues significantly improve performance, with better results often obtained when both cues are used. Additionally, the use of these cues becomes more beneficial as time pressure and task complexity increase.",TRUE,noun
R11,Science,R151208,Crisis Response Information Networks,S626344,R156053,Emergency Management Phase,L431066,Response,"In the past two decades, organizational scholars have focused significant attention on how organizations manage crises. While most of these studies concentrate on crisis prevention, there is a growing emphasis on crisis response. Because information that is critical to crisis response may become outdated as crisis conditions change, crisis response research recognizes that the management of information flows and networks is critical to crisis response. Yet despite its importance, little is known about the various types of crisis information networks and the role of IT in enabling these information networks. Employing concepts from information flow and social network theories, this paper contributes to crisis management research by developing four crisis response information network prototypes. These networks are based on two main dimensions: (1) information flow intensity and (2) network density. We describe how considerations of these two dimensions with supporting case evidence yield four prototypical crisis information response networks: Information Star, Information Pyramid, Information Forest, and Information Black-out. In addition, we examine the role of IT within each information network structure. We conclude with guidelines for managers to deploy appropriate information networks during crisis response and with suggestions for future research related to IT and crisis management.",TRUE,noun
R11,Science,R151210,"Design Principles of Integrated Information Platform for Emergency Responses: The Case of 2008 Beijing Olympic Games",S626355,R156054,Emergency Management Phase,L431076,response,"This paper investigates the challenges faced in designing an integrated information platform for emergency response management and uses the Beijing Olympic Games as a case study. The research methods are grounded in action research, participatory design, and situation-awareness oriented design. The completion of a more than two-year industrial secondment and six-month field studies ensured that a full understanding of user requirements had been obtained. A service-centered architecture was proposed to satisfy these user requirements. The proposed architecture consists mainly of information gathering, database management, and decision support services. The decision support services include situational overview, instant risk assessment, emergency response preplan, and disaster development prediction. Abstracting from the experience obtained while building this system, we outline a set of design principles in the general domain of information systems (IS) development for emergency management. These design principles form a contribution to the information systems literature because they provide guidance to developers who are aiming to support emergency response and the development of such systems that have not yet been adequately met by any existing types of IS. We are proud that the information platform developed was deployed in the real world and used in the 2008 Beijing",TRUE,noun
R11,Science,R151238,Social Media and Emergency Management: Exploring State and Local Tweets,S626456,R156068,Emergency Management Phase,L431163,response,"Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.",TRUE,noun
R11,Science,R151242,Design of a Resilient Information System for Disaster Response,S626478,R156070,Emergency Management Phase,L431183,response,"The devastating 2011 Great East Japan Earthquake made people aware of the importance of Information and Communication Technology (ICT) for sustaining life during and soon after a disaster. The difficulty in recovering information systems, because of the failure of ICT, hindered all recovery processes. The paper explores ways to make information systems resilient in disaster situations. Resilience is defined as quickly regaining essential capabilities to perform critical post disaster missions and to smoothly return to fully stable operations thereafter. From case studies and the literature, we propose that a frugal IS design that allows creative responses will make information systems resilient in disaster situations. A three-stage model based on a chronological sequence was employed in structuring the proposed design principles.",TRUE,noun
R11,Science,R151256,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011",S626523,R156077,Emergency Management Phase,L431221,Response,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,noun
R11,Science,R151302,"Digitally enabled disaster response: the emergence of social media as boundary objects in a flooding disaster",S626677,R156098,Emergency Management Phase,L431354,response,"In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 – one of the worst flooding disasters in more than 50 years that left the country severely impaired – this paper provides an in‐depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross‐boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.",TRUE,noun
R11,Science,R153575,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011.",S616710,R153885,Emergency Management Phase,L425289,Response,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,noun
R11,Science,R50025,Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network,S153323,R50027,contains,R50024,Results,"Diabetes Mellitus (DM) is a chronic, progressive and life-threatening disease. The ocular manifestations of DM, Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME), are the leading causes of blindness in the adult population throughout the world. Early diagnosis of DR and DM through screening tests and successive treatments can reduce the threat to visual acuity. In this context, we propose an encoder decoder based semantic segmentation network SOP-Net (Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network) for simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms). The proposed semantic segmentation framework is capable of providing segmentation results at pixel-level with good localization of object boundaries. SOP-Net has been trained and tested on IDRiD dataset which is publicly available with pixel level annotations of retinal pathologies. The network achieved average accuracies of 98.98%, 90.46%, 96.79%, and 96.70% for segmentation of hard exudates, soft exudates, hemorrhages, and microaneurysms. The proposed methodology has the capability to be used in developing a diagnostic system for organizing large scale ophthalmic screening programs.",TRUE,noun
R11,Science,R52278,Measuring the predictability of life outcomes with a scientific mass collaboration,S160089,R52279,contains,R52273,Results,"How predictable are life trajectories? We investigated this question with a scientific mass collaboration using the common task method; 160 teams built predictive models for six life outcomes using data from the Fragile Families and Child Wellbeing Study, a high-quality birth cohort study. Despite using a rich dataset and applying machine-learning methods optimized for prediction, the best predictions were not very accurate and were only slightly better than those from a simple benchmark model. Within each outcome, prediction error was strongly associated with the family being predicted and weakly associated with the technique used to generate the prediction. Overall, these results suggest practical limits to the predictability of life outcomes in some settings and illustrate the value of mass collaborations in the social sciences.",TRUE,noun
R11,Science,R137374,Investigating Interactive Search Behaviour of Medical Students: An Exploratory Survey,S543425,R137375,contains,R137370,Results,"In this paper, we investigate medical students' medical search behavior on a medical domain. We use two behavioral signals: detailed query analysis (qualitative and quantitative) and task completion time to understand how medical students perform medical searches based on varying task complexity. We also investigate how task complexity and topic familiarity affect search behavior. We gathered 80 interactive search sessions from an exploratory survey with 20 medical students. We observe information searching behavior using 3 simulated work task scenarios and 1 personal scenario. We present quantitative results from two perspectives: overall and user perceived task complexity. We also analyze query properties from a qualitative aspect. Our results show task complexity and topic familiarity affect search behavior of medical students. In some cases, medical students demonstrate different search traits on a personal task in comparison to the simulated work task scenarios. These findings help us better understand medical search behavior. Medical search engines can use these findings to detect and adapt to medical students' search behavior to enhance a student's search experience.",TRUE,noun
R11,Science,R25985,Knowledge-based derivation of document logical structure,S80436,R26011,Key Idea,L50817,rule-based,"The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.",TRUE,noun
R11,Science,R25985,Knowledge-based derivation of document logical structure,S80438,R26011,Physical Layout Representation,L50819,rules,"The analysis of a document image to derive a symbolic description of its structure and contents involves using spatial domain knowledge to classify the different printed blocks (e.g., text paragraphs), group them into logical units (e.g., newspaper stories), and determine the reading order of the text blocks within each unit. These steps describe the conversion of the physical structure of a document into its logical structure. We have developed a computational model for document logical structure derivation, in which a rule-based control strategy utilizes the data obtained from analyzing a digitized document image, and makes inferences using a multi-level knowledge base of document layout rules. The knowledge-based document logical structure derivation system (DeLoS) based on this model consists of a hierarchical rule-based control system to guide the block classification, grouping and read-ordering operations; a global data structure to store the document image data and incremental inferences; and a domain knowledge base to encode the rules governing document layout.",TRUE,noun
R11,Science,R30594,Eye Localization based on Multi-Scale Gabor Feature Vector Model,S102092,R30624,Challenges,R30614,scale,"Eye localization is necessary for face recognition and related application areas. Most of eye localization algorithms reported thus far still need to be improved about precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between Gabor feature vector at an initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in the previous researches.",TRUE,noun
R11,Science,R30606,2D cascaded AdaBoost for eye localization,S102122,R30627,Challenges,R30614,scale,"In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term ""2D"", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on huge-scale training set; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. The proposed structure is applied to eye localization and evaluated on four public face databases, extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method",TRUE,noun
R11,Science,R30608,A robust eye localization method for low quality face images,S102149,R30631,Challenges,R30614,scale,"Eye localization is an important part of a face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on high-quality face images, their precision drops on low-quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade gives each image patch a chance to contribute to the final result, regardless of whether the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. These are: (1) extending the feature set, and (2) stacking two classifiers at multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.",TRUE,noun
R11,Science,R30611,Robust Facial Features Localization on Rotation Arbitrary Multi-View face in Complex Background,S102190,R30636,Challenges,R30614,scale,"Focused on facial features localization on multi-view faces arbitrarily rotated in plane, a novel detection algorithm based on an improved SVM is proposed. First, the face is located by the rotation invariant multi-view (RIMV) face detector and its pose in plane is corrected by rotation. After the searching ranges of the facial features are determined, the crossing detection method, which uses the brow-eye and nose-mouth features and the improved SVM detectors trained on large-scale multi-view facial feature examples, is adopted to find the candidate eye, nose and mouth regions. Based on the fact that the window region with a higher value in the SVM discriminant function is relatively closer to the object, and the same object tends to be repeatedly detected by nearby windows, the candidate eyes, nose and mouth regions are filtered and merged to refine their location on the multi-view face. Experiments show that the algorithm has very good accuracy and robustness for facial feature localization with expression and arbitrary face pose in complex backgrounds.",TRUE,noun
R11,Science,R30620,Eye localization through multiscale sparse dictionaries,S102063,R30621,Challenges,R30614,scale,"This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method.",TRUE,noun
R11,Science,R25914,Diagnosis of Schistosomiasis by Reagent Strip Test for Detection of Circulating Cathodic Antigen,S79699,R25915,Application,L50221,Schistosomiasis,"ABSTRACT A newly developed reagent strip assay for the diagnosis of schistosomiasis based on parasite antigen detection in urine of infected individuals was evaluated. The test uses the principle of lateral flow through a nitrocellulose strip of the sample mixed with a colloidal carbon conjugate of a monoclonal antibody specific for Schistosoma circulating cathodic antigen (CCA). When used to diagnose a group of highly infected schoolchildren in Mwanza, Tanzania, the strip assay demonstrated high sensitivity and association with the intensity of infection as measured both by egg counts and by circulating anodic antigen and CCA levels determined by enzyme-linked immunosorbent assay. A specificity of ca. 90% was shown in a group of schistosome-negative schoolchildren from Tarime, Tanzania, an area where schistosomiasis is not endemic. The test is easy to perform and requires no technical equipment or special training. The stability of the strips and the conjugate in the dry format lasts for at least 3 months at ambient temperature in sealed packages, making it suitable for transport and use in areas where schistosomiasis is endemic. This assay can easily be developed into an end-user format.",TRUE,noun
R11,Science,R70539,"Development of an infection screening system for entry inspection at airport quarantine stations using ear temperature, heart and respiration rates",S335765,R70570,Objective,L242578,Screening,"After the outbreak of severe acute respiratory syndrome (SARS) in 2003, many international airport quarantine stations conducted fever-based screening to identify infected passengers using infrared thermography for preventing global pandemics. Due to environmental factors affecting measurement of facial skin temperature with thermography, some previous studies revealed the limits of authenticity in detecting infectious symptoms. In order to implement more strict entry screening in the epidemic seasons of emerging infectious diseases, we developed an infection screening system for airport quarantines using multi-parameter vital signs. This system can automatically detect infected individuals within several tens of seconds by a neural-network-based discriminant function using measured vital signs, i.e., heart rate obtained by a reflective photo sensor, respiration rate determined by a 10-GHz non-contact respiration radar, and the ear temperature monitored by a thermography. In this paper, to reduce the environmental effects on thermography measurement, we adopted the ear temperature as a new screening indicator instead of facial skin. We tested the system on 13 influenza patients and 33 normal subjects. The sensitivity of the infection screening system in detecting influenza was 92.3%, which was higher than the sensitivity reported in our previous paper (88.0%) with average facial skin temperature.",TRUE,noun
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335773,R70571,Objective,L242585,Screening,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,noun
R11,Science,R25597,An empirical study on the relationship between the use of agile practices and the success of Scrum projects,S77249,R25598,Agile Method,R25585,Scrum,"In this article, factors considered critical for the success of projects managed using Scrum are correlated to the results of software projects in industry. Using a set of 25 factors compiled by other researchers, a cross-sectional survey was conducted to evaluate the presence or application of these factors in 11 software projects that used Scrum in 9 different software companies located in Recife-PE, Brazil. The questionnaire was applied to 65 developers and Scrum Masters, representing 75% (65/86) of the professionals that have participated in the projects. The result was correlated with the level of success achieved by the projects, measured by the subjective perception of the project participant, using Spearman's rank correlation coefficient. The main finding is that only 32% (8/25) of the factors correlated positively with project success, raising the question of whether the factors hypothesized in the literature as being critical to the success of agile software projects indeed have an effect on project success. Given the limitations regarding the generalization of this result, other forms of empirical results, in particular case studies, are needed to test this question.",TRUE,noun
R11,Science,R25610,A qualitative study of the determinants of self-managing team effectiveness in a scrum team,S77306,R25611,Agile Method,R25585,Scrum,"There is much evidence in the literature that the use of self-managing teams has positive impacts on several dimensions of team effectiveness. Agile methods, supported by the Agile Manifesto, defend the use of self-managing teams in software development in place of hierarchically managed, traditional teams. The goal of this research was to study how a self-managing software team works in practice and how the behaviors of the software organization support or hinder the effectiveness of such teams. We performed a single-case holistic case study, looking in depth into the actual behavior of a mature Scrum team in industry. Using interviews and participant observation, we collected qualitative data from five team members in several interactions. We extracted the behavior of the team and of the software company in terms of the determinants of self-managing team effectiveness defined in a theoretical model from the literature. We found evidence that 17 out of 24 determinants of this model exist in the studied context. We concluded that certain determinants can support or facilitate the adoption of methodologies like Scrum, while the use of Scrum may affect other determinants.",TRUE,noun
R11,Science,R25612,Investigating the Long-Term Acceptance of Agile Methodologies: An Empirical Study of Developer Perceptions in Scrum Projects,S77315,R25613,Agile Method,R25585,Scrum,"Agile development methodologies have gained great interest in research and practice. As their introduction considerably changes traditional working habits of developers, the long-term acceptance of agile methodologies becomes a critical success factor. Yet, current studies primarily examine the early adoption stage of agile methodologies. To investigate the long-term acceptance, we conducted a study at a leading insurance company that introduced Scrum in 2007. Using a qualitative research design and the Diffusion of Innovations Theory as a lens for analysis, we gained in-depth insights into factors influencing the acceptance of Scrum. Particularly, developers felt Scrum to be more compatible to their actual working practices. Moreover, they perceived the use of Scrum to deliver numerous relative advantages. However, we also identified possible barriers to acceptance since developers felt both the complexity of Scrum and the required discipline to be higher in comparison with traditional development methodologies.",TRUE,noun
R11,Science,R28078,High-speed segmentation-driven high-resolution matching,S91684,R28079,Taxonomy stage: Step,R28076,Segmentation,"This paper proposes a segmentation-based approach for matching of high-resolution stereo images in real time. The approach employs direct region matching in a raster scan fashion influenced by scanline approaches, but with pixel decoupling. To enable real-time performance it is implemented as a heterogeneous system of an FPGA and a sequential processor. Additionally, the approach is designed for low resource usage in order to qualify as part of unified image processing in an embedded system.",TRUE,noun
R11,Science,R70546,Machine Learning Models for Analysis of Vital Signs Dynamics: A Case for Sepsis Onset Prediction,S335793,R70573,Infection,L242603,Sepsis,"Objective. Achieving accurate prediction of sepsis detection moment based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time, thus early insight of sepsis onset may save lives and reduce costs. Methodology. We present a novel approach for feature extraction, which focuses on the hypothesis that unstable patients are more prone to develop sepsis during ICU stay. These features are used in machine learning algorithms to provide a prediction of a patient’s likelihood to develop sepsis during ICU stay, hours before it is diagnosed. Results. Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features which represent the variability in vital signs. These algorithms aimed to calculate a patient’s probability to become septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC) was achieved with Support Vector Machine (SVM) with radial basis function, which was 88.38%. Conclusions. The high level of predictive accuracy along with the simplicity and availability of input variables present great potential if applied in ICUs. Variability of a patient’s vital signs proves to be a good indicator of one’s chance to become septic during ICU stay.",TRUE,noun
R11,Science,R70548,Machine-Learning-Based Laboratory Developed Test for the Diagnosis of Sepsis in High-Risk Patients,S335802,R70574,Infection,L242611,Sepsis,"Sepsis, a dysregulated host response to infection, is a major health burden in terms of both mortality and cost. The difficulties clinicians face in diagnosing sepsis, alongside the insufficiencies of diagnostic biomarkers, motivate the present study. This work develops a machine-learning-based sepsis diagnostic for a high-risk patient group, using a geographically and institutionally diverse collection of nearly 500,000 patient health records. Using only a minimal set of clinical variables, our diagnostics outperform common severity scoring systems and sepsis biomarkers and benefit from being available immediately upon ordering.",TRUE,noun
R11,Science,R70554,Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach,S335830,R70577,Infection,L242636,Sepsis,"Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. 
Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.",TRUE,noun
R11,Science,R70560,Physiological monitoring for critically ill patients: testing a predictive model for the early detection of sepsis,S335857,R70580,Infection,L242660,Sepsis,"OBJECTIVE To assess the predictive value for the early detection of sepsis of the physiological monitoring parameters currently recommended by the Surviving Sepsis Campaign. METHODS The Project IMPACT data set was used to assess whether the physiological parameters of heart rate, mean arterial pressure, body temperature, and respiratory rate can be used to distinguish between critically ill adult patients with and without sepsis in the first 24 hours of admission to an intensive care unit. RESULTS All predictor variables used in the analyses differed significantly between patients with sepsis and patients without sepsis. However, only 2 of the predictor variables, mean arterial pressure and high temperature, were independently associated with sepsis. In addition, the temperature mean for hypothermia was significantly lower in patients without sepsis. The odds ratio for having sepsis was 2.126 for patients with a temperature of 38 degrees C or higher, 3.874 for patients with a mean arterial blood pressure of less than 70 mm Hg, and 4.63 times greater for patients who had both of these conditions. CONCLUSIONS The results support the use of some of the guidelines of the Surviving Sepsis Campaign. However, the lowest mean temperature was significantly less for patients without sepsis than for patients with sepsis, a finding that calls into question the clinical usefulness of using hypothermia as an early predictor of sepsis. Alone the group of variables used is not sufficient for discriminating between critically ill patients with and without sepsis.",TRUE,noun
R11,Science,R70562,Predictive models for severe sepsis in adult ICU patients,S335865,R70581,Infection,L242667,Sepsis,"Intensive Care Unit (ICU) patients have significant morbidity and mortality, often from complications that arise during the hospital stay. Severe sepsis is one of the leading causes of death among these patients. Predictive models have the potential to allow for earlier detection of severe sepsis and ultimately earlier intervention. However, current methods for identifying and predicting severe sepsis are biased and inadequate. The goal of this work is to identify a new framework for the prediction of severe sepsis and identify early predictors utilizing clinical laboratory values and vital signs collected in adult ICU patients. We explore models with logistic regression (LR), support vector machines (SVM), and logistic model trees (LMT) utilizing vital signs, laboratory values, or a combination of vital and laboratory values. When applied to a retrospective cohort of ICU patients, the SVM model using laboratory and vital signs as predictors identified 339 (65%) of the 3,446 patients as developing severe sepsis correctly. Based on this new framework and developed models, we provide a recommendation for the use in clinical decision support in ICU and non-ICU environments.",TRUE,noun
R11,Science,R70564,A Bayesian network for early diagnosis of sepsis patients: a basis for a clinical decision support system,S335877,R70582,Infection,L242678,Sepsis,"Sepsis is a severe medical condition caused by an inordinate immune response to an infection. Early detection of sepsis symptoms is important to prevent the progression into the more severe stages of the disease, which kills one in four of those it affects. Electronic medical records of 1492 patients containing 233 cases of sepsis were used in a clustering analysis to identify features that are indicative of sepsis and can be further used for training a Bayesian inference network. The Bayesian network was constructed using the systemic inflammatory response syndrome criteria, mean arterial pressure, and lactate levels for sepsis patients. The resulting network reveals a clear correlation between lactate levels and sepsis. Furthermore, it was shown that lactate levels may be predictive of the SIRS criteria. In this light, Bayesian networks of sepsis patients hold the promise of providing a clinical decision support system in the future.",TRUE,noun
R11,Science,R70566,From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system,S335885,R70583,Infection,L242685,Sepsis,"OBJECTIVE To develop a decision support system to identify patients at high risk for hyperlactatemia based upon routinely measured vital signs and laboratory studies. MATERIALS AND METHODS Electronic health records of 741 adult patients at the University of California Davis Health System who met at least two systemic inflammatory response syndrome criteria were used to associate patients' vital signs, white blood cell count (WBC), with sepsis occurrence and mortality. Generative and discriminative classification (naïve Bayes, support vector machines, Gaussian mixture models, hidden Markov models) were used to integrate heterogeneous patient data and form a predictive tool for the inference of lactate level and mortality risk. RESULTS An accuracy of 0.99 and discriminability of 1.00 area under the receiver operating characteristic curve (AUC) for lactate level prediction was obtained when the vital signs and WBC measurements were analysed in a 24 h time bin. An accuracy of 0.73 and discriminability of 0.73 AUC for mortality prediction in patients with sepsis was achieved with only three features: median of lactate levels, mean arterial pressure, and median absolute deviation of the respiratory rate. DISCUSSION This study introduces a new scheme for the prediction of lactate levels and mortality risk from patient vital signs and WBC. Accurate prediction of both these variables can drive the appropriate response by clinical staff and thus may have important implications for patient health and treatment outcome. CONCLUSIONS Effective predictions of lactate levels and mortality risk can be provided with a few clinical variables when the temporal aspect and variability of patient data are considered.",TRUE,noun
R11,Science,R70546,Machine Learning Models for Analysis of Vital Signs Dynamics: A Case for Sepsis Onset Prediction,S335590,R70547,Objective,L242426,Sepsis,"Objective. Achieving accurate prediction of sepsis detection moment based on bedside monitor data in the intensive care unit (ICU). A good clinical outcome is more probable when onset is suspected and treated on time, thus early insight of sepsis onset may save lives and reduce costs. Methodology. We present a novel approach for feature extraction, which focuses on the hypothesis that unstable patients are more prone to develop sepsis during ICU stay. These features are used in machine learning algorithms to provide a prediction of a patient’s likelihood to develop sepsis during ICU stay, hours before it is diagnosed. Results. Five machine learning algorithms were implemented using R software packages. The algorithms were trained and tested with a set of 4 features which represent the variability in vital signs. These algorithms aimed to calculate a patient’s probability to become septic within the next 4 hours, based on recordings from the last 8 hours. The best area under the curve (AUC) was achieved with Support Vector Machine (SVM) with radial basis function, which was 88.38%. Conclusions. The high level of predictive accuracy along with the simplicity and availability of input variables present great potential if applied in ICUs. Variability of a patient’s vital signs proves to be a good indicator of one’s chance to become septic during ICU stay.",TRUE,noun
R11,Science,R70548,Machine-Learning-Based Laboratory Developed Test for the Diagnosis of Sepsis in High-Risk Patients,S335604,R70549,Objective,L242438,Sepsis,"Sepsis, a dysregulated host response to infection, is a major health burden in terms of both mortality and cost. The difficulties clinicians face in diagnosing sepsis, alongside the insufficiencies of diagnostic biomarkers, motivate the present study. This work develops a machine-learning-based sepsis diagnostic for a high-risk patient group, using a geographically and institutionally diverse collection of nearly 500,000 patient health records. Using only a minimal set of clinical variables, our diagnostics outperform common severity scoring systems and sepsis biomarkers and benefit from being available immediately upon ordering.",TRUE,noun
R11,Science,R70554,Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach,S335647,R70555,Objective,L242475,Sepsis,"Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. 
Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.",TRUE,noun
R11,Science,R70560,Physiological monitoring for critically ill patients: testing a predictive model for the early detection of sepsis,S335689,R70561,Objective,L242511,Sepsis,"OBJECTIVE To assess the predictive value for the early detection of sepsis of the physiological monitoring parameters currently recommended by the Surviving Sepsis Campaign. METHODS The Project IMPACT data set was used to assess whether the physiological parameters of heart rate, mean arterial pressure, body temperature, and respiratory rate can be used to distinguish between critically ill adult patients with and without sepsis in the first 24 hours of admission to an intensive care unit. RESULTS All predictor variables used in the analyses differed significantly between patients with sepsis and patients without sepsis. However, only 2 of the predictor variables, mean arterial pressure and high temperature, were independently associated with sepsis. In addition, the temperature mean for hypothermia was significantly lower in patients without sepsis. The odds ratio for having sepsis was 2.126 for patients with a temperature of 38 degrees C or higher, 3.874 for patients with a mean arterial blood pressure of less than 70 mm Hg, and 4.63 times greater for patients who had both of these conditions. CONCLUSIONS The results support the use of some of the guidelines of the Surviving Sepsis Campaign. However, the lowest mean temperature was significantly less for patients without sepsis than for patients with sepsis, a finding that calls into question the clinical usefulness of using hypothermia as an early predictor of sepsis. Alone the group of variables used is not sufficient for discriminating between critically ill patients with and without sepsis.",TRUE,noun
R11,Science,R70562,Predictive models for severe sepsis in adult ICU patients,S335703,R70563,Objective,L242523,Sepsis,"Intensive Care Unit (ICU) patients have significant morbidity and mortality, often from complications that arise during the hospital stay. Severe sepsis is one of the leading causes of death among these patients. Predictive models have the potential to allow for earlier detection of severe sepsis and ultimately earlier intervention. However, current methods for identifying and predicting severe sepsis are biased and inadequate. The goal of this work is to identify a new framework for the prediction of severe sepsis and identify early predictors utilizing clinical laboratory values and vital signs collected in adult ICU patients. We explore models with logistic regression (LR), support vector machines (SVM), and logistic model trees (LMT) utilizing vital signs, laboratory values, or a combination of vital and laboratory values. When applied to a retrospective cohort of ICU patients, the SVM model using laboratory and vital signs as predictors identified 339 (65%) of the 3,446 patients as developing severe sepsis correctly. Based on this new framework and developed models, we provide a recommendation for the use in clinical decision support in ICU and non-ICU environments.",TRUE,noun
R11,Science,R70564,A Bayesian network for early diagnosis of sepsis patients: a basis for a clinical decision support system,S335720,R70565,Objective,L242538,Sepsis,"Sepsis is a severe medical condition caused by an inordinate immune response to an infection. Early detection of sepsis symptoms is important to prevent the progression into the more severe stages of the disease, which kills one in four it affects. Electronic medical records of 1492 patients containing 233 cases of sepsis were used in a clustering analysis to identify features that are indicative of sepsis and can be further used for training a Bayesian inference network. The Bayesian network was constructed using the systemic inflammatory response syndrome criteria, mean arterial pressure, and lactate levels for sepsis patients. The resulting network reveals a clear correlation between lactate levels and sepsis. Furthermore, it was shown that lactate levels may be predictive of the SIRS criteria. In this light, Bayesian networks of sepsis patients hold the promise of providing a clinical decision support system in the future.",TRUE,noun
R11,Science,R70566,From vital signs to clinical outcomes for patients with sepsis: a machine learning basis for a clinical decision support system,S335732,R70567,Objective,L242548,Sepsis,"OBJECTIVE To develop a decision support system to identify patients at high risk for hyperlactatemia based upon routinely measured vital signs and laboratory studies. MATERIALS AND METHODS Electronic health records of 741 adult patients at the University of California Davis Health System who met at least two systemic inflammatory response syndrome criteria were used to associate patients' vital signs, white blood cell count (WBC), with sepsis occurrence and mortality. Generative and discriminative classification (naïve Bayes, support vector machines, Gaussian mixture models, hidden Markov models) were used to integrate heterogeneous patient data and form a predictive tool for the inference of lactate level and mortality risk. RESULTS An accuracy of 0.99 and discriminability of 1.00 area under the receiver operating characteristic curve (AUC) for lactate level prediction was obtained when the vital signs and WBC measurements were analysed in a 24 h time bin. An accuracy of 0.73 and discriminability of 0.73 AUC for mortality prediction in patients with sepsis was achieved with only three features: median of lactate levels, mean arterial pressure, and median absolute deviation of the respiratory rate. DISCUSSION This study introduces a new scheme for the prediction of lactate levels and mortality risk from patient vital signs and WBC. Accurate prediction of both these variables can drive the appropriate response by clinical staff and thus may have important implications for patient health and treatment outcome. CONCLUSIONS Effective predictions of lactate levels and mortality risk can be provided with a few clinical variables when the temporal aspect and variability of patient data are considered.",TRUE,noun
R11,Science,R26330,On the Interactions Between Routing and Inventory-Management Policies in a One-Warehouse N-Retailer Distribution System,S82543,R26331,approach,R8470,Simulation,"This paper examines the interactions between routing and inventory-management decisions in a two-level supply chain consisting of a cross-docking warehouse and N retailers. Retailer demand is normally distributed and independent across retailers and over time. Travel times are fixed between pairs of system sites. Every m time periods, system inventory is replenished at the warehouse, whereupon an uncapacitated vehicle departs on a route that visits each retailer once and only once, allocating all of its inventory based on the status of inventory at the retailers who have not yet received allocations. The retailers experience newsvendor-type inventory-holding and backorder-penalty costs each period; the vehicle experiences in-transit inventory-holding costs each period. Our goal is to determine a combined system inventory-replenishment, routing, and inventory-allocation policy that minimizes the total expected cost/period of the system over an infinite time horizon. Our analysis begins by examining the determination of the optimal static route, i.e., the best route if the vehicle must travel the same route every replenishment-allocation cycle. Here we demonstrate that the optimal static route is not the shortest-total-distance (TSP) route, but depends on the variance of customer demands, and, if in-transit inventory-holding costs are charged, also on mean customer demands. We then examine dynamic-routing policies, i.e., policies that can change the route from one system-replenishment-allocation cycle to another, based on the status of the retailers' inventories. 
Here we argue that in the absence of transportation-related cost, the optimal dynamic-routing policy should be viewed as balancing management's ability to respond to system uncertainties (by changing routes) against system uncertainties that are induced by changing routes. We then examine the performance of a change-revert heuristic policy. Although its routing decisions are not fully dynamic, but determined and fixed for a given cycle at the time of each system replenishment, simulation tests with N = 2 and N = 6 retailers indicate that its use can substantially reduce system inventory-related costs even if most of the time the chosen route is the optimal static route.",TRUE,noun
R11,Science,R27002,Scheduling short-term marine transport of bulk products,S86786,R27003,Method,R8470,Simulation,"A multinational company uses a personal computer to schedule a fleet of coastal tankers and barges transporting liquid bulk products among plants, distribution centres (tank farms), and industrial customers. A simple spreadsheet interface cloaks a sophisticated optimization-based decision support system and makes this system useable via a variety of natural languages. The dispatchers, whose native language is not English, and some of whom presumably speak no English at all, communicate via the spreadsheet, and view recommended schedules displayed in Gantt charts, both internationally familiar tools. Inside the spreadsheet, a highly detailed simulation can generate every feasible alternate vessel employment schedule, and an integer linear set partitioning model selects one schedule for each vessel so that all loads and deliveries are completed at minimal cost while satisfying all operational requirements. The optimized fleet employment schedule is displayed graphically with hourly time resolution over a planning horizon of 2-3 weeks. Each vessel will customarily make several voyages and many port calls to load and unload products during this time.",TRUE,noun
R11,Science,R27015,Strategic fleet size planning for maritime refrigerated containers,S86862,R27016,Method,R8470,Simulation,"In the present economic climate, it is often the case that profits can only be improved, or for that matter maintained, by improving efficiency and cutting costs. This is particularly notorious in the shipping business, where it has been seen that the competition is getting tougher among carriers, thus alliances and partnerships are resulting for cost effective services in recent years. In this scenario, effective planning methods are important not only for strategic but also operating tasks, covering their entire transportation systems. Container fleet size planning is an important part of the strategy of any shipping line. This paper addresses the problem of fleet size planning for refrigerated containers, to achieve cost-effective services in a competitive maritime shipping market. An analytical model is first discussed to determine the optimal size of an own dry container fleet. Then, this is extended for an own refrigerated container fleet, which is the case when an extremely unbalanced trade represents one of the major investment decisions to be taken by liner operators. Next, a simulation model is developed for fleet sizing in a more practical situation and, by using this, various scenarios are analysed to determine the most convenient composition of refrigerated fleet between own and leased containers for the transpacific cargo trade.",TRUE,noun
R11,Science,R25142,A large-scale LED array to support anticipatory driving,S74653,R25143,type,R25141,Speed,"We present a novel assistance system which supports anticipatory driving by means of fostering early deceleration. Upcoming technologies like Car2X communication provide information about a time interval which is currently uncovered. This information shall be used in the proposed system to inform drivers about future situations which require reduced speed. Such situations include traffic jams, construction sites or speed limits. The HMI is an optical output system based on line arrays of RGB-LEDs. Our contribution presents construction details as well as user evaluations. The results show an earlier deceleration of 3.9 – 11.5 s and a shorter deceleration distance of 2 – 166 m.",TRUE,noun
R11,Science,R25144,GPS enabled speed control embedded system speed limiting device with display and engine control interface,S74666,R25145,type,R25141,Speed,"In the past decade, there have been close to 350,000 fatal crashes in the United States [1]. With various improvements in traffic and vehicle safety, the number of such crashes is decreasing every year. One of the ways to reduce vehicle crashes is to prevent excessive speeding in the roads and highways. The paper aims to outline the design of an embedded system that will automatically control the speed of a motor vehicle based on its location determined by a GPS device. The embedded system will make use of an AVR ATMega128 microcontroller connected to an EM-406A GPS receiver. The large amount of location input data justifies the use of an ATMega128 microcontroller which has 128KB of programmable flash memory as well as 4KB SRAM, and a 4KB EEPROM Memory [2]. The output of the ATMega128 will be a DOGMI63W-A LCD module which will display information of the current and the set-point speed of the vehicle at the current position. A discrete indicator LED will flash at a pre-determined frequency when the speed of the vehicle has exceeded the recommended speed limit. Finally, the system will have outputs that will communicate with the Engine Control Unit (ECU) of the vehicle. For the limited scope of this project, the ECU is simulated as an external device with two inputs that will acknowledge pulse-trains of particular frequencies to limit the speed of a vehicle. The speed control system will be programmed using mixed language C and Assembly with the latter in use for some pre-written subroutines to drive the LCD module. The GPS module will transmit National Marine Electronics Association (NMEA) data strings to the microcontroller (MCU) using Serial Peripheral Interface (SPI). The MCU will use the location coordinates (latitude and longitude) and the speed from the NMEA RMC output string. 
The current speed is then compared against the recommended speed for the vehicle's location. The memory locations in the ATMega128 can be used to store set-point speed values against a particular set of location co-ordinates. Apart from its implementation in human operated vehicles, the project can be used to control speed of autonomous cars and to implement the idea of a variable speed limit on roads introduced by the Department of Transportation [3].",TRUE,noun
R11,Science,R25146,ChaseLight,S74678,R25147,type,R25141,Speed,"In order to support drivers to maintain a predefined driving speed, we introduce ChaseLight, an in-car system that uses a programmable LED stripe mounted along the A-pillar of a car. The chase light (i.e., stripes of adjacent LEDs that are turned on and off frequently to give the illusion of lights moving along the stripe) provides ambient feedback to the driver about speed. We present a simulator based user study that uses three different types of feedback: (1) chase light with constant speed, (2) with proportional speed (i.e., chase light speed correlates with vehicle speed), and (3) with adaptive speed (i.e., chase light speed adapts to a target speed of the vehicle). Our results show that the adaptive condition is suited best to help a driver to control driving speed. The proportional speed condition resulted in a significantly slower mean speed than the baseline condition (no chase light).",TRUE,noun
R11,Science,R25156,heart rate,S74726,R25157,type,R25153,State,"Electric Vehicles (EVs) are an emerging technology and open up an exciting new space for designing in-car interfaces. This technology enhances driving experience by a strong acceleration, regenerative braking and especially a reduced noise level. However, engine vibrations and sound transmit valuable feedback to drivers of conventional cars, e.g. signaling that the engine is running and ready to go. We address this lack of feedback with Heartbeat, a multimodal electric vehicle information system. Heartbeat communicates (1) the state of the electric drive including energy flow and (2) the energy level of the batteries in a natural and experienceable way. We enhance the underlying Experience Design process by formulating working principles derived from an experience story in order to transport its essence throughout the following design phases. This way, we support the design of a consistent experience and resolve the tension between implementation constraints (e.g., space) and the persistence of the underlying story while building prototypes and integrating them into a technical environment (e.g., a dashboard).",TRUE,noun
R11,Science,R26554,Energy-efficient communication protocol for wireless microsensor networks,S83904,R26657,Dynamism,R26656,Static,"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster base stations (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.",TRUE,noun
R11,Science,R159710,"Video Variants for CrowdRE: How to Create Linear Videos, Vision Videos, and Interactive Videos",S705480,R159712,has study,R182383,study,"In CrowdRE, heterogeneous crowds of stakeholders are involved in requirements elicitation. One major challenge is to inform several people about a complex and sophisticated piece of software so that they can effectively contextualize and contribute their opinions and insights. Overly technical or boring textual representations might lead to misunderstandings or even repel some people. Videos may be better suited for this purpose. There are several variants of video available: Linear videos have been used for tutorials on YouTube and similar platforms. Interactive media have been proposed for activating commitment and valuable feedback. Vision videos were explicitly introduced to solicit feedback about product visions and software requirements. In this paper, we describe essential steps of creating a useful video, making it interactive, and presenting it to stakeholders. We consider four potentially useful types of videos for CrowdRE and how to produce them. To evaluate feasibility of this approach for creating video variants, all presented steps were performed in a case study.",TRUE,noun
R11,Science,R27833,Learning blood management in orthopedic surgery through gameplay,S90740,R27834,Topic,R27828,Surgery,"Orthopedic surgery treats the musculoskeletal system, in which bleeding is common and can be fatal. To help train future surgeons in this complex practice, researchers designed and implemented a serious game for learning orthopedic surgery. The game focuses on teaching trainees blood management skills, which are critical for safe operations. Using state-of-the-art graphics technologies, the game provides an interactive and realistic virtual environment. It also integrates game elements, including task-oriented and time-attack scenarios, bonuses, game levels, and performance evaluation tools. To study the system's effect, the researchers conducted experiments on player completion time and off-target contacts to test their learning of psychomotor skills in blood management.",TRUE,noun
R11,Science,R70622,Improving Prediction of Surgical Site Infection Risk with Multilevel Modeling,S336156,R70623,Objective,L242896,Surveillance,"Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure. Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10−9), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07]). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10−9), with an area under the ROC curve of 0.84. 
Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix).",TRUE,noun
R11,Science,R28064,Using the GPU for fast symmetry-based dense stereo matching in high resolution images,S91619,R28065,Taxonomy stage: Step,R28062,Symmetry,"SymStereo is a new algorithm used for stereo estimation. Instead of measuring photo-similarity, it proposes novel cost functions that measure symmetry for evaluating the likelihood of two pixels being a match. In this work we propose a parallel approach of the LogN matching cost variant of SymStereo capable of processing pairs of images in real-time for depth estimation. The power of the graphics processing units utilized allows exploring more efficiently the bank of log-Gabor wavelets developed to analyze symmetry, in the spectral domain. We analyze tradeoffs and propose different parameterizations of the signal processing algorithm to accommodate image size, dimension of the filter bank, number of wavelets and also the number of disparities that controls the space density of the estimation, and still process up to 53 frames per second (fps) for images with size 288 × 384 and up to 3 fps for 768 × 1024 images.",TRUE,noun
R11,Science,R28064,Using the GPU for fast symmetry-based dense stereo matching in high resolution images,S91615,R28065,Algorithm,R28063,SymStereo,"SymStereo is a new algorithm used for stereo estimation. Instead of measuring photo-similarity, it proposes novel cost functions that measure symmetry for evaluating the likelihood of two pixels being a match. In this work we propose a parallel approach of the LogN matching cost variant of SymStereo capable of processing pairs of images in real-time for depth estimation. The power of the graphics processing units utilized allows exploring more efficiently the bank of log-Gabor wavelets developed to analyze symmetry, in the spectral domain. We analyze tradeoffs and propose different parameterizations of the signal processing algorithm to accommodate image size, dimension of the filter bank, number of wavelets and also the number of disparities that controls the space density of the estimation, and still process up to 53 frames per second (fps) for images with size 288 × 384 and up to 3 fps for 768 × 1024 images.",TRUE,noun
R11,Science,R25991,Logical structure analysis of book document images using contents information,S80485,R26014,Logical Labels,L50860,table,"Numerous studies have so far been carried out extensively for the analysis of document image structure, with particular emphasis placed on media conversion and layout analysis. For the conversion of a collection of books in a library into the form of hypertext documents, a logical structure extraction technology is indispensable, in addition to document layout analysis. The table of contents of a book generally involves very concise and faithful information to represent the logical structure of the entire book. That is to say, we can efficiently analyze the logical structure of a book by making full use of its contents pages. This paper proposes a new approach for document logical structure analysis to convert document images and contents information into an electronic document. First, the contents pages of a book are analyzed to acquire the overall document logical structure. Thereafter, we are able to use this information to acquire the logical structure of all the pages of the book by analyzing consecutive pages of a portion of the book. Test results demonstrate very high discrimination rates: up to 97.6% for the headline structure, 99.4% for the text structure, 97.8% for the page-number structure and almost 100% for the head-foot structure.",TRUE,noun
R11,Science,R31580,Temperature control of a pilot plant reactor system using a genetic algorithm model-based control approach,S105805,R31581,Objective/estimate(s) process systems,R8782,Temperature,"The work described in this paper aims at exploring the use of an artificial intelligence technique, i.e. genetic algorithm (GA), for designing an optimal model-based controller to regulate the temperature of a reactor. GA is utilized to identify the best control action for the system by creating possible solutions and thereby to propose the correct control action to the reactor system. This value is then used as the set point for the closed loop control system of the heat exchanger. A continuous stirred tank reactor is chosen as a case study, where the controller is then tested with multiple set-point tracking and changes in its parameters. The GA model-based control (GAMBC) is then implemented experimentally to control the reactor temperature of a pilot plant, where an irreversible exothermic chemical reaction is simulated by using the calculated steam flow rate. The dynamic behavior of the pilot plant reactor during the online control studies is highlighted, and comparison with the conventional tuned proportional integral derivative (PID) is presented. It is found that both controllers are able to control the process with comparable performance. Copyright © 2007 Curtin University of Technology and John Wiley & Sons, Ltd.",TRUE,noun
R11,Science,R30573,Defense against Sybil attack in vehicular ad hoc network based on roadside unit support,S101738,R30574,Confidentiality (Privacy) Technique,R30572,Timestamp,"In this paper, we propose a timestamp series approach to defend against Sybil attack in a vehicular ad hoc network (VANET) based on roadside unit support. The proposed approach targets the initial deployment stage of VANET when basic roadside unit (RSU) support infrastructure is available and a small fraction of vehicles have network communication capability. Unlike previously proposed schemes that require a dedicated vehicular public key infrastructure to certify individual vehicles, in our approach RSUs are the only components issuing the certificates. Due to the differences of moving dynamics among vehicles, it is rare to have two vehicles passing by multiple RSUs at exactly the same time. By exploiting this spatial and temporal correlation between vehicles and RSUs, two messages will be treated as Sybil attack issued by one vehicle if they have the similar timestamp series issued by RSUs. The timestamp series approach needs neither vehicular-based public-key infrastructure nor Internet accessible RSUs, which makes it an economical solution suitable for the initial stage of VANET.",TRUE,noun
R11,Science,R25997,Automated labeling in document images,S80513,R26016,Logical Labels,L50884,title,"The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.",TRUE,noun
R11,Science,R27331,Effect of shot peening on residual stress and fatigue life of a spring steel,S88171,R27332,Special Notes,R27329,Torsion,"This study describes shot peening effects such as shot hardness, shot size and shot projection pressure, on the residual stress distribution and fatigue life in reversed torsion of a 60SC7 spring steel. There appears to be a correlation between the fatigue strength and the area under the residual stress distribution curve. The biggest shot shows the best fatigue life improvement. However, for a shorter time of shot peening, small hard shot showed the best performance. Moreover, the superficial residual stresses and the amount of work hardening (characterised by the width of the X-ray diffraction line) do not remain stable during fatigue cycling. Indeed they decrease and their reduction rate is a function of the cyclic stress level and an inverse function of the depth of the plastically deformed surface layer.",TRUE,noun
R11,Science,R27359,"Residual Stress Relaxation and Fatigue Strength of AISI 4140 under Torsional Loading after Conventional Shot Peening, Stress Peening and Warm Peening",S88313,R27360,Special Notes,R27329,Torsion,"Cylindrical rods of 450°C quenched and tempered AISI 4140 were conventionally shot peened, stress peened and warm peened while rotating in the peening device. Warm peening at Tpeen = 310°C was conducted using a modified air blast shot peening machine with an electric air flow heater system. To perform stress peening using a torsional pre-stress, a device was conceived which allowed rotating pre-stressed samples without having material of the pre-loading gadget between the shot and the samples. Thus, same peening conditions for all peening procedures were ensured. The residual stress distributions present after the different peening procedures were evaluated and compared with results obtained after peening of flat material of the same steel. The differently peened samples were subjected to torsional pulsating stresses (R = 0) at different loadings to investigate their residual stress relaxation behavior. Additionally, the pulsating torsional strengths for the differently peened samples were determined.",TRUE,noun
R11,Science,R25742,A transaction mapping algorithm for frequent itemsets mining,S78205,R25743,Algorithm name,L48981,Transaction,"In this paper, we present a novel algorithm for mining complete frequent itemsets. This algorithm is referred to as the TM (transaction mapping) algorithm from hereon. In this algorithm, transaction ids of each itemset are mapped and compressed to continuous transaction intervals in a different space and the counting of itemsets is performed by intersecting these interval lists in a depth-first order along the lexicographic tree. When the compression coefficient becomes smaller than the average number of comparisons for intervals intersection at a certain level, the algorithm switches to transaction id intersection. We have evaluated the algorithm against two popular frequent itemset mining algorithms, FP-growth and dEclat, using a variety of data sets with short and long frequent patterns. Experimental data show that the TM algorithm outperforms these two algorithms.",TRUE,noun
R11,Science,R30805,Bi-objective stochastic programming models for determining depot locations in disaster relief operations,S104131,R31089,Second-stage2,R30819,Transport,"This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.",TRUE,noun
R11,Science,R25963,Understanding multi-articled documents,S80306,R26000,Logical Structure Representation,L50709,tree,"A document understanding method based on the tree representation of document structures is proposed. It is shown that documents have an obvious hierarchical structure in their geometry which is represented by a tree. A small number of rules are introduced to transform the geometric structure into the logical structure which represents the semantics. The virtual field separator technique is employed to utilize the information carried by special constituents of documents such as field separators and frames, keeping the number of transformation rules small. Experimental results on a variety of document formats have shown that the proposed method is applicable to most of the documents commonly encountered in daily use, although there is still room for further refinement of the transformation rules.",TRUE,noun
R11,Science,R25981,Document image segmentation and text area ordering,S80415,R26009,Logical Structure Representation,L50800,tree,"A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator, making it possible to get good results on various documents. The total system is fast and compact.",TRUE,noun
R11,Science,R28097,A fast trilateral filter-based adaptive support weight method for stereo matching,S91745,R28098,Algorithm,R28096,Trilateral,"Adaptive support weight (ASW) methods represent the state of the art in local stereo matching, while the bilateral filter-based ASW method achieves outstanding performance. However, this method fails to resolve the ambiguity induced by nearby pixels at different disparities but with similar colors. In this paper, we introduce a novel trilateral filter (TF)-based ASW method that remedies such ambiguities by considering the possible disparity discontinuities through color discontinuity boundaries, i.e., the boundary strength between two pixels, which is measured by a local energy model. We also present a recursive TF-based ASW method whose computational complexity is O(N) for the cost aggregation step, and O(N log2(N)) for boundary detection, where N denotes the input image size. This complexity is thus independent of the support window size. The recursive TF-based method is a nonlocal cost aggregation strategy. The experimental evaluation on the Middlebury benchmark shows that the proposed method, whose average error rate is 4.95%, outperforms other local methods in terms of accuracy. Equally, the average runtime of the proposed TF-based cost aggregation is roughly 260 ms on a 3.4-GHz Intel Core i7 CPU, which is comparable with state-of-the-art efficiency.",TRUE,noun
R11,Science,R151308,Twitter for Crisis Communication: Lessons Learned from Japan's Tsunami Disaster,S606805,R151309,Emergency Type,L419627,Tsunami,"Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.",TRUE,noun
R11,Science,R152989,Twitter for Crisis Communication: Lessons Learned from Japan's Tsunami Disaster,S626716,R156101,Emergency Type,L431390,Tsunami,"Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.",TRUE,noun
R11,Science,R151308,Twitter for Crisis Communication: Lessons Learned from Japan's Tsunami Disaster,S606814,R151309,Technology,L419636,Twitter,"Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.",TRUE,noun
R11,Science,R152989,Twitter for Crisis Communication: Lessons Learned from Japan's Tsunami Disaster,S626720,R156101,Technology,L431394,Twitter,"Two weeks after the Great Tohoku earthquake followed by the devastating tsunami, we have sent open-ended questionnaires to a randomly selected sample of Twitter users and also analysed the tweets sent from the disaster-hit areas. We found that people in directly affected areas tend to tweet about their unsafe and uncertain situation while people in remote areas post messages to let their followers know that they are safe. Our analysis of the open-ended answers has revealed that unreliable retweets (RTs) on Twitter was the biggest problem the users have faced during the disaster. Some of the solutions offered by the respondents included introducing official hash tags, limiting the number of RTs for each hash tag and adding features that allow users to trace information by maintaining anonymity.",TRUE,noun
R11,Science,R25105,User gains and PD aims,S74436,R25106,participants,L46275,Users,"We present a study of user gains from their participation in a participatory design (PD) project at Danish primary schools. We explore user experiences and reported gains from the project in relation to the multiple aims of PD, based on a series of interviews with pupils, teachers, administrators, and consultants, conducted approximately three years after the end of the project. In particular, we reflect on how the PD initiatives were sustained after the project had ended. We propose that not only are ideas and initiatives disseminated directly within the organization, but also through networked relationships among people, stretching across organizations and project groups. Moreover, we demonstrate how users' gains related to their acting within these networks. These results suggest a heightened focus on the indirect and distributed channels through which the long-term impact of PD emerges.",TRUE,noun
R11,Science,R34384,Fatal Clostridium difficile infection of the small bowel after complex colorectal surgery,S119735,R34385,Treatment,R34320,Vancomycin,"Pseudomembranous colitis is a well recognized complication of antibiotic use1 and is due to disturbances of the normal colonic bacterial flora, resulting in overgrowth of Clostridium difficile. For recurrent or severe cases, oral vancomycin or metronidazole is the treatment of choice. Progression to acute fulminant colitis with systemic toxic effects occasionally occurs, especially in the elderly and in the immunosuppressed. Some of these patients may need surgical intervention for complications such as perforation.2 Clostridium difficile is commonly regarded as a colonic pathogen and there are few reports of C. difficile enteritis with involvement of the small bowel (Table 1). Pseudomembrane formation caused by C. difficile is generally restricted to the colon, with abrupt termination at the ileocaecal valve.1,3,5,8,9 We report a case of fulminant and fatal C. difficile infection with pseudomembranes throughout the entire small bowel and colon in a patient following complex colorectal surgery. The relevant literature is reviewed.",TRUE,noun
R11,Science,R34392,Treatment of metronidazole-refractory Clostridium difficile enteritis with vancomycin,S119795,R34393,Treatment,R34320,Vancomycin,"BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated.",TRUE,noun
R11,Science,R34394,Fulminant small bowel enteritis: a rare complication of Clostridium difficile-associated disease,S119818,R34395,Treatment,R34320,Vancomycin,"To the Editor: A 54-year-old male was admitted to a community hospital with a 3-month history of diarrhea up to 8 times a day associated with bloody bowel motions and weight loss of 6 kg. He had no past medical history or family history of note. A clinical diagnosis of colitis was made and the patient underwent a limited colonoscopy which demonstrated continuous mucosal inflammation and ulceration that was most marked in the rectum. The clinical and endoscopic findings were suggestive of acute ulcerative colitis (UC), which was subsequently supported by histopathology. The patient was managed with bowel rest and intravenous steroids. However, he developed toxic megacolon on day 4 of his admission and underwent a total colectomy with end ileostomy. On the third postoperative day the patient developed a pyrexia of 39°C, a septic screen was performed, and the central venous line (CVP) was changed with the tip culturing methicillin-resistant Staphylococcus aureus (MRSA). Intravenous gentamycin was commenced and discontinued after 5 days, with the patient remaining afebrile and stable. On the tenth postoperative day the patient became tachycardic (pulse 110/min), diaphoretic (temperature of 39.4°C), hypotensive (diastolic of 60 mm Hg), and with a high volume nasogastric aspirates noted (2000 mL). A diagnosis of septic shock was considered although the etiology was unclear. The patient was resuscitated with intravenous fluids and transferred to the regional surgical unit for Intensive Care Unit monitoring and management. A computed tomography (CT) of the abdomen showed a marked inflammatory process with bowel wall thickening along the entire small bowel with possible intramural air, raising the suggestion of ischemic bowel (Fig. 1). 
However, on clinical assessment the patient elicited no signs of peritonism, his vitals were stable, he was not acidotic (pH 7.40), urine output was adequate, and his blood pressure was being maintained without inotropic support. Furthermore, his ileostomy appeared healthy and well perfused, although a high volume (2500 mL in the previous 18 hours), malodorous output was noted. A sample of the stoma output was sent for microbiological analysis. Given that the patient was not exhibiting evidence of peritonitis with normal vital signs, a conservative policy of fluid resuscitation was pursued with plans for exploratory laparotomy if he disimproved. Ileostomy output sent for microbiology assessment was positive for Clostridium difficile toxin A and B utilizing culture and enzyme immunoassays (EIA). Intravenous vancomycin, metronidazole, and rifampicin via a nasogastric tube were commenced in conjunction with bowel rest and total parenteral nutrition. The ileostomy output reduced markedly within 2 days and the patient’s clinical condition improved. Follow-up culture of the ileostomy output was negative for C. difficile toxins. The patient was discharged in good health on full oral diet 12 days following transfer. Review of histopathology relating to the resected colon and subsequent endoscopic assessment of the retained rectum confirmed the initial diagnosis of UC, rather than a primary diagnosis of pseudomembranous colitis. Clostridium difficile is the leading cause of nosocomial diarrhea associated with antibiotic therapy and is almost always limited to the colonic mucosa.1 Small bowel enteritis secondary to C. difficile is exceedingly rare, with only 21 previous cases cited in the literature.2,3 Of this cohort, 18 patients had a surgical procedure at some timepoint prior to the development of C. difficile enteritis, while the remaining 3 patients had no surgical procedure prior to the infection. 
The time span between surgery and the development of enteritis ranged from 4 days to 31 years. Antibiotic therapy predisposed to the development of C. difficile enteritis in 20 of the cases. A majority of the patients (n = 11) had a history of inflammatory bowel disease (IBD), with 8 having UC similar to our patient and the remaining 3 patients having a history of Crohn’s disease. The etiology of small bowel enteritis remains unclear. C. difficile has been successfully isolated from the small bowel in both autopsy specimens and from jejunal aspirate of patients with chronic diarrhea, suggesting that the small bowel may act as a reservoir for C. difficile.4 This would suggest that C. difficile could become pathogenic in the small bowel following a disruption in the small bowel flora in the setting of antibiotic therapy. This would be supported by the observation that the majority of cases reported occurred within 90 days of surgery with attendant disruption of bowel function. The prevalence of C. difficile-associated disease (CDAD) in patients with IBD is increasing. Issa et al5 examined the impact of CDAD in a cohort of patients with IBD. They found that more than half of the patients with a positive culture for C. difficile were admitted and 20% required a colectomy. They reported that maintenance immunomodulator use and colonic involvement were independent risk factors for C. difficile infection in patients with IBD. The rising incidence of C. difficile in patients with IBD coupled with the use of increasingly potent immunomodulatory therapies means that clinicians must have a high index of suspicion. Copyright © 2008 Crohn’s & Colitis Foundation of America, Inc. DOI 10.1002/ibd.20758 Published online 22 October 2008 in Wiley InterScience (www.interscience.wiley.com).",TRUE,noun
R11,Science,R29008,The First Facial Landmark Tracking in-the-Wild Challenge: Benchmark and Results,S95949,R29009,Video (v)/image (i),L58796,Video,"Detection and tracking of faces in image sequences is among the most well studied problems in the intersection of statistical machine learning and computer vision. Often, tracking and detection methodologies use a rigid representation to describe the facial region 1, hence they can neither capture nor exploit the non-rigid facial deformations, which are crucial for countless of applications (e.g., facial expression analysis, facial motion capture, high-performance face recognition etc.). Usually, the non-rigid deformations are captured by locating and tracking the position of a set of fiducial facial landmarks (e.g., eyes, nose, mouth etc.). Recently, we witnessed a burst of research in automatic facial landmark localisation in static imagery. This is partly attributed to the availability of large amount of annotated data, many of which have been provided by the first facial landmark localisation challenge (also known as 300-W challenge). Even though now well established benchmarks exist for facial landmark localisation in static imagery, to the best of our knowledge, there is no established benchmark for assessing the performance of facial landmark tracking methodologies, containing an adequate number of annotated face videos. In conjunction with ICCV'2015 we run the first competition/challenge on facial landmark tracking in long-term videos. In this paper, we present the first benchmark for long-term facial landmark tracking, containing currently over 110 annotated videos, and we summarise the results of the competition.",TRUE,noun
R11,Science,R28560,Undifferentiated (Embryonal) Sarcoma of the Liver,S93885,R28562,Symptoms and signs,R28540,vomiting,"A 10‐year‐old girl with undifferentiated (embryonal) sarcoma of the liver reported here had abdominal pain, nausea, vomiting and weakness when she was 8 years old. Chemical analyses of the blood and urine were normal. Serum alpha‐fetoprotein was within normal limits. She died of cachexia 1 year and 8 months after the onset of symptoms. Autopsy showed a huge tumor mass in the liver and a few metastatic nodules in the lungs, which were consistent histologically with undifferentiated sarcoma of the liver. To our knowledge, this is the second case report of hepatic undifferentiated sarcoma of children in Japan, the feature being compatible with the description of Stocker and Ishaka.",TRUE,noun
R11,Science,R33795,Comparative analysis of algorithms for identifying amplifications and deletions in array CGH data,S117172,R33796,Algorithm,R33788,Wavelet,"MOTIVATION Array Comparative Genomic Hybridization (CGH) can reveal chromosomal aberrations in the genomic DNA. These amplifications and deletions at the DNA level are important in the pathogenesis of cancer and other diseases. While a large number of approaches have been proposed for analyzing the large array CGH datasets, the relative merits of these methods in practice are not clear. RESULTS We compare 11 different algorithms for analyzing array CGH data. These include both segment detection methods and smoothing methods, based on diverse techniques such as mixture models, Hidden Markov Models, maximum likelihood, regression, wavelets and genetic algorithms. We compute the Receiver Operating Characteristic (ROC) curves using simulated data to quantify sensitivity and specificity for various levels of signal-to-noise ratio and different sizes of abnormalities. We also characterize their performance on chromosomal regions of interest in a real dataset obtained from patients with Glioblastoma Multiforme. While comparisons of this type are difficult due to possibly sub-optimal choice of parameters in the methods, they nevertheless reveal general characteristics that are helpful to the biological investigator.",TRUE,noun
R11,Science,R25683,Information content based ranking metric for linked open vocabularies,S77717,R25684,App. Type,R6538,Web,"It is widely accepted that by controlling metadata, it is easier to publish high quality data on the web. Metadata, in the context of Linked Data, refers to vocabularies and ontologies used for describing data. With more and more data published on the web, the need for reusing controlled taxonomies and vocabularies is becoming more and more a necessity. Catalogues of vocabularies are generally a starting point to search for vocabularies based on search terms. Some recent studies recommend that it is better to reuse terms from ""popular"" vocabularies [4]. However, there is not yet an agreement on what makes a popular vocabulary since it depends on diverse criteria such as the number of properties, the number of datasets using part or the whole vocabulary, etc. In this paper, we propose a method for ranking vocabularies based on an information content metric which combines three features: (i) the datasets using the vocabulary, (ii) the outlinks from the vocabulary and (iii) the inlinks to the vocabulary. We applied this method to 366 vocabularies described in the LOV catalogue. The results are then compared with other catalogues which provide alternative rankings.",TRUE,noun
R11,Science,R25722,Visualizing ontologies with VOWL,S78067,R25723,App. Type,R6538,Web,"The Visual Notation for OWL Ontologies (VOWL) is a well-specified visual language for the user-oriented representation of ontologies. It defines graphical depictions for most elements of the Web Ontology Language (OWL) that are combined to a force-directed graph layout visualizing the ontology. In contrast to related work, VOWL aims for an intuitive and comprehensive representation that is also understandable to users less familiar with ontologies. This article presents VOWL in detail and describes its implementation in two different tools: ProtegeVOWL and WebVOWL. The first is a plugin for the ontology editor Protege, the second a standalone web application. Both tools demonstrate the applicability of VOWL by means of various ontologies. In addition, the results of three user studies that evaluate the comprehensibility and usability of VOWL are summarized. They are complemented by findings from an interview with experienced ontology users and from testing the visual scope and completeness of VOWL with a benchmark ontology. The evaluations helped to improve VOWL and confirm that it produces comparatively intuitive and comprehensible ontology visualizations.",TRUE,noun
R11,Science,R25724,graphVizdb: A scalable platform for interactive large graph visualization,S78088,R25725,App. Type,R6538,Web,"We present a novel platform for the interactive visualization of very large graphs. The platform enables the user to interact with the visualized graph in a way that is very similar to the exploration of maps at multiple levels. Our approach involves an offline preprocessing phase that builds the layout of the graph by assigning coordinates to its nodes with respect to a Euclidean plane. The respective points are indexed with a spatial data structure, i.e., an R-tree, and stored in a database. Multiple abstraction layers of the graph based on various criteria are also created offline, and they are indexed similarly so that the user can explore the dataset at different levels of granularity, depending on her particular needs. Then, our system translates user operations into simple and very efficient spatial operations (i.e., window queries) in the backend. This technique allows for a fine-grained access to very large graphs with extremely low latency and memory requirements and without compromising the functionality of the tool. Our web-based prototype supports three main operations: (1) interactive navigation, (2) multi-level exploration, and (3) keyword search on the graph metadata.",TRUE,noun
R11,Science,R27259,Cyberbotics ltd. webots professional mobile robot simulation,S87873,R27260,Name,L54380,Webots,"Cyberbotics Ltd. develops Webots™, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots™ lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots™ has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.",TRUE,noun
R11,Science,R32197,Chemical Composition of the Essential Oil of Artemisia herba-alba Asso Grown in Algeria,S109505,R32198,Collection site,R32190,Wild,"Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and β-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.",TRUE,noun
R11,Science,R32385,Composition and intraspecific chemical vari- ability of the essential oil from Artemisia herba alba growing wild in a Tunisian arid zone,S109978,R32386,Collection site,R32190,Wild,"The intraspecific chemical variability of essential oils (50 samples) isolated from the aerial parts of Artemisia herba‐alba Asso growing wild in the arid zone of Southeastern Tunisia was investigated. Analysis by GC (RI) and GC/MS allowed the identification of 54 essential oil components. The main compounds were β‐thujone and α‐thujone, followed by 1,8‐cineole, camphor, chrysanthenone, trans‐sabinyl acetate, trans‐pinocarveol, and borneol. Chemometric analysis (k‐means clustering and PCA) led to the partitioning into three groups. The composition of two thirds of the samples was dominated by α‐thujone or β‐thujone. Therefore, it could be expected that wild plants of A. herba‐alba randomly harvested in the area of Kirchaou and transplanted by local farmers for the cultivation in arid zones of Southern Tunisia produce an essential oil belonging to the α‐thujone/β‐thujone chemotype and containing also 1,8‐cineole, camphor, and trans‐sabinyl acetate at appreciable amounts.",TRUE,noun
R11,Science,R32407,Chemical Variability of Artemisia herba-alba Asso Growing Wild in Semi-arid and Arid Land (Tunisia),S110023,R32408,Collection site,R32190,Wild,"Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (α-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confirmed the tremendous chemical variability of A. herba-alba.",TRUE,noun
R11,Science,R32413,Chemical composition and biological activities of a new essential oil chemotype of Tunisian Artemisia herba alba Asso,S110054,R32414,Collection site,R32190,Wild,"The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air dried leaves and flowers of Aha were extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis-chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the α-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the β-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil was evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western of Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.",TRUE,noun
R11,Science,R30708,Patterns of tooth surface loss among winemakers,S102477,R30709,Study population,L61522,Winemakers,"There are a few documented case studies on the adverse effect of wine on both dental hard and soft tissues. Professional wine tasting could present some degree of increased risk to dental erosion. Alcoholic beverages with a low pH may cause erosion, particularly if the attack is of long duration, and repeated over time. The purpose of this study was to compare the prevalence and severity of tooth surface loss between winemakers (exposed) and their spouses (non-exposed). Utilising a cross-sectional, comparative study design, a clinical examination was conducted to assess caries status; the presence and severity of tooth surface loss; staining (presence or absence); fluorosis and prosthetic status. The salivary flow rate, buffering capacity and pH were also measured. Thirty-six persons, twenty-one winemakers and fifteen of their spouses participated in the study. It was possible to show that there was a difference in terms of the prevalence and severity of tooth surface loss between the teeth of winemakers and those who are not winemakers. The occurrence of tooth surface loss amongst winemakers was highly likely due to frequent exposure of their teeth to wine. Frequent exposure of the teeth to wine, as occurs among wine tasters, is deleterious to enamel, and constitutes an occupational hazard. Erosion is an occupational risk for wine tasters.",TRUE,noun
R11,Science,R25997,Automated labeling in document images,S80510,R26016,Physical Layout Representation,L50881,zones,"The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.",TRUE,noun
R11,Science,R34596,k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY,S120557,R34597,Anonymistion algorithm/method,R34595,k-anonymity," Consider a data holder, such as a hospital or a bank, that has a privately held collection of person-specific, field structured data. Suppose the data holder wants to share a version of the data with researchers. How can a data holder release a version of its private data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful? The solution provided in this paper includes a formal protection model named k-anonymity and a set of accompanying policies for deployment. A release provides k-anonymity protection if the information for each person contained in the release cannot be distinguished from at least k-1 individuals whose information also appears in the release. This paper also examines re-identification attacks that can be realized on releases that adhere to k-anonymity unless accompanying policies are respected. The k-anonymity protection model is important because it forms the basis on which the real-world systems known as Datafly, μ-Argus and k-Similar provide guarantees of privacy protection. ",TRUE,noun
R11,Science,R27743,Experimental Validation of the Learning Effect for a Pedagogical Game on Computer Fundamentals,S90315,R27744,Method,R1046,experiment,"The question/answer-based computer game Age of Computers was introduced to replace traditional weekly paper exercises in a course in computer fundamentals in 2003. Questionnaire evaluations and observation of student behavior have indicated that the students found the game more motivating than paper exercises and that a majority of the students also perceived the game to have a higher learning effect than paper exercises or textbook reading. This paper reports on a controlled experiment to compare the learning effectiveness of game play with traditional paper exercises, as well as with textbook reading. The results indicated that with equal time being spent on the various learning activities, the effect of game play was only equal to that of the other activities, not better. Yet this result is promising enough, as the increased motivation means that students work harder in the course. Also, the results indicate that the game has potential for improvement, in particular with respect to its feedback on the more complicated questions.",TRUE,noun
R11,Science,R27748,Effect of computer-based video games on children: An experimental study,S90336,R27749,Method,R1046,experiment,"This experimental study investigated whether computer-based video games facilitate children's cognitive learning. In comparison to traditional computer-assisted instruction (CAI), this study explored the impact of the varied types of instructional delivery strategies on children's learning achievement. One major research null hypothesis was tested: no statistically significant differences in students' achievement when they receive two different instructional treatments: (1) traditional CAI; and (2) a computer-based video game. One hundred and eight third-graders from a middle/high socio-economic standard school district in Taiwan participated in the study. Results indicate that computer-based video game playing not only improves participants' fact/recall processes (F=5.288, p<.05), but also promotes problem-solving skills by recognizing multiple solutions for problems (F=5.656, p<.05).",TRUE,noun
R11,Science,R27779,The Effect of Using Exercise-Based Computer Games during the Process of Learning on Academic Achievement among Education Majors,S90474,R27780,Method,R1046,experiment,"The aim of this study is to define whether using exercise-based games increase the performance of learning. For this reason, two basic questions were tried to be answered in the study. First, is there any difference in learning between the group that was given exercise-based games and the group that was not? Second, is there any difference in learning between the group that used exercise-based games at end of the process of learning and the group that was not applied this but taken the questions of exercises in game material? This research has been conducted within the subject of Testing and Evaluation in the program of Kocaeli University Primary Maths Teacher’s College. Experimental design with a pre test-post test control group was used in this study. Experimental process based on game material was used in 120 minutes at the end of a 3-week-teaching period. The reliability values (KR-20) of the two tests were found to be .79 and .71 which were used to evaluate learning level. The study has reached a conclusion that game materials used at the end of learning process have increased the learning levels of teacher candidates. However, the similar learning levels have been observed among students who were taken printed exercises instead of using learning game method to reinforce the traditional learning in the research. This means that in method of applying teaching games in addition to the traditional teaching, there isn’t any difference of learning efficiency of students answered the questions based on competition and fun and the group who only answered the questions. This study is expected to contribute defining in which situations games are effective.",TRUE,noun
R11,Science,R25149,Simple gaze-contingent cues guide eye movements in a realistic driving simulator,S74693,R25150,type,R25148,Gaze,"Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.",TRUE,noun
R11,Science,R25151,Light my way,S74705,R25152,type,R25148,Gaze,"In demanding driving situations, the front-seat passenger can become a supporter of the driver by, e.g., monitoring the scene or providing hints about upcoming hazards or turning points. A fast and efficient communication of such spatial information can help the driver to react properly, with more foresight. As shown in previous research, this spatial referencing can be facilitated by providing the driver a visualization of the front-seat passenger's gaze. In this paper, we focus on the question how the gaze should be visualized for the driver, taking into account the feasibility of implementation in a real car. We present the results from a driving simulator study, where we compared an LED visualization (glowing LEDs on an LED stripe mounted at the bottom of the windshield, indicating the horizontal position of the gaze) with a visualization of the gaze as a dot in the simulated environment. Our results show that LED visualization comes with benefits with regard to driver distraction but also bears disadvantages with regard to accuracy and control for the front-seat passenger.",TRUE,noun
R11,Science,R30594,Eye Localization based on Multi-Scale Gabor Feature Vector Model,S101876,R30595,Challenges,R30582,pose,"Eye localization is necessary for face recognition and related application areas. Most of eye localization algorithms reported thus far still need to be improved about precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between Gabor feature vector at an initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in the previous researches.",TRUE,noun
R11,Science,R28984,From few to many: illumination cone models for face recognition under variable lighting and pose,S95738,R28985,Variations,L58621,pose,"We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.",TRUE,noun
R11,Science,R28998,"Annotated facial landmarks in the wild: A large-scale, real- world database for facial landmark localization",S95853,R28999,Variations,L58715,pose,"Face alignment is a crucial step in face recognition tasks. Especially, using landmark localization for geometric face normalization has shown to be very effective, clearly improving the recognition results. However, no adequate databases exist that provide a sufficient number of annotated facial landmarks. The databases are either limited to frontal views, provide only a small number of annotated images or have been acquired under controlled conditions. Hence, we introduce a novel database overcoming these limitations: Annotated Facial Landmarks in the Wild (AFLW). AFLW provides a large-scale collection of images gathered from Flickr, exhibiting a large variety in face appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total 25,993 faces in 21,997 real-world images are annotated with up to 21 landmarks per image. Due to the comprehensive set of annotations AFLW is well suited to train and test algorithms for multi-view face detection, facial landmark localization and face pose estimation. Further, we offer a rich set of tools that ease the integration of other face databases and associated annotations into our joint framework.",TRUE,noun
R11,Science,R29000,Localizing Parts of Faces Using a Consensus of Exemplars,S95872,R29001,Variations,L58731,pose,"We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a nonparametric set of global models for the part locations based on over 1,000 hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting, and occlusion than prior ones. We show excellent performance on real-world face datasets such as Labeled Faces in the Wild (LFW) and a new Labeled Face Parts in the Wild (LFPW) and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset.",TRUE,noun
R11,Science,R29004,"Face detection, pose estimation, and landmark localization in the wild",S95906,R29005,Variations,L58759,pose,"We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).",TRUE,noun
R11,Science,R29006,A Semi-automatic Methodology for Facial Landmark Annotation,S95925,R29007,Variations,L58775,pose,"Developing powerful deformable face models requires massive, annotated face databases on which techniques can be trained, validated and tested. Manual annotation of each facial image in terms of landmarks requires a trained expert and the workload is usually enormous. Fatigue is one of the reasons that in some cases annotations are inaccurate. This is why, the majority of existing facial databases provide annotations for a relatively small subset of the training images. Furthermore, there is hardly any correspondence between the annotated land-marks across different databases. These problems make cross-database experiments almost infeasible. To overcome these difficulties, we propose a semi-automatic annotation methodology for annotating massive face datasets. This is the first attempt to create a tool suitable for annotating massive facial databases. We employed our tool for creating annotations for MultiPIE, XM2VTS, AR, and FRGC Ver. 2 databases. The annotations will be made publicly available from http://ibug.doc.ic.ac.uk/ resources/facial-point-annotations/. Finally, we present experiments which verify the accuracy of produced annotations.",TRUE,noun
R11,Science,R29010,Robust Face Landmark Estimation under Occlusion,S95963,R29011,Variations,L58807,pose,"Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80/40% precision/recall.",TRUE,noun
R11,Science,R29047,Pose-Free Facial Landmark Fitting via Optimized Part Mixtures and Cascaded Deformable Shape Model,S96146,R29048,Methods,R29046,Pose-free,"This paper addresses the problem of facial landmark localization and tracking from a single camera. We present a two-stage cascaded deformable shape model to effectively and efficiently localize facial landmarks with large head pose variations. For face detection, we propose a group sparse learning method to automatically select the most salient facial landmarks. By introducing 3D face shape model, we use procrustes analysis to achieve pose-free facial landmark initialization. For deformation, the first step uses mean-shift local search with constrained local model to rapidly approach the global optimum. The second step uses component-wise active contours to discriminatively refine the subtle shape variation. Our framework can simultaneously handle face detection, pose-free landmark localization and tracking in real time. Extensive experiments are conducted on both laboratory environmental face databases and face-in-the-wild databases. All results demonstrate that our approach has certain advantages over state-of-the-art methods in handling pose variations.",TRUE,noun
R11,Science,R28283,"Fleet deployment optimization for liner shipping. Part 1: background, problem formulation and solution approaches",S92901,R28355,Main question,R28338,Route,"The background and the literature in liner fleet scheduling is reviewed and the objectives and assumptions of our approach are explained. We develop a detailed and realistic model for the estimation of the operating costs of liner ships on various routes, and present a linear programming formulation for the liner fleet deployment problem. Independent approaches for fixing both the service frequencies in the different routes and the speeds of the ships, are presented.",TRUE,noun
R11,Science,R28460,The marine single assignment nonstrict Hub location problem: formulations and experimental examples,S93298,R28461,Main question,R28338,Route,"Marine hub-and-spoke networks have been applied to routing containerships for over two decades, but few papers have devoted their attention to these networks. The marine network problems are known as single assignment nonstrict hub location problems (SNHLPs), which deal with the optimal location of hubs and allocation of spokes to hubs in a network, allowing direct routes between some spokes. In this paper we present a satisfactory approach for solving SHNLPs. The quadratic integer profit programming consists of two-stage computational algorithms: a hub location model and a spoke allocation model. We apply a heuristic scheme based on the shortest distance rule and an experimental case based on the Trans-Pacific Routes is presented to illustrate the model’s formulation and solution methods. The results indicate that the model is a concave function, exploiting the economies of scale for total profit with respect to the number of hubs. The spoke allocation may change an optimal choice of hub",TRUE,noun
R11,Science,R33348,Critical success factors for B2B e‐commerce use within the UK NHS pharmaceutical supply chain,S115465,R33349,Critical success factors,R33117,trust,"Purpose – The purpose of this paper is to determine those factors perceived by users to influence the successful on‐going use of e‐commerce systems in business‐to‐business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach – Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e‐commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings – The paper yields five composite factors that are perceived by users to influence successful e‐commerce use. “System quality,” “information quality,” “management and use,” “world wide web – assurance and empathy,” and “trust” are proposed as potentia...",TRUE,noun
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5791,R5230,Process,R5257,attrition,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5778,R5230,Data,R5244,citation-barriers,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5781,R5230,Data,R5247,language,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5790,R5230,Process,R5256,self-citation,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5782,R5230,Data,R5248,venue,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun
R137678,Security and Dependability,R176009,Situational Awareness: Detecting Critical Dependencies and Devices in a Network,S696085,R176010,Type of considered dependencies,L468050,Service,"Abstract Large-scale networks consisting of thousands of connected devices are like a living organism, constantly changing and evolving. It is very difficult for a human administrator to orient in such environment and to react to emerging security threats. With such motivation, this PhD proposal aims to find new methods for automatic identification of devices, the services they provide, their dependencies and importance. The main focus of the proposal is to find novel approaches to building cyber situational awareness in an unknown network for the purpose of computer security incident response. Our research is at the initial phase and will contribute to a PhD thesis in four years.",TRUE,noun
R137678,Security and Dependability,R178436,SAIDuCANT: Specification-Based Automotive Intrusion Detection Using Controller Area Network (CAN) Timing,S699893,R178442,Intrusion Detection Type,R178446,Specification-based,"The proliferation of embedded devices in modern vehicles has opened the traditionally-closed vehicular system to the risk of cybersecurity attacks through physical and remote access to the in-vehicle network such as the controller area network (CAN). The CAN bus does not implement a security protocol that can protect the vehicle against the increasing cyber and physical attacks. To address this risk, we introduce a novel algorithm to extract the real-time model parameters of the CAN bus and develop SAIDuCANT, a specification-based intrusion detection system (IDS) using anomaly-based supervised learning with the real-time model as input. We evaluate the effectiveness of SAIDuCANT with real CAN logs collected from two passenger cars and on an open-source CAN dataset collected from real-world scenarios. Experimental results show that SAIDuCANT can effectively detect data injection attacks with low false positive rates. Over four real attack scenarios from the open-source dataset, SAIDuCANT observes at most one false positive before detecting an attack whereas other detection approaches using CAN timing features detect on average more than a hundred false positives before a real attack occurs.",TRUE,noun
R141823,Semantic Web,R142508,Automatic Domain Ontology Construction Based on Thesauri,S572477,R142510,Evaluation metrics,R142103,Accuracy,"The research on the automatic ontology construction has become very popular. It is very useful for the ontology construction to reengineer the existing knowledge resource, such as the thesauri. But many relationships in the thesauri are incorrect or are defined too broadly. Accordingly, extracting ontological relations from the thesauri becomes very important. This paper proposes the method to reengineer the thesauri to ontology, and especially the method to how to obtain the correct semantic relations. The test result shows the accuracy of the semantic relations is 86.23%, and one is the hierarchical relations with 89.02%, and the other is non-hierarchical relations with 83.44%.",TRUE,noun
R141823,Semantic Web,R143919,Enabling Folksonomies for Knowledge Extraction: A Semantic Grounding Approach,S576142,R143921,input,R143924,Context,"Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601102,R149949,Terms learning,R149957,Features,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601109,R149949,Properties learning,R149961,Form,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601101,R149949,Terms learning,R149956,Form,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R144129,Representing the Hierarchy of Industrial Taxonomies in OWL: The gen/tax Approach,S576888,R144131,Learning method,R144133,gen/tax,"Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel “gen/tax” approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.",TRUE,noun
R141823,Semantic Web,R149916,Image domain ontology fusion approach using multi-level inference mechanism,S601288,R149918,Knowledge source,R149629,Image,"One of the main challenges in content-based or semantic image retrieval is still to bridge the gap between low-level features and semantic information. In this paper, An approach is presented using integrated multi-level image features in ontology fusion construction by a fusion framework, which based on the latent semantic analysis. The proposed method promotes images ontology fusion efficiently and broadens the application fields of image ontology retrieval system. The relevant experiment shows that this method ameliorates the problem, such as too many redundant data and relations, in the traditional ontology system construction, as well as improves the performance of semantic images retrieval.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601106,R149949,Knowledge source,R149629,Image,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R142319,An Innovative Statistical Tool for Automatic OWL-ERD Alignment,S571942,R142321,Learning method,R139364,Mapping,"Aligning two representations of the same domain with different expressiveness is a crucial topic in nowadays semantic web and big data research. OWL ontologies and Entity Relation Diagrams are the most widespread representations whose alignment allows for semantic data access via ontology interface, and ontology storing techniques. The term ""alignment"" encompasses three different processes: OWL-to-ERD and ERD-to-OWL transformation, and OWL-ERD mapping. In this paper an innovative statistical tool is presented to accomplish all the three aspects of the alignment. The main idea relies on the use of a HMM to estimate the most likely ERD sentence that is stated in a suitable grammar, and corresponds to the observed OWL axiom. The system and its theoretical background are presented, and some experiments are reported.",TRUE,noun
R141823,Semantic Web,R142799,Towards the Reuse of Standardized Thesauri Into Ontologies,S573854,R142801,Learning method,R142808,Rules,"One of the main holdbacks towards a wide use of ontologies is the high building cost. In order to reduce this effort, reuse of existing Knowledge Organization Systems (KOSs), and in particular thesauri, is a valuable and much cheaper alternative to build ontologies from scratch. In the literature tools to support such reuse and conversion of thesauri as well as re-engineering patterns already exist. However, few of these tools rely on a sort of semi-automatic reasoning on the structure of the thesaurus being converted. Furthermore, patterns proposed in the literature are not updated considering the new ISO 25964 standard on thesauri. This paper introduces a new application framework aimed to convert thesauri into OWL ontologies, differing from the existing approaches for taking into consideration ISO 25964 compliant thesauri and for applying completely automatic conversion rules.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601107,R149949,Properties learning,R149959,Shape,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601099,R149949,Terms learning,R149954,Shape,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601108,R149949,Properties learning,R149960,Size,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601100,R149949,Terms learning,R149955,Size,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun
R141823,Semantic Web,R144129,Representing the Hierarchy of Industrial Taxonomies in OWL: The gen/tax Approach,S576880,R144131,Relationships,R142356,Subclass,"Existing taxonomies are valuable input for creating ontologies, because they reflect some degree of community consensus and contain, readily available, a wealth of concept definitions plus a hierarchy. However, the transformation of such taxonomies into useful ontologies is not as straightforward as it appears, because simply taking the hierarchy of concepts, which was originally developed for some external purpose other than ontology engineering, as the subsumption hierarchy using rdfs:subClassOf can yield useless ontologies. In this paper, we (1) illustrate the problem by analyzing OWL and RDF-S ontologies derived from UNSPSC (a products and services taxonomy), (2) detail how the interpretation and representation of the original taxonomic relationship is an important modeling decision when deriving ontologies from existing taxonomies, (3) propose a novel “gen/tax” approach to capture the original semantics of taxonomies in OWL, based on the split of each category in the taxonomy into two concepts, a generic concept and a taxonomy concept, and (4) show the usefulness of this approach by transforming eCl@ss into a fully-fledged products and services ontology.",TRUE,noun
R141823,Semantic Web,R143919,Enabling Folksonomies for Knowledge Extraction: A Semantic Grounding Approach,S576141,R143921,input,R143923,Tags,"Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered as valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems mainly related to the appearance of synonymous and ambiguous tags arise, specifically in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures where tag meanings are identified, and relations between them are asserted. For such purpose, they use DBpedia as a general knowledge base from which they leverage its multilingual capabilities.",TRUE,noun
R141823,Semantic Web,R180001,A Deep Learning based Approach for Precise Video Tagging,S702019,R180003,Input format,R180005,Video,"With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1- score of 96.2%.",TRUE,noun
R141823,Semantic Web,R185271,Multimedia ontology learning for automatic annotation and video browsing,S709705,R185273,Input format,R180005,Video,"In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.",TRUE,noun
R141823,Semantic Web,R180001,A Deep Learning based Approach for Precise Video Tagging,S702047,R180016,Knowledge source,R149630,Video,"With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1- score of 96.2%.",TRUE,noun
R141823,Semantic Web,R185271,Multimedia ontology learning for automatic annotation and video browsing,S709697,R185273,Knowledge source,R149630,Video,"In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.",TRUE,noun
R141823,Semantic Web,R142433,REUSING UML CLASS MODELS TO GENERATE OWL ONTOLOGIES - A Use Case in the Pharmacotherapeutic Domain: ,S572266,R142435,implementation,R142439,Visualwade,"This paper presents a method for the reuse of existing knowledge in UML software models. Our purpose is being able to adapt fragments of existing UML class diagrams in order to build domain ontologies, represented in OWL-DL, reducing the required amount of time and resources to create one from scratch. Our method is supported by a CASE tool, VisualWADE, and a developed plug-in, used for the management of ontologies and the generation of semantically tagged Web applications. In order to analyse the designed transformations between knowledge representation formalisms, UML and OWL, we have chosen a use case in the pharmacotherapeutic domain. Then, we discuss some of the most relevant aspects of the proposal and, finally, conclusions are obtained and future work briefly described.",TRUE,noun
R141823,Semantic Web,R185300,Automatic Product Ontology Extraction from Textual Reviews,S709773,R185302,Input format,R149646,Text,"Ontologies have proven beneficial in different settings that make use of textual reviews. However, manually constructing ontologies is a laborious and time-consuming process in need of automation. We propose a novel methodology for automatically extracting ontologies, in the form of meronomies, from product reviews, using a very limited amount of hand-annotated training data. We show that the ontologies generated by our method outperform hand-crafted ontologies (WordNet) and ontologies extracted by existing methods (Text2Onto and COMET) in several, diverse settings. Specifically, our generated ontologies outperform the others when evaluated by human annotators as well as on an existing Q&A dataset from Amazon. Moreover, our method is better able to generalise, in capturing knowledge about unseen products. Finally, we consider a real-world setting, showing that our method is better able to determine recommended products based on their reviews, in alternative to using Amazon’s standard score aggregations.",TRUE,noun
R141823,Semantic Web,R185335,Ontology Learning Process as a Bottom-up Strategy for Building Domain-specific Ontology from Legal Texts,S709847,R185337,Input format,R149646,Text,"The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",TRUE,noun
R141823,Semantic Web,R185300,Automatic Product Ontology Extraction from Textual Reviews,S709767,R185302,Knowledge source,R149631,Text,"Ontologies have proven beneficial in different settings that make use of textual reviews. However, manually constructing ontologies is a laborious and time-consuming process in need of automation. We propose a novel methodology for automatically extracting ontologies, in the form of meronomies, from product reviews, using a very limited amount of hand-annotated training data. We show that the ontologies generated by our method outperform hand-crafted ontologies (WordNet) and ontologies extracted by existing methods (Text2Onto and COMET) in several, diverse settings. Specifically, our generated ontologies outperform the others when evaluated by human annotators as well as on an existing Q&A dataset from Amazon. Moreover, our method is better able to generalise, in capturing knowledge about unseen products. Finally, we consider a real-world setting, showing that our method is better able to determine recommended products based on their reviews, in alternative to using Amazon’s standard score aggregations.",TRUE,noun
R141823,Semantic Web,R185335,Ontology Learning Process as a Bottom-up Strategy for Building Domain-specific Ontology from Legal Texts,S709841,R185337,Knowledge source,R149631,Text,"The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",TRUE,noun
R141823,Semantic Web,R185349,The Ontology Extraction & Maintenance Framework Text-To-Onto,S709882,R185350,Knowledge source,R149631,Text,"Ontologies play an increasingly important role in Knowledge Management. One of the main problems associated with ontologies is that they need to be constructed and maintained. Manual construction of larger ontologies is usually not feasible within companies because of the effort and costs required. Therefore, a semi-automatic approach to ontology construction and maintenance is what everybody is wishing for. The paper presents a framework for semi-automatically learning ontologies from domainspecific texts by applying machine learning techniques. The TEXT-TO-ONTO framework integrates manual engineering facilities to follow a balanced cooperative modelling paradigm.",TRUE,noun
R141823,Semantic Web,R180001,A Deep Learning based Approach for Precise Video Tagging,S702018,R180003,Output format,R149652,Text,"With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1- score of 96.2%.",TRUE,noun
R259,Semiconductor and Optical Materials,R135926,Thin-Film Solar Cells with 19% Efficiency by Thermal Evaporation of CdSe and CdTe,S538086,R135931,keywords,L379194,Deposition,CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...,TRUE,noun
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338123,R71590,Material,R71611,films,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun
R259,Semiconductor and Optical Materials,R135926,Thin-Film Solar Cells with 19% Efficiency by Thermal Evaporation of CdSe and CdTe,S538087,R135931,keywords,L379195,Layers,CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...,TRUE,noun
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338109,R71590,Material,R71597,materials,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. 
Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun
R259,Semiconductor and Optical Materials,R137466,Comparison of heterojunction device parameters for pure and doped ZnO thin films with IIIA (Al or In) elements grown on silicon at room ambient,S544050,R137468,ZnO film deposition method,L383136,sol-gel,"In this work, pure and IIIA element doped ZnO thin films were grown on p type silicon (Si) with (100) orientated surface by sol-gel method, and were characterized for comparing their electrical characteristics. The heterojunction parameters were obtained from the current-voltage (I-V) and capacitance-voltage (C-V) characteristics at room temperature. The ideality factor (n), saturation current (Io) and junction resistance of ZnO/p-Si heterojunction for both pure and doped (with Al or In) cases were determined by using different methods at room ambient. Other electrical parameters such as Fermi energy level (EF), barrier height (ΦB), acceptor concentration (Na), built-in potential (Φi) and voltage dependence of surface states (Nss) profile were obtained from the C-V measurements. The results reveal that doping ZnO with IIIA (Al or In) elements to fabricate n-ZnO/p-Si heterojunction can result in high performance diode characteristics.",TRUE,noun
R281,Social and Behavioral Sciences,R70740,ICT Engagement: a new construct and its assessment in PISA 2015,S337574,R70741,Has method,R71025,correlation,"As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland (n = 5860) and Germany (n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale “ICT as a topic in social interaction” and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.",TRUE,noun
R281,Social and Behavioral Sciences,R70742,"A PISA-2015 Comparative Meta-Analysis between Singapore and Finland: Relations of Students’ Interest in Science, Perceived ICT Competence, and Environmental Awareness and Optimism",S337472,R70745,Has method,R71025,correlation,"The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents’ interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students’ interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students’ environmental awareness negatively predicted environmental optimism.",TRUE,noun
R353,Social Psychology,R75949,"Employee psychological well-being during the COVID-19 pandemic in Germany: A longitudinal study of demands, resources, and exhaustion",S352201,R75951,Indicator for well-being,R77150,Exhaustion,"Many governments react to the current coronavirus/COVID‐19 pandemic by restricting daily (work) life. On the basis of theories from occupational health, we propose that the duration of the pandemic, its demands (e.g., having to work from home, closing of childcare facilities, job insecurity, work‐privacy conflicts, privacy‐work conflicts) and personal‐ and job‐related resources (co‐worker social support, job autonomy, partner support and corona self‐efficacy) interact in their effect on employee exhaustion. We test the hypotheses with a three‐wave sample of German employees during the pandemic from April to June 2020 (N_w1 = 2900, N_w12 = 1237, N_w123 = 789). Our findings show a curvilinear effect of pandemic duration on working women's exhaustion. The data also show that the introduction and the easing of lockdown measures affect exhaustion, and that women with children who work from home while childcare is unavailable are especially exhausted. Job autonomy and partner support mitigated some of these effects. In sum, women's psychological health was more strongly affected by the pandemic than men's. We discuss implications for occupational health theories and that interventions targeted at mitigating the psychological consequences of the COVID‐19 pandemic should target women specifically.",TRUE,noun
R353,Social Psychology,R76575,The gender gap in mental well-being during the Covid-19 outbreak: evidence from the UK,S352597,R76576,Control variables,R46728,Gender,"We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old.",TRUE,noun
R353,Social Psychology,R75828,Decision-making at the sharp end: a survey of literature related to decision-making in humanitarian contexts,S346755,R75830,Individual factors influencing evidence-based decision-making,L248356,stress,"In a humanitarian response, leaders are often tasked with making large numbers of decisions, many of which have significant consequences, in situations of urgency and uncertainty. These conditions have an impact on the decision-maker (causing stress, for example) and subsequently on how decisions get made. Evaluations of humanitarian action suggest that decision-making is an area of weakness in many operations. There are examples of important decisions being missed and of decision-making processes that are slow and ad hoc. As part of a research process to address these challenges, this article considers literature from the humanitarian and emergency management sectors that relates to decision-making. It outlines what the literature tells us about the nature of the decisions that leaders at the country level are taking during humanitarian operations, and the circumstances under which these decisions are taken. It then considers the potential application of two different types of decision-making process in these contexts: rational/analytical decision-making and naturalistic decision-making. The article concludes with broad hypotheses that can be drawn from the literature and with the recommendation that these be further tested by academics with an interest in the topic.",TRUE,noun
R353,Social Psychology,R76575,The gender gap in mental well-being during the Covid-19 outbreak: evidence from the UK,S351975,R76576,Examined (sub-)group,R77090,women,"We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old.",TRUE,noun
R354,Sociology,R187558,The Pandemic Penalty: The Gendered Effects of COVID-19 on Scientific Productivity,S718277,R187623,target population,R178513,academics,"Academia serves as a valuable case for studying the effects of social forces on workplace productivity, using a concrete measure of output: scholarly papers. Many academics, especially women, have ...",TRUE,noun
R354,Sociology,R44697,Telephone counseling for patients with minor depression: preliminary findings in a family practice setting,S136634,R44698,Recruitment,R44696,Screening,"BACKGROUND Depression is a frequently occurring condition in family practice patients, but time limitations may hamper the physician's ability to treat it effectively. Referrals to mental health professionals are frequently resisted by patients. The need for more effective treatment strategies led to the development and evaluation of a telephone-based, problem-solving intervention. METHODS Patients in a family practice residency practice were evaluated through the Medical Outcomes Study Depression Screening Scale and the Diagnostic Interview Schedule to identify those with subthreshold or minor depression. Twenty-nine subjects were randomly assigned to either a treatment or comparison group. Initial scores on the Hamilton Depression Rating Scale were equivalent for the groups and were in the mildly depressed range. Six problem-solving therapy sessions were conducted over the telephone by graduate student therapists supervised by a psychiatrist. RESULTS Treatment group subjects had significantly lower post-intervention scores on the Hamilton Depression Rating Scale compared with their pre-intervention scores (P < .05). Scores did not differ significantly over time in the comparison group. Post-intervention, treatment group subjects also had lower Beck Depression Inventory scores than did the comparison group (P < .02), as well as more positive scores for social health (P < .002), mental health (P < .05), and self-esteem (P < .05) on the Duke Health Profile. CONCLUSIONS The findings indicate that brief, telephone-based treatment for minor depression in family practice settings may be an efficient and effective method to decrease symptoms of depression and improve functioning. Nurses in these settings with appropriate training and supervision may also be able to provide this treatment.",TRUE,noun
R140,Software Engineering,R152014,From scenario modeling to scenario programming for reactive systems with dynamic topology,S608757,R152016,Has application in,R152017,car-to-x,"Software-intensive systems often consist of cooperating reactive components. In mobile and reconfigurable systems, their topology changes at run-time, which influences how the components must cooperate. The Scenario Modeling Language (SML) offers a formal approach for specifying the reactive behavior of such systems that aligns with how humans conceive and communicate behavioral requirements. Simulation and formal checks can find specification flaws early. We present a framework for the Scenario-based Programming (SBP) that reflects the concepts of SML in Java and makes the scenario modeling approach available for programming. SBP code can also be generated from SML and extended with platform-specific code, thus streamlining the transition from design to implementation. As an example serves a car-to-x communication system. Demo video and artifact: http://scenariotools.org/esecfse-2017-tool-demo/",TRUE,noun
R140,Software Engineering,R76118,CrowdREquire: A Requirements Engineering Crowdsourcing Platform,S513420,R76120,Utilities in CrowdRE,R113271,Crowd,"This paper describes CrowdREquire, a platform that supports requirements engineering using the crowdsourcing concept. The power of the crowd is in the diversity of talents and expertise available within the crowd and CrowdREquire specifies how requirements engineering can harness skills available in the crowd. In developing CrowdREquire, this paper designs a crowdsourcing business model and market strategy for crowdsourcing requirements engineering irrespective of the professions and areas of expertise of the crowd involved. This is also a specific application of crowdsourcing which establishes the general applicability and efficacy of crowdsourcing. The results obtained could be used as a reference for other crowdsourcing systems as well.",TRUE,noun
R140,Software Engineering,R76123,Crowdsourcing to elicit requirements for MyERP application,S350340,R76125,Utilities in CrowdRE,R76716,Crowd,Crowdsourcing is an emerging method to collect requirements for software systems. Applications seeking global acceptance need to meet the expectations of a wide range of users. Collecting requirements and arriving at consensus with a wide range of users is difficult using traditional method of requirements elicitation. This paper presents crowdsourcing based approach for German medium-size software company MyERP that might help the company to get access to requirements from non-German customers. We present the tasks involved in the proposed solution that would help the company meet the goal of eliciting requirements at a fast pace with non-German customers.,TRUE,noun
R140,Software Engineering,R76126,Crowd-centric Requirements Engineering,S350326,R76128,Utilities in CrowdRE,R76711,Crowd,"Requirements engineering is a preliminary and crucial phase for the correctness and quality of software systems. Despite the agreement on the positive correlation between user involvement in requirements engineering and software success, current development methods employ a too narrow concept of that user and rely on a recruited set of users considered to be representative. Such approaches might not cater for the diversity and dynamism of the actual users and the context of software usage. This is especially true in new paradigms such as cloud and mobile computing. To overcome these limitations, we propose crowd-centric requirements engineering (CCRE) as a revised method for requirements engineering where users become primary contributors, resulting in higher-quality requirements and increased user satisfaction. CCRE relies on crowd sourcing to support a broader user involvement, and on gamification to motivate that voluntary involvement.",TRUE,noun
R140,Software Engineering,R76341,The Crowd in Requirements Engineering: The Landscape and Challenges,S349219,R76343,Utilities in CrowdRE,R76348,Crowd,"Crowd-based requirements engineering (CrowdRE) could significantly change RE. Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.",TRUE,noun
R140,Software Engineering,R76353,Providing a User Forum is not enough: First Experiences of a Software Company with CrowdRE,S513350,R76355,Utilities in CrowdRE,R113255,Crowd,"Crowd-based requirements engineering (CrowdRE) is promising to derive requirements by gathering and analyzing information from the crowd. Setting up CrowdRE in practice seems challenging, although first solutions to support CrowdRE exist. In this paper, we report on a German software company's experience on crowd involvement by using feedback communication channels and a monitoring solution for user-event data. In our case study, we identified several problem areas that a software company is confronted with to setup an environment for gathering requirements from the crowd. We conclude that a CrowdRE process cannot be implemented ad-hoc and that future work is needed to create and analyze a continuous feedback and monitoring data stream.",TRUE,noun
R140,Software Engineering,R111441,Crowd Out the Competition,S507437,R111443,Utilities in CrowdRE,R111451,Crowd,"MyERP is a fictional developer of an Enterprise Resource Planning (ERP) system. Driven by the competition, they face the challenge of losing market share if they fail to deploy a Software as a Service (SaaS) ERP system to the European market quickly, but with high quality product. This also means that the requirements engineering (RE) activities will have to be performed efficiently and provide solid results. An additional problem they face is that their (potential) stakeholders are physically distributed, it makes sense to consider them a ""crowd"". This competition paper suggests a Crowd-based RE approach that first identifies the crowd, then collects and analyzes their feedback to derive wishes and needs, and validate the results through prototyping. For this, techniques are introduced that have so far been rarely employed within RE, but more ""traditional"" RE techniques, will also be integrated and/or adapted to attain the best possible result in the case of MyERP.",TRUE,noun
R140,Software Engineering,R112416,Using the crowds to satisfy unbounded requirements,S511092,R112418,Utilities in CrowdRE,R112421,Crowd,"The Internet is a social space that is shaped by humans through the development of websites, the release of web services, the collaborative creation of encyclopedias and forums, the exchange of information through social networks, the provision of work through crowdsourcing platforms, etc. This landscape offers novel possibilities for software systems to satisfy their requirements, e.g., by retrieving and aggregating the information from Internet websites as well as by crowdsourcing the execution of certain functions. In this paper, we present a special type of functional requirements (called unbounded) that is not fully satisfiable and whose satisfaction is increased by gathering evidence from multiple sources. In addition to charac- terizing unbounded requirements, we explain how to maximize their satisfaction by asking and by combining opinions of mul- tiple sources: people, services, information, and algorithms. We provide evidence of the existence of these requirements through examples by studying a modern Web application (Spotify) and from a traditional system (Microsoft Word).",TRUE,noun
R140,Software Engineering,R112472,CRAFT: A Crowd-Annotated Feedback Technique,S511255,R112474,Utilities in CrowdRE,R112481,Crowd,"The ever increasing accessibility of the web for the crowd offered by various electronic devices such as smartphones has facilitated the communication of the needs, ideas, and wishes of millions of stakeholders. To cater for the scale of this input and reduce the overhead of manual elicitation methods, data mining and text mining techniques have been utilised to automatically capture and categorise this stream of feedback, which is also used, amongst other things, by stakeholders to communicate their requirements to software developers. Such techniques, however, fall short of identifying some of the peculiarities and idiosyncrasies of the natural language that people use colloquially. This paper proposes CRAFT, a technique that utilises the power of the crowd to support richer, more powerful text mining by enabling the crowd to categorise and annotate feedback through a context menu. This, in turn, helps requirements engineers to better identify user requirements within such feedback. This paper presents the theoretical foundations as well as the initial evaluation of this crowd-based feedback annotation technique for requirements identification.",TRUE,noun
R140,Software Engineering,R113030,"Conceptualising, extracting and analysing requirements arguments in users' forums: The CrowdRE‐Arg framework",S512337,R113033,Utilities in CrowdRE,R113041,Crowd,"Due to the pervasive use of online forums and social media, users' feedback are more accessible today and can be used within a requirements engineering context. However, such information is often fragmented, with multiple perspectives from multiple parties involved during on‐going interactions. In this paper, the authors propose a Crowd‐based Requirements Engineering approach by Argumentation (CrowdRE‐Arg). The framework is based on the analysis of the textual conversations found in user forums, identification of features, issues and the arguments that are in favour or opposing a given requirements statement. The analysis is to generate an argumentation model of the involved user statements, retrieve the conflicting‐viewpoints, reason about the winning‐arguments and present that to systems analysts to make informed‐requirements decisions. For this purpose, the authors adopted a bipolar argumentation framework and a coalition‐based meta‐argumentation framework as well as user voting techniques. The CrowdRE‐Arg approach and its algorithms are illustrated through two sample conversations threads taken from the Reddit forum. Additionally, the authors devised algorithms that can identify conflict‐free features or issues based on their supporting and attacking arguments. The authors tested these machine learning algorithms on a set of 3,051 user comments, preprocessed using the content analysis technique. The results show that the proposed algorithms correctly and efficiently identify conflict‐free features and issues along with their winning arguments.",TRUE,noun
R140,Software Engineering,R113054,A gradual approach to crowd-based requirements engineering: The case of conference online social networks,S512395,R113056,Utilities in CrowdRE,R113059,Crowd,"This paper proposes a gradual approach to crowd-based requirements engineering (RE) for supporting the establishment of a more engaged crowd, hence, mitigating the low involvement risk in crowd-based RE. Our approach advocates involving micro-crowds (MCs), where in each micro-crowd, the population is relatively cohesive and familiar with each other. Using this approach, the evolving product is developed iteratively. At each iteration, a new MC can join the already established crowd to enhance the requirements for the next version, while adding terminology to an evolving folksonomy. We are currently using this approach in an on-going research project to develop an online social network (OSN) for academic researchers that will facilitate discussions and knowledge sharing around conferences.",TRUE,noun
R140,Software Engineering,R113085,Discovering Requirements through Goal-Driven Process Mining,S512660,R113087,Utilities in CrowdRE,R113119,Crowd,"Software systems are designed to support their users in performing tasks that are parts of more general processes. Unfortunately, software designers often make invalid assumptions about the users' processes and therefore about the requirements to support such processes. Eliciting and validating such assumptions through manual means (e.g., through observations, interviews, and workshops) is expensive, time-consuming, and may fail to identify the users' real processes. Using process mining may reduce these problems by automating the monitoring and discovery of the actual processes followed by a crowd of users. The Crowd provides an opportunity to involve diverse groups of users to interact with a system and conduct their intended processes. This implicit feedback in the form of discovered processes can then be used to modify the existing system's functionalities and ensure whether or not a software product is used as initially designed. In addition, the analysis of user-system interactions may reveal lacking functionalities and quality issues. These ideas are illustrated on the GreenSoft personal energy management system.",TRUE,noun
R140,Software Engineering,R113122,UCFrame: A Use Case Framework for Crowd-Centric Requirement Acquisition,S512711,R113124,Utilities in CrowdRE,R113130,Crowd,"To build needed mobile applications in specific domains, requirements should be collected and analyzed in holistic approach. However, resource is limited for small vendor groups to perform holistic requirement acquisition and elicitation. The rise of crowdsourcing and crowdfunding gives small vendor groups new opportunities to build needed mobile applications for the crowd. By finding prior stakeholders and gathering requirements effectively from the crowd, mobile application projects can establish sound foundation in early phase of software process. Therefore, integration of crowd-based requirement engineering into software process is important for small vendor groups. Conventional requirement acquisition and elicitation methods are analyst-centric. Very little discussion is in adapting requirement acquisition tools for crowdcentric context. In this study, several tool features of use case documentation are revised in crowd-centric context. These features constitute a use case-based framework, called UCFrame, for crowd-centric requirement acquisition. An instantiation of UCFrame is also presented to demonstrate the effectiveness of UCFrame in collecting crowd requirements for building two mobile applications.",TRUE,noun
R140,Software Engineering,R113137,Mining Context-Aware User Requirements from Crowd Contributed Mobile Data,S512816,R113139,Utilities in CrowdRE,R113146,Crowd,"Internetware is required to respond quickly to emergent user requirements or requirements changes by providing application upgrade or making context-aware recommendations. As user requirements in Internet computing environment are often changing fast and new requirements emerge more and more in a creative way, traditional requirements engineering approaches based on requirements elicitation and analysis cannot ensure the quick response of Internetware. In this paper, we propose an approach for mining context-aware user requirements from crowd contributed mobile data. The approach captures behavior records contributed by a crowd of mobile users and automatically mines context-aware user behavior patterns (i.e., when, where and under what conditions users require a specific service) from them using Apriori-M algorithm. Based on the mined user behaviors, emergent requirements or requirements changes can be inferred from the mined user behavior patterns and solutions that satisfy the requirements can be recommended to users. To evaluate the proposed approach, we conduct an experimental study and show the effectiveness of the requirements mining approach.",TRUE,noun
R140,Software Engineering,R113151,Linguistic Analysis of Crowd Requirements: An Experimental Study,S512916,R113153,Utilities in CrowdRE,R113156,Crowd,"Users of today's online software services are often diversified and distributed, whose needs are hard to elicit using conventional RE approaches. As a consequence, crowd-based, data intensive requirements engineering approaches are considered important. In this paper, we have conducted an experimental study on a dataset of 2,966 requirements statements to evaluate the performance of three text clustering algorithms. The purpose of the study is to aggregate similar requirement statements suggested by the crowd users, and also to identify domain objects and operations, as well as required features from the given requirements statements dataset. The experimental results are then cross-checked with original tags provided by data providers for validation.",TRUE,noun
R140,Software Engineering,R76123,Crowdsourcing to elicit requirements for MyERP application,S350332,R76125,RE activities with crowd involvement,R76713,Elicitation,Crowdsourcing is an emerging method to collect requirements for software systems. Applications seeking global acceptance need to meet the expectations of a wide range of users. Collecting requirements and arriving at consensus with a wide range of users is difficult using traditional method of requirements elicitation. This paper presents crowdsourcing based approach for German medium-size software company MyERP that might help the company to get access to requirements from non-German customers. We present the tasks involved in the proposed solution that would help the company meet the goal of eliciting requirements at a fast pace with non-German customers.,TRUE,noun
R140,Software Engineering,R76341,The Crowd in Requirements Engineering: The Landscape and Challenges,S513393,R76343,RE activities with crowd involvement,R113266,Elicitation,"Crowd-based requirements engineering (CrowdRE) could significantly change RE. Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.",TRUE,noun
R140,Software Engineering,R112472,CRAFT: A Crowd-Annotated Feedback Technique,S511239,R112474,RE activities with crowd involvement,R112475,Elicitation,"The ever increasing accessibility of the web for the crowd offered by various electronic devices such as smartphones has facilitated the communication of the needs, ideas, and wishes of millions of stakeholders. To cater for the scale of this input and reduce the overhead of manual elicitation methods, data mining and text mining techniques have been utilised to automatically capture and categorise this stream of feedback, which is also used, amongst other things, by stakeholders to communicate their requirements to software developers. Such techniques, however, fall short of identifying some of the peculiarities and idiosyncrasies of the natural language that people use colloquially. This paper proposes CRAFT, a technique that utilises the power of the crowd to support richer, more powerful text mining by enabling the crowd to categorise and annotate feedback through a context menu. This, in turn, helps requirements engineers to better identify user requirements within such feedback. This paper presents the theoretical foundations as well as the initial evaluation of this crowd-based feedback annotation technique for requirements identification.",TRUE,noun
R140,Software Engineering,R113122,UCFrame: A Use Case Framework for Crowd-Centric Requirement Acquisition,S512694,R113124,RE activities with crowd involvement,R113126,Elicitation,"To build needed mobile applications in specific domains, requirements should be collected and analyzed in holistic approach. However, resource is limited for small vendor groups to perform holistic requirement acquisition and elicitation. The rise of crowdsourcing and crowdfunding gives small vendor groups new opportunities to build needed mobile applications for the crowd. By finding prior stakeholders and gathering requirements effectively from the crowd, mobile application projects can establish sound foundation in early phase of software process. Therefore, integration of crowd-based requirement engineering into software process is important for small vendor groups. Conventional requirement acquisition and elicitation methods are analyst-centric. Very little discussion is in adapting requirement acquisition tools for crowdcentric context. In this study, several tool features of use case documentation are revised in crowd-centric context. These features constitute a use case-based framework, called UCFrame, for crowd-centric requirement acquisition. An instantiation of UCFrame is also presented to demonstrate the effectiveness of UCFrame in collecting crowd requirements for building two mobile applications.",TRUE,noun
R140,Software Engineering,R113137,Mining Context-Aware User Requirements from Crowd Contributed Mobile Data,S512800,R113139,RE activities with crowd involvement,R113141,Elicitation,"Internetware is required to respond quickly to emergent user requirements or requirements changes by providing application upgrade or making context-aware recommendations. As user requirements in Internet computing environment are often changing fast and new requirements emerge more and more in a creative way, traditional requirements engineering approaches based on requirements elicitation and analysis cannot ensure the quick response of Internetware. In this paper, we propose an approach for mining context-aware user requirements from crowd contributed mobile data. The approach captures behavior records contributed by a crowd of mobile users and automatically mines context-aware user behavior patterns (i.e., when, where and under what conditions users require a specific service) from them using Apriori-M algorithm. Based on the mined user behaviors, emergent requirements or requirements changes can be inferred from the mined user behavior patterns and solutions that satisfy the requirements can be recommended to users. To evaluate the proposed approach, we conduct an experimental study and show the effectiveness of the requirements mining approach.",TRUE,noun
R140,Software Engineering,R113160,Customer Rating Reactions Can Be Predicted Purely using App Features,S512965,R113162,RE activities with crowd involvement,R113163,Elicitation,"In this paper we provide empirical evidence that the rating that an app attracts can be accurately predicted from the features it offers. Our results, based on an analysis of 11,537 apps from the Samsung Android and BlackBerry World app stores, indicate that the rating of 89% of these apps can be predicted with 100% accuracy. Our prediction model is built by using feature and rating information from the existing apps offered in the App Store and it yields highly accurate rating predictions, using only a few (11-12) existing apps for case-based prediction. These findings may have important implications for requirements engineering in app stores: They indicate that app developers may be able to obtain (very accurate) assessments of the customer reaction to their proposed feature sets (requirements), thereby providing new opportunities to support the requirements elicitation process for app developers.",TRUE,noun
R140,Software Engineering,R108199,A Little Bird Told Me: Mining Tweets for Requirements and Software Evolution,S492801,R108201,RE activities with crowd involvement,R108205,Evolution,"Twitter is one of the most popular social networks. Previous research found that users employ Twitter to communicate about software applications via short messages, commonly referred to as tweets, and that these tweets can be useful for requirements engineering and software evolution. However, due to their large number---in the range of thousands per day for popular applications---a manual analysis is unfeasible.In this work we present ALERTme, an approach to automatically classify, group and rank tweets about software applications. We apply machine learning techniques for automatically classifying tweets requesting improvements, topic modeling for grouping semantically related tweets and a weighted function for ranking tweets according to specific attributes, such as content category, sentiment and number of retweets. We ran our approach on 68,108 collected tweets from three software applications and compared its results against software practitioners' judgement. Our results show that ALERTme is an effective approach for filtering, summarizing and ranking tweets about software applications. ALERTme enables the exploitation of Twitter as a feedback channel for information relevant to software evolution, including end-user requirements.",TRUE,noun
R140,Software Engineering,R49480,Software Architecture Optimization Methods: A Systematic Literature Review,S694877,R175402,contains,R175401,Methods,"Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.",TRUE,noun
R140,Software Engineering,R76341,The Crowd in Requirements Engineering: The Landscape and Challenges,S513396,R76343,RE activities with crowd involvement,R113269,Prioritization,"Crowd-based requirements engineering (CrowdRE) could significantly change RE. Performing RE activities such as elicitation with the crowd of stakeholders turns RE into a participatory effort, leads to more accurate requirements, and ultimately boosts software quality. Although any stakeholder in the crowd can contribute, CrowdRE emphasizes one stakeholder group whose role is often trivialized: users. CrowdRE empowers the management of requirements, such as their prioritization and segmentation, in a dynamic, evolved style through collecting and harnessing a continuous flow of user feedback and monitoring data on the usage context. To analyze the large amount of data obtained from the crowd, automated approaches are key. This article presents current research topics in CrowdRE; discusses the benefits, challenges, and lessons learned from projects and experiments; and assesses how to apply the methods and tools in industrial contexts. This article is part of a special issue on Crowdsourcing for Software Engineering.",TRUE,noun
R140,Software Engineering,R186081,Metrics Based Verification and Validation Maturity Model (MB-V2M2),S711434,R186083,Domain Name,R186096,Software,"Verification and validation (V&V) is only marginally addressed in software process improvement models like CMM and CMMI. A roadmap for the establishment of a sound verification and validation process in software development organizations is badly needed. This paper presents a basis for a roadmap; it describes a framework for improvement of the V&V process, based on the Testing Maturity Model (TMM), but with considerable enhancements. The model, tentatively named MB-V2M2 (Metrics Based Verification and Validation Maturity Model), has been initiated by a consortium of industrial companies, consultancy & service agencies and an academic institute, operating and residing in the Netherlands. MB-V2M2 is designed to be universally applicable, to unite the strengths of known (verification and validation) improvement models and to reflect proven work practices. It recommends a metrics base to select process improvements and to track and control implementation of improvement actions. This paper outlines the model and addresses the current status.",TRUE,noun
R140,Software Engineering,R74509,Mapping human values and scrum roles: a study on students' preferences,S342500,R74511,Subjects,L246605,students,"Despite the long tradition on the study of human values, the impact of this field in the software engineering domain is rarely studied. To these regards, this study focuses on applying human values to agile software development process, more specifically to scrum roles. Thus, the goal of the study is to explore possible associations between human values and scrum roles preferences among students. Questionnaires are designed by employing the Short Schwartz's Value Survey and are distributed among 57 students. The results of the quantitative analysis process consisting of descriptive statistics, linear regression models and Pearson correlation coefficients, revealed that values such as power and self-direction influence the preference for the product owner role, the value of hedonism influences the preference for scrum masters and self-direction is associated with team members' preference.",TRUE,noun
R140,Software Engineering,R76123,Crowdsourcing to elicit requirements for MyERP application,S513429,R76125,Utilities in CrowdRE,R113273,Task,"Crowdsourcing is an emerging method to collect requirements for software systems. Applications seeking global acceptance need to meet the expectations of a wide range of users. Collecting requirements and arriving at consensus with a wide range of users is difficult using traditional method of requirements elicitation. This paper presents crowdsourcing based approach for German medium-size software company MyERP that might help the company to get access to requirements from non-German customers. We present the tasks involved in the proposed solution that would help the company meet the goal of eliciting requirements at a fast pace with non-German customers.",TRUE,noun
R140,Software Engineering,R76792,Mining Twitter Feeds for Software User Requirements,S513327,R76794,Utilities in CrowdRE,R113246,Task,"Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.",TRUE,noun
R140,Software Engineering,R76818,App Review Analysis Via Active Learning: Reducing Supervision Effort without Compromising Classification Accuracy,S513299,R76820,Utilities in CrowdRE,R113238,Task,"Automated app review analysis is an important avenue for extracting a variety of requirements-related information. Typically, a first step toward performing such analysis is preparing a training dataset, where developers (experts) identify a set of reviews and, manually, annotate them according to a given task. Having sufficiently large training data is important for both achieving a high prediction accuracy and avoiding overfitting. Given millions of reviews, preparing a training set is laborious. We propose to incorporate active learning, a machine learning paradigm, in order to reduce the human effort involved in app review analysis. Our app review classification framework exploits three active learning strategies based on uncertainty sampling. We apply these strategies to an existing dataset of 4,400 app reviews for classifying app reviews as features, bugs, rating, and user experience. We find that active learning, compared to a training dataset chosen randomly, yields a significantly higher prediction accuracy under multiple scenarios.",TRUE,noun
R140,Software Engineering,R113085,Discovering Requirements through Goal-Driven Process Mining,S512661,R113087,Utilities in CrowdRE,R113120,Task,"Software systems are designed to support their users in performing tasks that are parts of more general processes. Unfortunately, software designers often make invalid assumptions about the users' processes and therefore about the requirements to support such processes. Eliciting and validating such assumptions through manual means (e.g., through observations, interviews, and workshops) is expensive, time-consuming, and may fail to identify the users' real processes. Using process mining may reduce these problems by automating the monitoring and discovery of the actual processes followed by a crowd of users. The Crowd provides an opportunity to involve diverse groups of users to interact with a system and conduct their intended processes. This implicit feedback in the form of discovered processes can then be used to modify the existing system's functionalities and ensure whether or not a software product is used as initially designed. In addition, the analysis of user-system interactions may reveal lacking functionalities and quality issues. These ideas are illustrated on the GreenSoft personal energy management system.",TRUE,noun
R140,Software Engineering,R113173,Software Feature Request Detection in Issue Tracking Systems,S513092,R113175,Utilities in CrowdRE,R113179,Task,"Communication about requirements is often handled in issue tracking systems, especially in a distributed setting. As issue tracking systems also contain bug reports or programming tasks, the software feature requests of the users are often difficult to identify. This paper investigates natural language processing and machine learning features to detect software feature requests in natural language data of issue tracking systems. It compares traditional linguistic machine learning features, such as ""bag of words"", with more advanced features, such as subject-action-object, and evaluates combinations of machine learning features derived from the natural language and features taken from the issue tracking system meta-data. Our investigation shows that some combinations of machine learning features derived from natural language and the issue tracking system meta-data outperform traditional approaches. We show that issues or data fields (e.g. descriptions or comments), which contain software feature requests, can be identified reasonably well, but hardly the exact sentence. Finally, we show that the choice of machine learning algorithms should depend on the goal, e.g. maximization of the detection rate or balance between detection rate and precision. In addition, the paper contributes a double coded gold standard and an open-source implementation to further pursue this topic.",TRUE,noun
R140,Software Engineering,R74547,Supporting Requirements Elicitation by Tool-Supported Video Analysis,S342794,R74593,Has research method,R74598,Experiment,"Workshops are an established technique for requirements elicitation. A lot of information is revealed during a workshop, which is generally captured via textual minutes. The scribe suffers from a cognitive overload due to the difficulty of gathering all information, listening and writing at the same time. Video recording is used as additional option to capture more information, including non-verbal gestures. Since a workshop can take several hours, the recorded video will be long and may be disconnected from the scribe's notes. Therefore, the weak and unclear structure of the video complicates the access to the recorded information, for example in subsequent requirements engineering activities. We propose the combination of textual minutes and video with a software tool. Our objective is connecting textual notes with the corresponding part of the video. By highlighting relevant sections of a video and attaching notes that summarize those sections, a more useful structure can be achieved. This structure allows an easy and fast access to the relevant information and their corresponding video context. Thus, a scribe's overload can be mitigated and further use of a video can be simplified. Tool-supported analysis of such an enriched video can facilitate the access to all communicated information of a workshop. This allows an easier elicitation of high-quality requirements. We performed a preliminary evaluation of our approach in an experimental set-up with 12 participants. They were able to elicit higher-quality requirements with our software tool.",TRUE,noun
R30,Terrestrial and Aquatic Ecology,R171893,Alien plants can be associated with a decrease in local and regional native richness even when at low abundance,S686393,R171897,Ecological Level of evidence,L462499,Community,"The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native α‐richness (plot‐scale species numbers), and then assessed how this local decline was translated into declines in native species γ‐richness (landscape‐scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native α‐richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native γ‐richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native α‐richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.",TRUE,noun
R30,Terrestrial and Aquatic Ecology,R171893,Alien plants can be associated with a decrease in local and regional native richness even when at low abundance,S686396,R171897,Investigated species,L462502,plants,"The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native α‐richness (plot‐scale species numbers), and then assessed how this local decline was translated into declines in native species γ‐richness (landscape‐scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native α‐richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native γ‐richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native α‐richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.",TRUE,noun
R30,Terrestrial and Aquatic Ecology,R109846,"Effects of grazing and climate warming on plant diversity, productivity and living state in the alpine rangelands and cultivated grasslands of the Qinghai-Tibetan Plateau",S501153,R109848,Experimental treatment,R110034,Warming,"Overgrazing and climate warming may be important drivers of alpine rangeland degradation in the Qinghai-Tibetan Plateau (QTP). In this study, the effects of grazing and experimental warming on the vegetation of cultivated grasslands, alpine steppe and alpine meadows on the QTP were investigated. The three treatments were a control, a warming treatment and a grazing treatment and were replicated three times on each vegetation type. The warming treatment was applied using fibreglass open-top chambers and the grazing treatment was continuous grazing by yaks at a moderately high stocking rate. Both grazing and warming negatively affected vegetation cover. Grazing reduced vegetation height while warming increased vegetation height. Grazing increased but warming reduced plant diversity. Grazing decreased and warming increased the aboveground plant biomass. Grazing increased the preferred forage species in native rangelands (alpine steppe and alpine meadow), while warming increased the preferred forage species in the cultivated grassland. Grazing reduced the vegetation living state (VLS) of all three alpine grasslands by nearly 70%, while warming reduced the VLS of the cultivated grassland and the alpine steppe by 32% and 56%, respectively, and promoted the VLS of the alpine meadow by 20.5%. It was concluded that overgrazing was the main driver of change to the alpine grassland vegetation on the QTP. The findings suggest that grazing regimes should be adapted in order for them to be sustainable in a warmer future.",TRUE,noun
R30,Terrestrial and Aquatic Ecology,R109846,"Effects of grazing and climate warming on plant diversity, productivity and living state in the alpine rangelands and cultivated grasslands of the Qinghai-Tibetan Plateau",S501147,R109848,Study type,L362399,Experiment,"Overgrazing and climate warming may be important drivers of alpine rangeland degradation in the Qinghai-Tibetan Plateau (QTP). In this study, the effects of grazing and experimental warming on the vegetation of cultivated grasslands, alpine steppe and alpine meadows on the QTP were investigated. The three treatments were a control, a warming treatment and a grazing treatment and were replicated three times on each vegetation type. The warming treatment was applied using fibreglass open-top chambers and the grazing treatment was continuous grazing by yaks at a moderately high stocking rate. Both grazing and warming negatively affected vegetation cover. Grazing reduced vegetation height while warming increased vegetation height. Grazing increased but warming reduced plant diversity. Grazing decreased and warming increased the aboveground plant biomass. Grazing increased the preferred forage species in native rangelands (alpine steppe and alpine meadow), while warming increased the preferred forage species in the cultivated grassland. Grazing reduced the vegetation living state (VLS) of all three alpine grasslands by nearly 70%, while warming reduced the VLS of the cultivated grassland and the alpine steppe by 32% and 56%, respectively, and promoted the VLS of the alpine meadow by 20.5%. It was concluded that overgrazing was the main driver of change to the alpine grassland vegetation on the QTP. The findings suggest that grazing regimes should be adapted in order for them to be sustainable in a warmer future.",TRUE,noun
R369,"Theory, Knowledge and Science",R75675,Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods,S346243,R75677,Has evaluation,R75679,Methods,"In the recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term ""Knowledge Graph"" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.",TRUE,noun
R369,"Theory, Knowledge and Science",R76779,Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web,S350518,R76780,Has result,L250165,Seminar,"The increasingly pervasive nature of the Web, expanding to devices and things in everyday life, along with new trends in Artificial Intelligence call for new paradigms and a new look on Knowledge Representation and Processing at scale for the Semantic Web. The emerging, but still to be concretely shaped concept of ""Knowledge Graphs"" provides an excellent unifying metaphor for this current status of Semantic Web research. More than two decades of Semantic Web research provides a solid basis and a promising technology and standards stack to interlink data, ontologies and knowledge on the Web. However, neither are applications for Knowledge Graphs as such limited to Linked Open Data, nor are instantiations of Knowledge Graphs in enterprises – while often inspired by – limited to the core Semantic Web stack. This report documents the program and the outcomes of Dagstuhl Seminar 18371 ""Knowledge Graphs: New Directions for Knowledge Representation on the Semantic Web"", where a group of experts from academia and industry discussed fundamental questions around these topics for a week in early September 2018, including the following: what are knowledge graphs? Which applications do we see to emerge? Which open research questions still need be addressed and which technology gaps still need to be closed?",TRUE,noun
R342,Urban Studies,R138694,Incremental development of a shared urban ontology: the Urbamet experience,S551167,R138695,Ontology type,R34990,Thesaurus,"Thesauri are used for document referencing. They define hierarchies of domains. We show how document and domain contents can be used to validate and update a classification based on a thesaurus. We use document indexing and classification techniques to automate these operations. We also draft a methodology to systematically address those issues. Our techniques are applied to Urbamet, a thesaurus in the field of town planning.",TRUE,noun
R342,Urban Studies,R138694,Incremental development of a shared urban ontology: the Urbamet experience,S551165,R138695,Ontology name,R138697,Urbamet,"Thesauri are used for document referencing. They define hierarchies of domains. We show how document and domain contents can be used to validate and update a classification based on a thesaurus. We use document indexing and classification techniques to automate these operations. We also draft a methodology to systematically address those issues. Our techniques are applied to Urbamet, a thesaurus in the field of town planning.",TRUE,noun
R374,Urban Studies and Planning,R146434,Skunkworks finder: unlocking the diversity advantage of urban innovation ecosystems,S586274,R146436,uses Recommendation Method,R146437,Algorithm,"Entrepreneurs and start-up founders using innovation spaces and hubs often find themselves inside a filter bubble or echo chamber, where like-minded people tend to come up with similar ideas and recommend similar approaches to innovation. This trend towards homophily and a polarisation of like-mindedness is aggravated by algorithmic filtering and recommender systems embedded in mobile technology and social media platforms. Yet, genuine innovation thrives on social inclusion fostering a diversity of ideas. To escape these echo chambers, we designed and tested the Skunkworks Finder - an exploratory tool that employs social network analysis to help users discover spaces of difference and otherness in their local urban innovation ecosystem.",TRUE,noun
R374,Urban Studies and Planning,R146150,A Roadmap on Improved Performance-centric Cloud Storage Estimation Approach for Database System Deployment in Cloud Environment,S585191,R146152,Components ,R146118,Application,"Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on an permanently basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments, choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively majority in number, avoid migration towards cloud. Considering this fact, performance and estimation are being the major critical factors for any application deployment in cloud environment; this paper brings the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. Objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Basis this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study.",TRUE,noun
R374,Urban Studies and Planning,R146150,A Roadmap on Improved Performance-centric Cloud Storage Estimation Approach for Database System Deployment in Cloud Environment,S585190,R146152,Components ,R146047,Business,"Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on an permanently basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments, choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively majority in number, avoid migration towards cloud. Considering this fact, performance and estimation are being the major critical factors for any application deployment in cloud environment; this paper brings the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. Objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Basis this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study.",TRUE,noun
R374,Urban Studies and Planning,R149792,Changing competences of public managers: tensions in commitment,S600330,R149794,has character traits ,R149250,commitment,"The literature on managerial competences has not sufficiently addressed the value contents of competences and the generic features of public managers. This article presents a model of five competence areas: task competence, professional competence in substantive policy field, professional competence in administration, political competence and ethical competence. Each competence area includes both value and instrumental competences. Relatively permanent value competences are understood as commitments. The assumptions of new public management question not only the instrumental competences but also the commitments of traditional public service. The efficacy of human resource development is limited in learning new commitments. Apart from structural reforms that speed up the process, the friction in the change of commitments is seen as slow cultural change in many public organisations. This is expressed by transitional tensions in task commitment, professional commitment, political commitment, and ethical commitment of public managers.",TRUE,noun
R374,Urban Studies and Planning,R146112,Industry 4.0 Complemented with EA Approach: A Proposal for Digital Transformation Success,S585103,R146114,Components ,R146119,Data,"Manufacturing industry based on steam know as Industry 1.0 is evolving to Industry 4.0 a digital ecosystem consisting of an interconnected automated system with real-time data. This paper investigates and proposes, how the digital ecosystem complemented with Enterprise Architecture practice will ensure the success of digital transformation.",TRUE,noun
R374,Urban Studies and Planning,R146150,A Roadmap on Improved Performance-centric Cloud Storage Estimation Approach for Database System Deployment in Cloud Environment,S585192,R146152,Components ,R146119,Data,"Cloud computing has taken the limelight with respect to the present industry scenario due to its multi-tenant and pay-as-you-use models, where users need not bother about buying resources like hardware, software, infrastructure, etc. on an permanently basis. As much as the technological benefits, cloud computing also has its downside. By looking at its financial benefits, customers who cannot afford initial investments, choose cloud by compromising on its concerns, like security, performance, estimation, availability, etc. At the same time due to its risks, customers - relatively majority in number, avoid migration towards cloud. Considering this fact, performance and estimation are being the major critical factors for any application deployment in cloud environment; this paper brings the roadmap for an improved performance-centric cloud storage estimation approach, which is based on balanced PCTFree allocation technique for database systems deployment in cloud environment. Objective of this approach is to highlight the set of key activities that have to be jointly done by the database technical team and business users of the software system in order to perform an accurate analysis to arrive at estimation for sizing of the database. For the evaluation of this approach, an experiment has been performed through varied-size PCTFree allocations on an experimental setup with 100000 data records. The result of this experiment shows the impact of PCTFree configuration on database performance. Basis this fact, we propose an improved performance-centric cloud storage estimation approach in cloud. Further, this paper applies our improved performance-centric storage estimation approach on decision support system (DSS) as a case study.",TRUE,noun
R374,Urban Studies and Planning,R142729,CityPulse: Large Scale Data Analytics Framework for Smart Cities,S579670,R144797,Ontology domains,R144788,Event,"Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people's everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. To goal is to break away from silo applications and enable cross-domain data integration. The CityPulse framework integrates multimodal, mixed quality, uncertain and incomplete data to create reliable, dependable information and continuously adapts data processing techniques to meet the quality of information requirements from end users. Different than existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support easy development of custom-made applications for citizens. The benefits and the effectiveness of the framework are demonstrated in a use-case scenario implementation presented in this paper.",TRUE,noun
R374,Urban Studies and Planning,R154672,Digital Twin and Big Data Towards Smart Manufacturing and Industry 4.0: 360 Degree Comparison,S619008,R154674,has environment,R154676,General,"With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",TRUE,noun
R374,Urban Studies and Planning,R74317,Impact of COVID-19 pandemic on mobility in ten countries and associated perceived risk for all transport modes,S535329,R74325,Country,R75461,Iran,"The restrictive measures implemented in response to the COVID-19 pandemic have triggered sudden massive changes to travel behaviors of people all around the world. This study examines the individual mobility patterns for all transport modes (walk, bicycle, motorcycle, car driven alone, car driven in company, bus, subway, tram, train, airplane) before and during the restrictions adopted in ten countries on six continents: Australia, Brazil, China, Ghana, India, Iran, Italy, Norway, South Africa and the United States. This cross-country study also aims at understanding the predictors of protective behaviors related to the transport sector and COVID-19. Findings hinge upon an online survey conducted in May 2020 (N = 9,394). The empirical results quantify tremendous disruptions for both commuting and non-commuting travels, highlighting substantial reductions in the frequency of all types of trips and use of all modes. In terms of potential virus spread, airplanes and buses are perceived to be the riskiest transport modes, while avoidance of public transport is consistently found across the countries. According to the Protection Motivation Theory, the study sheds new light on the fact that two indicators, namely income inequality, expressed as Gini index, and the reported number of deaths due to COVID-19 per 100,000 inhabitants, aggravate respondents’ perceptions. This research indicates that socio-economic inequality and morbidity are not only related to actual health risks, as well documented in the relevant literature, but also to the perceived risks. These findings document the global impact of the COVID-19 crisis as well as provide guidance for transportation practitioners in developing future strategies.",TRUE,noun
R374,Urban Studies and Planning,R154617,The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles,S618820,R154619,Has health,R154615,management,"Future generations of NASA and U.S. Air Force vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation. Current approaches for certification, fleet management and sustainment are largely based on statistical distributions of material properties, heuristic design philosophies, physical testing and assumed similitude between testing and operational conditions and will likely be unable to address these extreme requirements. To address the shortcomings of conventional approaches, a fundamental paradigm shift is needed. This paradigm shift, the Digital Twin, integrates ultra-high fidelity simulation with the vehicle s on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.",TRUE,noun
R374,Urban Studies and Planning,R146060,Tools of quality economics: sustainable development of a ‘smart city’ under conditions of digital transformation of the economy,S584961,R146062,Components ,R146066,Metrology,"The article covers the issues of ensuring sustainable city development based on the achievements of digitalization. Attention is also paid to the use of quality economy tools in managing 'smart' cities under conditions of the digital transformation of the national economy. The current state of 'smart' cities and the main factors contributing to their sustainable development, including the digitalization requirements is analyzed. Based on the analysis of statistical material, the main prospects to form the 'smart city' concept, the possibility to assess such parameters as 'life quality', 'comfort', 'rational organization', 'opportunities', 'sustainable development', 'city environment accessibility', 'use of communication technologies'. The role of tools for quality economics is revealed in ensuring the big city life under conditions of digital economy. The concept of 'life quality' is considered, which currently is becoming one of the fundamental vectors of the human civilization development, a criterion that is increasingly used to compare countries and territories. Special attention is paid to such tools and methods of quality economics as standardization, metrology and quality management. It is proposed to consider these tools as a mechanism for solving the most important problems in the national economy development under conditions of digital transformation.",TRUE,noun
R374,Urban Studies and Planning,R154681,Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries,S619034,R154682,has performance,R154683,Part,"Technological advances have driven dramatic increases in industrial productivity since the dawn of the Industrial Revolution. The steam engine powered factories in the nineteenth century, electrification led to mass production in the early part of the twentieth century, and industry became automated in the 1970s. In the decades that followed, however, industrial technological advancements were only incremental, especially compared with the breakthroughs that transformed IT, mobile communications, and e-commerce.",TRUE,noun
R374,Urban Studies and Planning,R146122,Evolution of Enterprise Architecture for Digital Transformation,S585128,R146124,Components ,R146128,Perspective,"The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.",TRUE,noun
R374,Urban Studies and Planning,R142729,CityPulse: Large Scale Data Analytics Framework for Smart Cities,S576187,R143938,Ontologies which have been used as referenced,R143939,prov,"Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people's everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. To goal is to break away from silo applications and enable cross-domain data integration. The CityPulse framework integrates multimodal, mixed quality, uncertain and incomplete data to create reliable, dependable information and continuously adapts data processing techniques to meet the quality of information requirements from end users. Different than existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support easy development of custom-made applications for citizens. The benefits and the effectiveness of the framework are demonstrated in a use-case scenario implementation presented in this paper.",TRUE,noun
R374,Urban Studies and Planning,R146070,"Smart city initiatives in the context of digital transformation: scope, services and technologies",S584984,R146074,Components ,R146078,Scope,"Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.",TRUE,noun
R374,Urban Studies and Planning,R146070,"Smart city initiatives in the context of digital transformation: scope, services and technologies",S584983,R146074,Components ,R146077,Services,"Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.",TRUE,noun
R374,Urban Studies and Planning,R146443,Encouraging civic participation through local news aggregation,S586295,R146445,has Implementation level,R146419,System,"Traditional sources of information for small and rural communities have been disappearing over the past decade. A lot of the information and discussion related to such local geographic areas is now scattered across websites of numerous local organizations, individual blogs, social media and other user-generated media (YouTube, Flickr). It is important to capture this information and make it easily accessible to local citizens to facilitate citizen engagement and social interaction. Furthermore, a system that has location-based support can provide local citizens with an engaging way to interact with this information and identify the local issues most relevant to them. A location-based interface for a local geographic area enables people to identify and discuss local issues related to specific locations such as a particular street or a road construction site. We created an information aggregator, called the Virtual Town Square (VTS), to support and facilitate local discussion and interaction. We created a location-based interface for users to access the information collected by VTS. In this paper, we discuss focus group interviews with local citizens that motivated our design of a local news and information aggregator to facilitate civic participation. We then discuss the unique design challenges in creating such a local news aggregator and our design approach to create a local information ecosystem. We describe VTS and the initial evaluation and feedback we received from local users and through weekly meetings with community partners.",TRUE,noun
R374,Urban Studies and Planning,R146070,"Smart city initiatives in the context of digital transformation: scope, services and technologies",S584985,R146074,Components ,R146079,Technologies,"Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.",TRUE,noun
R374,Urban Studies and Planning,R146416,Collaborating Filtering Community Image Recommendation System Based on Scene,S586221,R146418,has Application Scope,R138229,User,"With the advancement of smart city, the development of intelligent mobile terminal and wireless network, the traditional text information service no longer meet the needs of the community residents, community image service appeared as a new media service. “There are pictures of the truth” has become a community residents to understand and master the new dynamic community, image information service has become a new information service. However, there are two major problems in image information service. Firstly, the underlying eigenvalues extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user’s understanding; secondly, in community life of the image data increasing quickly, it is difficult to find their own interested image data. Aiming at the two problems, this paper proposes a unified image semantic scene model to express the image content. On this basis, a collaborative filtering recommendation model of fusion scene semantics is proposed. In the recommendation model, a comprehensiveness and accuracy user interest model is proposed to improve the recommendation quality. The results of the present study have achieved good results in the pilot cities of Wenzhou and Yan'an, and it is applied normally.",TRUE,noun
R374,Urban Studies and Planning,R146434,Skunkworks finder: unlocking the diversity advantage of urban innovation ecosystems,S586279,R146436,has Application Scope,R138229,User,"Entrepreneurs and start-up founders using innovation spaces and hubs often find themselves inside a filter bubble or echo chamber, where like-minded people tend to come up with similar ideas and recommend similar approaches to innovation. This trend towards homophily and a polarisation of like-mindedness is aggravated by algorithmic filtering and recommender systems embedded in mobile technology and social media platforms. Yet, genuine innovation thrives on social inclusion fostering a diversity of ideas. To escape these echo chambers, we designed and tested the Skunkworks Finder - an exploratory tool that employs social network analysis to help users discover spaces of difference and otherness in their local urban innovation ecosystem.",TRUE,noun
R374,Urban Studies and Planning,R146122,Evolution of Enterprise Architecture for Digital Transformation,S585127,R146124,Components ,R146127,Value,"The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.",TRUE,noun
R57,Virology,R51231,Broad anti-coronaviral activity of FDA approved drugs against SARS-CoV-2 in vitro and SARS-CoV in vivo,S333267,R70138,has role,R51487,drug,"Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.",TRUE,noun
R57,Virology,R42003,Virus Isolation from the First Patient with SARS-CoV-2 in Korea,S132054,R42017,patient characteristics,R42031,age,Novel coronavirus (SARS-CoV-2) is found to cause a large outbreak started from Wuhan since December 2019 in China and SARS-CoV-2 infections have been reported with epidemiological linkage to China in 25 countries until now. We isolated SARS-CoV-2 from the oropharyngeal sample obtained from the patient with the first laboratory-confirmed SARS-CoV-2 infection in Korea. Cytopathic effects of SARS-CoV-2 in the Vero cell cultures were confluent 3 days after the first blind passage of the sample. Coronavirus was confirmed with spherical particle having a fringe reminiscent of crown on transmission electron microscopy. Phylogenetic analyses of whole genome sequences showed that it clustered with other SARS-CoV-2 reported from Wuhan.,TRUE,noun
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694138,R175262,Has Host,L466714,bats,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,noun
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694455,R175294,Has Host,L466999,bats,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,noun
R57,Virology,R110711,Antiviral Chromones from the Stem of Cassia siamea,S504485,R110713,Class of compound,L364378,Chromone,"Seven new chromones, siamchromones A-G (1-7), and 12 known chromones (8-19) were isolated from the stems of Cassia siamea. Compounds 1-19 were evaluated for their antitobacco mosaic virus (anti-TMV) and anti-HIV-1 activities. Compound 6 showed antitobacco mosaic virus (anti-TMV) activity with an inhibition rate of 35.3% and IC50 value of 31.2 μM, which is higher than that of the positive control, ningnamycin. Compounds 1, 10, 13, and 16 showed anti-TMV activities with inhibition rates above 10%. Compounds 4, 6, 13, and 19 showed anti-HIV-1 activities with therapeutic index values above 50.",TRUE,noun
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694142,R175262,Has Virus,L466718,coronavirus,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,noun
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694462,R175294,Has Virus,L467006,Ebola,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,noun
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694376,R175286,Has Virus,L466928,herpesvirus,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,noun
R57,Virology,R51252,Identification of inhibitors of SARS-CoV-2 in-vitro cellular toxicity in human (Caco-2) cells using a large scale drug repurposing collection,S156839,R51254,has mode of action,R51248,inhibition,"To identify possible candidates for progression towards clinical studies against SARS-CoV-2, we screened a well-defined collection of 5632 compounds including 3488 compounds which have undergone clinical investigations (marketed drugs, phases 1 -3, and withdrawn) across 600 indications. Compounds were screened for their inhibition of viral induced cytotoxicity using the human epithelial colorectal adenocarcinoma cell line Caco-2 and a SARS-CoV-2 isolate. The primary screen of 5632 compounds gave 271 hits. A total of 64 compounds with IC50 <20 µM were identified, including 19 compounds with IC50 < 1 µM. Of this confirmed hit population, 90% have not yet been previously reported as active against SARS-CoV-2 in-vitro cell assays. Some 37 of the actives are launched drugs, 19 are in phases 1-3 and 10 pre-clinical. Several inhibitors were associated with modulation of host pathways including kinase signaling P53 activation, ubiquitin pathways and PDE activity modulation, with long chain acyl transferases were effective viral inhibitors.",TRUE,noun
R57,Virology,R69999,Human organ chip-enabled pipeline to rapidly repurpose therapeutics during viral pandemics,S332558,R70000,has mode of action,R51248,inhibition,"The rising threat of pandemic viruses, such as SARS-CoV-2, requires development of new preclinical discovery platforms that can more rapidly identify therapeutics that are active in vitro and also translate in vivo. Here we show that human organ-on-a-chip (Organ Chip) microfluidic culture devices lined by highly differentiated human primary lung airway epithelium and endothelium can be used to model virus entry, replication, strain-dependent virulence, host cytokine production, and recruitment of circulating immune cells in response to infection by respiratory viruses with great pandemic potential. We provide a first demonstration of drug repurposing by using oseltamivir in influenza A virus-infected organ chip cultures and show that co-administration of the approved anticoagulant drug, nafamostat, can double oseltamivir’s therapeutic time window. With the emergence of the COVID-19 pandemic, the Airway Chips were used to assess the inhibitory activities of approved drugs that showed inhibition in traditional cell culture assays only to find that most failed when tested in the Organ Chip platform. When administered in human Airway Chips under flow at a clinically relevant dose, one drug – amodiaquine - significantly inhibited infection by a pseudotyped SARS-CoV-2 virus. Proof of concept was provided by showing that amodiaquine and its active metabolite (desethylamodiaquine) also significantly reduce viral load in both direct infection and animal-to-animal transmission models of native SARS-CoV-2 infection in hamsters. These data highlight the value of Organ Chip technology as a more stringent and physiologically relevant platform for drug repurposing, and suggest that amodiaquine should be considered for future clinical testing.",TRUE,noun
R57,Virology,R44759,Transmission potential of COVID-19 in Iran,S326554,R44771,location,R44769,Iran,"Abstract We estimated the reproduction number of 2020 Iranian COVID-19 epidemic using two different methods: R 0 was estimated at 4.4 (95% CI, 3.9, 4.9) (generalized growth model) and 3.50 (1.28, 8.14) (epidemic doubling time) (February 19 - March 1) while the effective R was estimated at 1.55 (1.06, 2.57) (March 6-19).",TRUE,noun
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694136,R175262,Has Host,L466712,Myotis,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,noun
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694137,R175262,Has Host,L466713,myotis,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,noun
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694375,R175286,Has Virus,L466927,rabies,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,noun
R57,Virology,R43039,Presymptomatic SARS-CoV-2 Infections and Transmission in a Skilled Nursing Facility,S133835,R43040,Has method,R43062,sequencing,"Abstract Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection can spread rapidly within skilled nursing facilities. After identification of a case of Covid-19 in a skilled nursing facility, we assessed transmission and evaluated the adequacy of symptom-based screening to identify infections in residents. Methods We conducted two serial point-prevalence surveys, 1 week apart, in which assenting residents of the facility underwent nasopharyngeal and oropharyngeal testing for SARS-CoV-2, including real-time reverse-transcriptase polymerase chain reaction (rRT-PCR), viral culture, and sequencing. Symptoms that had been present during the preceding 14 days were recorded. Asymptomatic residents who tested positive were reassessed 7 days later. Residents with SARS-CoV-2 infection were categorized as symptomatic with typical symptoms (fever, cough, or shortness of breath), symptomatic with only atypical symptoms, presymptomatic, or asymptomatic. Results Twenty-three days after the first positive test result in a resident at this skilled nursing facility, 57 of 89 residents (64%) tested positive for SARS-CoV-2. Among 76 residents who participated in point-prevalence surveys, 48 (63%) tested positive. Of these 48 residents, 27 (56%) were asymptomatic at the time of testing; 24 subsequently developed symptoms (median time to onset, 4 days). Samples from these 24 presymptomatic residents had a median rRT-PCR cycle threshold value of 23.1, and viable virus was recovered from 17 residents. As of April 3, of the 57 residents with SARS-CoV-2 infection, 11 had been hospitalized (3 in the intensive care unit) and 15 had died (mortality, 26%). Of the 34 residents whose specimens were sequenced, 27 (79%) had sequences that fit into two clusters with a difference of one nucleotide. 
Conclusions Rapid and widespread transmission of SARS-CoV-2 was demonstrated in this skilled nursing facility. More than half of residents with positive test results were asymptomatic at the time of testing and most likely contributed to transmission. Infection-control strategies focused solely on symptomatic residents were not sufficient to prevent transmission after SARS-CoV-2 introduction into this facility.",TRUE,noun
R57,Virology,R44137,Full-genome sequences of the first two SARS-CoV-2 viruses from India,S134451,R44139,Has method,R44140,sequencing,"Background & objectives: Since December 2019, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has globally affected 195 countries. In India, suspected cases were screened for SARS-CoV-2 as per the advisory of the Ministry of Health and Family Welfare. The objective of this study was to characterize SARS-CoV-2 sequences from three identified positive cases as on February 29, 2020. Methods: Throat swab/nasal swab specimens for a total of 881 suspected cases were screened by E gene and confirmed by RdRp (1), RdRp (2) and N gene real-time reverse transcription-polymerase chain reactions and next-generation sequencing. Phylogenetic analysis, molecular characterization and prediction of B- and T-cell epitopes for Indian SARS-CoV-2 sequences were undertaken. Results: Three cases with a travel history from Wuhan, China, were confirmed positive for SARS-CoV-2. Almost complete (29,851 nucleotides) genomes of case 1, case 3 and a fragmented genome for case 2 were obtained. The sequences of Indian SARS-CoV-2 though not identical showed high (~99.98%) identity with Wuhan seafood market pneumonia virus (accession number: NC 045512). Phylogenetic analysis showed that the Indian sequences belonged to different clusters. Predicted linear B-cell epitopes were found to be concentrated in the S1 domain of spike protein, and a conformational epitope was identified in the receptor-binding domain. The predicted T-cell epitopes showed broad human leucocyte antigen allele coverage of A and B supertypes predominant in the Indian population. Interpretation & conclusions: The two SARS-CoV-2 sequences obtained from India represent two different introductions into the country. The genetic heterogeneity is as noted globally. 
The identified B- and T-cell epitopes may be considered suitable for future experiments towards the design of vaccines and diagnostics. Continuous monitoring and analysis of the sequences of new cases from India and the other affected countries would be vital to understand the genetic evolution and rates of substitution of the SARS-CoV-2.",TRUE,noun
R57,Virology,R41605,"Serological and molecular findings during SARS-CoV-2 infection: the first case study in Finland, January to February 2020",S131413,R41607,patient characteristics,R41613,symptoms,"The first case of coronavirus disease (COVID-19) in Finland was confirmed on 29 January 2020. No secondary cases were detected. We describe the clinical picture and laboratory findings 3–23 days since the first symptoms. The SARS-CoV-2/Finland/1/2020 virus strain was isolated, the genome showing a single nucleotide substitution to the reference strain from Wuhan. Neutralising antibody response appeared within 9 days along with specific IgM and IgG response, targeting particularly nucleocapsid and spike proteins.",TRUE,noun
R57,Virology,R51231,Broad anti-coronaviral activity of FDA approved drugs against SARS-CoV-2 in vitro and SARS-CoV in vivo,S333396,R70172,Has participant,R51249,virus,"Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.",TRUE,noun
R57,Virology,R51373,Identification of antiviral drug candidates against SARS-CoV-2 from FDA-approved drugs,S157286,R51399,Has participant,R51249,virus,"Drug repositioning is the only feasible option to immediately address the COVID-19 global challenge. We screened a panel of 48 FDA-approved drugs against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) which were preselected by an assay of SARS-CoV. We identified 24 potential antiviral drug candidates against SARS-CoV-2 infection. Some drug candidates showed very low 50% inhibitory concentrations (IC 50 s), and in particular, two FDA-approved drugs—niclosamide and ciclesonide—were notable in some respects.",TRUE,noun
R57,Virology,R51386,In vitro screening of a FDA approved chemical library reveals potential inhibitors of SARS-CoV-2 replication,S157313,R51404,Has participant,R51249,virus,"A novel coronavirus, named SARS-CoV-2, emerged in 2019 from Hubei region in China and rapidly spread worldwide. As no approved therapeutics exists to treat Covid-19, the disease associated to SARS-Cov-2, there is an urgent need to propose molecules that could quickly enter into clinics. Repurposing of approved drugs is a strategy that can bypass the time consuming stages of drug development. In this study, we screened the Prestwick Chemical Library® composed of 1,520 approved drugs in an infected cell-based assay. 90 compounds were identified. The robustness of the screen was assessed by the identification of drugs, such as Chloroquine derivatives and protease inhibitors, already in clinical trials. The hits were sorted according to their chemical composition and their known therapeutic effect, then EC50 and CC50 were determined for a subset of compounds. Several drugs, such as Azithromycine, Opipramol, Quinidine or Omeprazol present antiviral potency with 2105), satisfactory responsivity (1.05 A/W), and excellent selectivity for the deep-ultraviolet band, compared to those with ordinary metal electrodes. The raise of photocurrent and responsivity is attributed to the increase of incident photons through Gr and separated carriers caused by the built-in electric field formed at the interface of Gr and Ga2O3:Zn films. The proposed ideas and methods of tailoring Gr can not only improve the performance of devices but more importantly contribute to the practical development of graphene.",TRUE,noun
,Photonics,R145530,High Performance of Solution-Processed Amorphous p-Channel Copper-Tin-Sulfur-Gallium Oxide Thin-Film Transistors by UV/O3 Photocuring,S582779,R145532,keywords,L407031,Semiconductors,"The development of p-type metal-oxide semiconductors (MOSs) is of increasing interest for applications in next-generation optoelectronic devices, display backplane, and low-power-consumption complementary MOS circuits. Here, we report the high performance of solution-processed, p-channel copper-tin-sulfide-gallium oxide (CTSGO) thin-film transistors (TFTs) using UV/O3 exposure. Hall effect measurement confirmed the p-type conduction of CTSGO with Hall mobility of 6.02 ± 0.50 cm2 V-1 s-1. The p-channel CTSGO TFT using UV/O3 treatment exhibited the field-effect mobility (μFE) of 1.75 ± 0.15 cm2 V-1 s-1 and an on/off current ratio (ION/IOFF) of ∼104 at a low operating voltage of -5 V. The significant enhancement in the device performance is due to the good p-type CTSGO material, smooth surface morphology, and fewer interfacial traps between the semiconductor and the Al2O3 gate insulator. Therefore, the p-channel CTSGO TFT can be applied for CMOS MOS TFT circuits for next-generation display.",TRUE,noun
,Photonics,R145530,High Performance of Solution-Processed Amorphous p-Channel Copper-Tin-Sulfur-Gallium Oxide Thin-Film Transistors by UV/O3 Photocuring,S582781,R145532,keywords,L407033,Transistors,"The development of p-type metal-oxide semiconductors (MOSs) is of increasing interest for applications in next-generation optoelectronic devices, display backplane, and low-power-consumption complementary MOS circuits. Here, we report the high performance of solution-processed, p-channel copper-tin-sulfide-gallium oxide (CTSGO) thin-film transistors (TFTs) using UV/O3 exposure. Hall effect measurement confirmed the p-type conduction of CTSGO with Hall mobility of 6.02 ± 0.50 cm2 V-1 s-1. The p-channel CTSGO TFT using UV/O3 treatment exhibited the field-effect mobility (μFE) of 1.75 ± 0.15 cm2 V-1 s-1 and an on/off current ratio (ION/IOFF) of ∼104 at a low operating voltage of -5 V. The significant enhancement in the device performance is due to the good p-type CTSGO material, smooth surface morphology, and fewer interfacial traps between the semiconductor and the Al2O3 gate insulator. Therefore, the p-channel CTSGO TFT can be applied for CMOS MOS TFT circuits for next-generation display.",TRUE,noun
,electrical engineering,R145549,Solution-processed high-performance p-channel copper tin sulfide thin-film transistors,S582939,R145550,keywords,L407147,Transistors,"We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.",TRUE,noun
,chemical engineering,R178365,Disorder–Order Transition—Improving the Moisture Sensitivity of Waterborne Nanocomposite Barriers,S699639,R178366,Solvent,L470918,water,"Systematic studies on the influence of crystalline vs disordered nanocomposite structures on barrier properties and water vapor sensitivity are scarce as it is difficult to switch between the two morphologies without changing other critical parameters. By combining water-soluble poly(vinyl alcohol) (PVOH) and ultrahigh aspect ratio synthetic sodium fluorohectorite (Hec) as filler, we were able to fabricate nanocomposites from a single nematic aqueous suspension by slot die coating that, depending on the drying temperature, forms different desired morphologies. Increasing the drying temperature from 20 to 50 °C for the same formulation triggers phase segregation and disordered nanocomposites are obtained, while at room temperature, one-dimensional (1D) crystalline, intercalated hybrid Bragg Stacks form. The onset of swelling of the crystalline morphology is pushed to significantly higher relative humidity (RH). This disorder-order transition renders PVOH/Hec a promising barrier material at RH of up to 65%, which is relevant for food packaging. The oxygen permeability (OP) of the 1D crystalline PVOH/Hec is an order of magnitude lower compared to the OP of the disordered nanocomposite at this elevated RH (OP = 0.007 cm3 μm m-2 day-1 bar-1 cf. OP = 0.047 cm3 μm m-2 day-1 bar-1 at 23 °C and 65% RH).",TRUE,noun
R282,Agricultural and Resource Economics,R109321,Farm Households' Simultaneous Use of Sources to Access Information on Cotton Crop Production,S499045,R109323,data collection,L361133, Field survey,"ABSTRACT This study has investigated farm households' simultaneous use of social networks, field extension, traditional media, and modern information and communication technologies (ICTs) to access information on cotton crop production. The study was based on a field survey, conducted in Punjab, Pakistan. Data were collected from 399 cotton farm households using the multistage sampling technique. Important combinations of information sources were found in terms of their simultaneous use to access information. The study also examined the factors influencing the use of various available information sources. A multivariate probit model was used considering the correlation among the use of social networks, field extension, traditional media, and modern ICTs. The findings indicated the importance of different socioeconomic and institutional factors affecting farm households' use of available information sources on cotton production. Important policy conclusions are drawn based on findings.",TRUE,noun phrase
R282,Agricultural and Resource Economics,R109321,Farm Households' Simultaneous Use of Sources to Access Information on Cotton Crop Production,S498878,R109323,Econometric model,L361007,Multivariate probit model,"ABSTRACT This study has investigated farm households' simultaneous use of social networks, field extension, traditional media, and modern information and communication technologies (ICTs) to access information on cotton crop production. The study was based on a field survey, conducted in Punjab, Pakistan. Data were collected from 399 cotton farm households using the multistage sampling technique. Important combinations of information sources were found in terms of their simultaneous use to access information. The study also examined the factors influencing the use of various available information sources. A multivariate probit model was used considering the correlation among the use of social networks, field extension, traditional media, and modern ICTs. The findings indicated the importance of different socioeconomic and institutional factors affecting farm households' use of available information sources on cotton production. Important policy conclusions are drawn based on findings.",TRUE,noun phrase
R282,Agricultural and Resource Economics,R109335,Socio-economic Factors Affecting Adoption of Modern Information and Communication Technology by Farmers in India: Analysis Using Multivariate Probit Model,S498908,R109337,Econometric model,L361028,Multivariate probit model,"Abstract Purpose: The paper analyzes factors that affect the likelihood of adoption of different agriculture-related information sources by farmers. Design/Methodology/Approach: The paper links the theoretical understanding of the existing multiple sources of information that farmer use, with the empirical model to analyze the factors that affect the farmer's adoption of different agriculture-related information sources. The analysis is done using a multivariate probit model and primary survey data of 1,200 farmer households of five Indo-Gangetic states of India, covering 120 villages. Findings: The results of the study highlight that farmer's age, education level and farm size influence farmer's behaviour in selecting different sources of information. The results show that farmers use multiple information sources, that may be complementary or substitutes to each other and this also implies that any single source does not satisfy all information needs of the farmer. Practical implication: If we understand the likelihood of farmer's choice of source of information then direction can be provided and policies can be developed to provide information through those sources in targeted regions with the most effective impact. Originality/Value: Information plays a key role in a farmer's life by enhancing their knowledge and strengthening their decision-making ability. Farmers use multiple sources of information as no one source is sufficient in itself.",TRUE,noun phrase
R282,Agricultural and Resource Economics,R109340,Factors Influencing the Selection of Precision Farming Information Sources by Cotton Producers,S498951,R109342,Econometric model,L361062,Multivariate probit regression,"Precision farming information demanded by cotton producers is provided by various suppliers, including consultants, farm input dealerships, University Extension systems, and media sources. Factors associated with the decisions to select among information sources to search for precision farming information are analyzed using a multivariate probit regression accounting for correlation among the different selection decisions. Factors influencing these decisions are age, education, and income. These findings should be valuable to precision farming information providers who may be able to better meet their target clientele needs.",TRUE,noun phrase
R123,Analytical Chemistry,R139343,Exfoliated black phosphorus gas sensing properties at room temperature,S560897,R140503,Sensing material,R140450,Black phosphorus,"Room temperature gas sensing properties of chemically exfoliated black phosphorus (BP) to oxidizing (NO2, CO2) and reducing (NH3, H2, CO) gases in a dry air carrier have been reported. To study the gas sensing properties of BP, chemically exfoliated BP flakes have been drop casted on Si3N4 substrates provided with Pt comb-type interdigitated electrodes in N2 atmosphere. Scanning electron microscopy and x-ray photoelectron spectroscopy characterizations show respectively the occurrence of a mixed structure, composed of BP coarse aggregates dispersed on BP exfoliated few layer flakes bridging the electrodes, and a clear 2p doublet belonging to BP, which excludes the occurrence of surface oxidation. Room temperature electrical tests in dry air show a p-type response of multilayer BP with measured detection limits of 20 ppb and 10 ppm to NO2 and NH3 respectively. No response to CO and CO2 has been detected, while a slight but steady sensitivity to H2 has been recorded. The reported results confirm, on an experimental basis, what was previously theoretically predicted, demonstrating the promising sensing properties of exfoliated BP.",TRUE,noun phrase
R123,Analytical Chemistry,R140498,Black Phosphorus Gas Sensors,S560885,R140500,Sensing material,R140450,Black phosphorus,"The utilization of black phosphorus and its monolayer (phosphorene) and few-layers in field-effect transistors has attracted a lot of attention to this elemental two-dimensional material. Various studies on optimization of black phosphorus field-effect transistors, PN junctions, photodetectors, and other applications have been demonstrated. Although chemical sensing based on black phosphorus devices was theoretically predicted, there is still no experimental verification of such an important study of this material. In this article, we report on chemical sensing of nitrogen dioxide (NO2) using field-effect transistors based on multilayer black phosphorus. Black phosphorus sensors exhibited increased conduction upon NO2 exposure and excellent sensitivity for detection of NO2 down to 5 ppb. Moreover, when the multilayer black phosphorus field-effect transistor was exposed to NO2 concentrations of 5, 10, 20, and 40 ppb, its relative conduction change followed the Langmuir isotherm for molecules adsorbed on a surface. Additionally, on the basis of an exponential conductance change, the rate constants for adsorption and desorption of NO2 on black phosphorus were extracted for different NO2 concentrations, and they were in the range of 130-840 s. These results shed light on important electronic and sensing characteristics of black phosphorus, which can be utilized in future studies and applications.",TRUE,noun phrase
R123,Analytical Chemistry,R110770,Integration of Molecular Networking and In-Silico MS/MS Fragmentation for Natural Products Dereplication,S504756,R110772,Material,R110773,complex biological matrices,"Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.",TRUE,noun phrase
R123,Analytical Chemistry,R110770,Integration of Molecular Networking and In-Silico MS/MS Fragmentation for Natural Products Dereplication,S504759,R110772,Material,R110776,complex NPs extracts,"Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.",TRUE,noun phrase
R123,Analytical Chemistry,R110770,Integration of Molecular Networking and In-Silico MS/MS Fragmentation for Natural Products Dereplication,S504757,R110772,Material,R110774,dedicated fragmentation databases,"Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.",TRUE,noun phrase
R123,Analytical Chemistry,R110770,Integration of Molecular Networking and In-Silico MS/MS Fragmentation for Natural Products Dereplication,S504758,R110772,Material,R110775,extensive in-silico MS/MS fragmentation database,"Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.",TRUE,noun phrase
R123,Analytical Chemistry,R140522,Highly sensitive MoTe2 chemical sensor with fast recovery rate through gate biasing,S561040,R140524,Sensing material,R140525,Molybdenum ditelluride,"The unique properties of two dimensional (2D) materials make them promising candidates for chemical and biological sensing applications. However, most 2D nanomaterial sensors suffer very long recovery time due to slow molecular desorption at room temperature. Here, we report a highly sensitive molybdenum ditelluride (MoTe2) gas sensor for NO2 and NH3 detection with greatly enhanced recovery rate. The effects of gate bias on sensing performance have been systematically studied. It is found that the recovery kinetics can be effectively adjusted by biasing the sensor to different gate voltages. Under the optimum biasing potential, the MoTe2 sensor can achieve more than 90% recovery after each sensing cycle well within 10 min at room temperature. The results demonstrate the potential of MoTe2 as a promising candidate for high-performance chemical sensors. The idea of exploiting gate bias to adjust molecular desorption kinetics can be readily applied to much wider sensing platforms based on 2D nanomaterials.",TRUE,noun phrase
R123,Analytical Chemistry,R140498,Black Phosphorus Gas Sensors,S560883,R140500,Analyte,R140501,Nitrogen dioxide,"The utilization of black phosphorus and its monolayer (phosphorene) and few-layers in field-effect transistors has attracted a lot of attention to this elemental two-dimensional material. Various studies on optimization of black phosphorus field-effect transistors, PN junctions, photodetectors, and other applications have been demonstrated. Although chemical sensing based on black phosphorus devices was theoretically predicted, there is still no experimental verification of such an important study of this material. In this article, we report on chemical sensing of nitrogen dioxide (NO2) using field-effect transistors based on multilayer black phosphorus. Black phosphorus sensors exhibited increased conduction upon NO2 exposure and excellent sensitivity for detection of NO2 down to 5 ppb. Moreover, when the multilayer black phosphorus field-effect transistor was exposed to NO2 concentrations of 5, 10, 20, and 40 ppb, its relative conduction change followed the Langmuir isotherm for molecules adsorbed on a surface. Additionally, on the basis of an exponential conductance change, the rate constants for adsorption and desorption of NO2 on black phosphorus were extracted for different NO2 concentrations, and they were in the range of 130-840 s. These results shed light on important electronic and sensing characteristics of black phosphorus, which can be utilized in future studies and applications.",TRUE,noun phrase
R123,Analytical Chemistry,R140535,Physisorption-Based Charge Transfer in Two-Dimensional SnS2 for Selective and Reversible NO2 Gas Sensing,S561106,R140537,Analyte,R140501,Nitrogen dioxide,"Nitrogen dioxide (NO2) is a gas species that plays an important role in certain industrial, farming, and healthcare sectors. However, there are still significant challenges for NO2 sensing at low detection limits, especially in the presence of other interfering gases. The NO2 selectivity of current gas-sensing technologies is significantly traded-off with their sensitivity and reversibility as well as fabrication and operating costs. In this work, we present an important progress for selective and reversible NO2 sensing by demonstrating an economical sensing platform based on the charge transfer between physisorbed NO2 gas molecules and two-dimensional (2D) tin disulfide (SnS2) flakes at low operating temperatures. The device shows high sensitivity and superior selectivity to NO2 at operating temperatures of less than 160 °C, which are well below those of chemisorptive and ion conductive NO2 sensors with much poorer selectivity. At the same time, excellent reversibility of the sensor is demonstrated, which has rarely been observed in other 2D material counterparts. Such impressive features originate from the planar morphology of 2D SnS2 as well as unique physical affinity and favorable electronic band positions of this material that facilitate the NO2 physisorption and charge transfer at parts per billion levels. The 2D SnS2-based sensor provides a real solution for low-cost and selective NO2 gas sensing.",TRUE,noun phrase
R123,Analytical Chemistry,R140535,Physisorption-Based Charge Transfer in Two-Dimensional SnS2 for Selective and Reversible NO2 Gas Sensing,S561110,R140537,Sensing material,R140538,Tin disulfide,"Nitrogen dioxide (NO2) is a gas species that plays an important role in certain industrial, farming, and healthcare sectors. However, there are still significant challenges for NO2 sensing at low detection limits, especially in the presence of other interfering gases. The NO2 selectivity of current gas-sensing technologies is significantly traded-off with their sensitivity and reversibility as well as fabrication and operating costs. In this work, we present an important progress for selective and reversible NO2 sensing by demonstrating an economical sensing platform based on the charge transfer between physisorbed NO2 gas molecules and two-dimensional (2D) tin disulfide (SnS2) flakes at low operating temperatures. The device shows high sensitivity and superior selectivity to NO2 at operating temperatures of less than 160 °C, which are well below those of chemisorptive and ion conductive NO2 sensors with much poorer selectivity. At the same time, excellent reversibility of the sensor is demonstrated, which has rarely been observed in other 2D material counterparts. Such impressive features originate from the planar morphology of 2D SnS2 as well as unique physical affinity and favorable electronic band positions of this material that facilitate the NO2 physisorption and charge transfer at parts per billion levels. The 2D SnS2-based sensor provides a real solution for low-cost and selective NO2 gas sensing.",TRUE,noun phrase
R114008,Applied Physics,R137444,Mechanisms of bacterial inactivation in the liquid phase induced by a remote RF cold atmospheric pressure plasma jet,S543886,R137446,Intended_Application,L383002,Bacterial inactivation,"A radio-frequency atmospheric pressure argon plasma jet is used for the inactivation of bacteria (Pseudomonas aeruginosa) in solutions. The source is characterized by measurements of power dissipation, gas temperature, absolute UV irradiance as well as mass spectrometry measurements of emitted ions. The plasma-induced liquid chemistry is studied by performing liquid ion chromatography and hydrogen peroxide concentration measurements on treated distilled water samples. Additionally, a quantitative estimation of an extensive liquid chemistry induced by the plasma is made by solution kinetics calculations. The role of the different active components of the plasma is evaluated based on either measurements, as mentioned above, or estimations based on published data of measurements of those components. For the experimental conditions being considered in this work, it is shown that the bactericidal effect can be solely ascribed to plasma-induced liquid chemistry, leading to the production of stable and transient chemical species. It is shown that HNO2, ONOO − and H2O2 are present in the liquid phase in similar quantities to concentrations which are reported in the literature to cause bacterial inactivation. The importance of plasma-induced chemistry at the gas‐liquid interface is illustrated and discussed in detail. (Some figures may appear in colour only in the online journal)",TRUE,noun phrase
R114008,Applied Physics,R162104,Single-shot soft-x-ray digital holographic microscopy with an adjustable field of view and magnification,S646945,R162106,Research objective,L441531,Holographic microscopy,"Single-shot digital holographic microscopy with an adjustable field of view and magnification was demonstrated by using a tabletop 32.8 nm soft-x-ray laser. The holographic images were reconstructed with a two-dimensional fast-Fourier-transform algorithm, and a new configuration of imaging was developed to overcome the pixel-size limit of the recording device without reducing the effective NA. The image of an atomic-force-microscope cantilever was reconstructed with a lateral resolution of 480 nm, and the phase contrast image of a 20 nm carbon mesh foil demonstrated that profiles of sample thickness can be reconstructed with few-nanometers uncertainty. The ultrashort x-ray pulse duration combined with single-shot capability offers great advantage for flash imaging of delicate samples.",TRUE,noun phrase
R133,Artificial Intelligence,R153391,Neuro-Symbolic Probabilistic Argumentation Machines,S648897,R162654,has Input,R162640, an argumentation graph,"Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.",TRUE,noun phrase
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5333,R4863,users,R4868,academic publishers,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329556,R69419,Material,R69449,added components,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S703011,R181002,data source,R181007,American recipe site,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329545,R69419,Process,R69438,automatic identification,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329499,R69391,Data,R69398,available information,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S703014,R181002,Data,R181010,"categories, ingredients and cooking directions","Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,noun phrase
R133,Artificial Intelligence,R139297,What's going on in my city?: recommender systems and electronic participatory budgeting,S555382,R139299,has Recommended items,R138241,Citizen proposals,"In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City-, we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, call for further research in the area.",TRUE,noun phrase
R133,Artificial Intelligence,R76724,Privacy-aware image classification and search,S350368,R76726,Has evaluation,R76729,Classification experiments,"Modern content sharing environments such as Flickr or YouTube contain a large amount of private resources such as photos showing weddings, family holidays, and private parties. These resources can be of a highly sensitive nature, disclosing many details of the users' private sphere. In order to support users in making privacy decisions in the context of image sharing and to provide them with a better overview on privacy related visual content available on the Web, we propose techniques to automatically detect private images, and to enable privacy-oriented image search. To this end, we learn privacy classifiers trained on a large set of manually assessed Flickr photos, combining textual metadata of images with a variety of visual features. We employ the resulting classification models for specifically searching for private photos, and for diversifying query results to provide users with a better coverage of private and public content. Large-scale classification experiments reveal insights into the predictive performance of different visual and textual features, and a user evaluation of query result rankings demonstrates the viability of our approach.",TRUE,noun phrase
R133,Artificial Intelligence,R69633,Learning heterogeneous knowledge base embeddings for explainable recommendation,S330800,R69634,Machine Learning Method,R69632,Collaborative Filtering,"Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms—especially the collaborative filtering (CF)- based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users’ historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. 
Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329500,R69391,Data,R69399,contemporary application-oriented fine-grained aspects,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329542,R69419,Method,R69435,conventional multi-class classification,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S349186,R75787,description,L249482,Counterfactual recognition in natural language,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,noun phrase
R133,Artificial Intelligence,R186158,Neuro-Symbolic AI: An Emerging Class of AI Workloads and their Characterization,S711866,R186160,Workload taxonomy,R186216,Data Movement,"Neuro-symbolic artificial intelligence is a novel area of AI research which seeks to combine traditional rules-based AI approaches with modern deep learning techniques. Neurosymbolic models have already demonstrated the capability to outperform state-of-the-art deep learning models in domains such as image and video reasoning. They have also been shown to obtain high accuracy with significantly less training data than traditional models. Due to the recency of the field’s emergence and relative sparsity of published results, the performance characteristics of these models are not well understood. In this paper, we describe and analyze the performance characteristics of three recent neuro-symbolic models. We find that symbolic models have less potential parallelism than traditional neural models due to complex control flow and low-operational-intensity operations, such as scalar multiplication and tensor addition. However, the neural aspect of computation dominates the symbolic part in cases where they are clearly separable. We also find that data movement poses a potential bottleneck, as it does in many ML workloads.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329551,R69419,Material,R69444,data set,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329533,R69419,Data,R69426,different scores,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329537,R69419,Data,R69430,different sentiment classes,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R6653,Context-based Multi-Document Summarization using Fuzzy Coreference Cluster Graphs,S8577,R6654,implementation,R6656,ERSS summarizer,"Constructing focused, context-based multi-document summaries requires an analysis of the context questions, as well as their corresponding document sets. We present a fuzzy cluster graph algorithm that finds entities and their connections between context and documents based on fuzzy coreference chains and describe the design and implementation of the ERSS summarizer implementing these ideas.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329529,R69419,Data,R69422,exact sentiment,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329531,R69419,Data,R69424,existing sentiments,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329510,R69391,Material,R69409,feature maps,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329515,R69391,Material,R69414,figurative literary devices,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R182107,Multi-Task Learning for Calorie Prediction on a Novel Large-Scale Recipe Dataset Enriched with Nutritional Information,S704457,R182111,data source,R182113,Food item database,"A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.",TRUE,noun phrase
R133,Artificial Intelligence,R182238,"Food Recognition: A New Dataset, Experiments, and Results",S704942,R182240,Task,R182222,Food recognition,"We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.",TRUE,noun phrase
R133,Artificial Intelligence,R182290,PFID: Pittsburgh fast-food image dataset,S705172,R182292,Task,R182294,Food recognition,"We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-preserving videos of eating events of volunteers. This work was motivated by research on fast food recognition for dietary assessment. The data was collected by obtaining three instances of 101 foods from 11 popular fast food chains, and capturing images and videos in both restaurant conditions and a controlled lab setting. We benchmark the dataset using two standard approaches, color histogram and bag of SIFT features in conjunction with a discriminative classifier. Our dataset and the benchmarks are designed to stimulate research in this area and will be released freely to the research community.",TRUE,noun phrase
R133,Artificial Intelligence,R182336,A Food Recognition System for Diabetic Patients Based on an Optimized Bag-of-Features Model,S705335,R182338,Task,R182305,Food recognition,"Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10000 visual words by using the hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R182352,Real-Time Mobile Food Recognition System,S705385,R182354,Task,R182305,Food recognition,"We propose a mobile food recognition system the poses of which are estimating calorie and nutritious of foods and recording a user's eating habits. Since all the processes on image recognition performed on a smart-phone, the system does not need to send images to a server and runs on an ordinary smartphone in a real-time way. To recognize food items, a user draws bounding boxes by touching the screen first, and then the system starts food item recognition within the indicated bounding boxes. To recognize them more accurately, we segment each food item region by GrubCut, extract a color histogram and SURF-based bag-of-features, and finally classify it into one of the fifty food categories with linear SVM and fast 2 kernel. In addition, the system estimates the direction of food regions where the higher SVM output score is expected to be obtained, show it as an arrow on the screen in order to ask a user to move a smartphone camera. This recognition process is performed repeatedly about once a second. We implemented this system as an Android smartphone application so as to use multiple CPU cores effectively for real-time recognition. In the experiments, we have achieved the 81.55% classification rate for the top 5 category candidates when the ground-truth bounding boxes are given. In addition, we obtained positive evaluation by user study compared to the food recording system without object recognition.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329535,R69419,Data,R69428,highest scores,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R146694,A Robust and Real-Time Capable Envelope-Based Algorithm for Heart Sound Classification: Validation under Different Physiological Conditions,S587339,R146698,uses,L409120,Hilbert transform,"This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score up to 95.7% and in average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329539,R69419,Data,R69432,human annotation,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R182107,Multi-Task Learning for Calorie Prediction on a Novel Large-Scale Recipe Dataset Enriched with Nutritional Information,S704462,R182111,Result,R182117,Ingredient classification,"A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.",TRUE,noun phrase
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5332,R4863,users,R4867,institutional funding bodies,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,noun phrase
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S703012,R181002,data source,R181008,Japanese recipe sites,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,noun phrase
R133,Artificial Intelligence,R69621,Interaction Embeddings for Prediction and Explanation in Knowledge Graphs,S330727,R69622,Machine Learning Method,R69603,Knowledge Graph Embedding,"Knowledge graph embedding aims to learn distributed representations for entities and relations, and is proven to be effective in many applications. Crossover interactions -- bi-directional effects between entities and relations --- help select related information when predicting a new triple, but haven't been formally discussed before. In this paper, we propose CrossE, a novel knowledge graph embedding which explicitly simulates crossover interactions. It not only learns one general embedding for each entity and relation as most previous methods do, but also generates multiple triple specific embeddings for both of them, named interaction embeddings. We evaluate embeddings on typical link prediction tasks and find that CrossE achieves state-of-the-art results on complex and more challenging datasets. Furthermore, we evaluate embeddings from a new perspective -- giving explanations for predicted triples, which is important for real applications. In this work, an explanation for a triple is regarded as a reliable closed-path between the head and the tail entity. Compared to other baselines, we show experimentally that CrossE, benefiting from interaction embeddings, is more capable of generating reliable explanations to support its predictions.",TRUE,noun phrase
R133,Artificial Intelligence,R70188,Template-based Question Answering using Recursive Neural Networks,S333428,R70189,Material,R70190,LC-QuAD dataset,"Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.",TRUE,noun phrase
R133,Artificial Intelligence,R140398,Learning ontology from relational database,S560386,R140400,Learning method,R140402,Learning rule,"Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329493,R69391,Data,R69392,"lexical, syntactic, or pragmatic features","A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R74378,A Relational Learning Approach for Collective Entity Resolution in the Web of Data,S341385,R74380,Material,R74386,Linked Data Cloud,"The integration of different datasets in the Linked Data Cloud is a key aspect to the success of the Web of Data. To tackle this problem most of existent solutions have been supported by the task of entity resolution. However, many challenges still prevail specially when considering different types, structures and vocabularies used in the Web. Another common problem is that data usually are incomplete, inconsistent and contain outliers. To overcome these limitations, some works have applied machine learning algorithms since they are typically robust to both noise and data inconsistencies and are able to efficiently utilize nondeterministic dependencies in the data. In this paper we propose an approach based in a relational learning algorithm that addresses the problem by statistical approximation method. Modeling the problem as a relational machine learning task allows exploit contextual information that might be too distant in the relational graph. The joint application of relationship patterns between entities and evidences of similarity between their descriptions can improve the effectiveness of results. Furthermore, it is based on a sparse structure that scales well to large datasets. We present initial experiments based on BTC2012 datasets.",TRUE,noun phrase
R133,Artificial Intelligence,R74535,Towards Exploring Literals to Enrich Data Linking in Knowledge Graphs,S342592,R74537,Material,R74539,literals from subjects and predicates,"Knowledge graph completion is still a challenging solution that uses techniques from distinct areas to solve many different tasks. Most recent works, which are based on embedding models, were conceived to improve an existing knowledge graph using the link prediction task. However, even considering the ability of these solutions to solve other tasks, they did not present results for data linking, for example. Furthermore, most of these works focuses only on structural information, i.e., the relations between entities. In this paper, we present an approach for data linking that enrich entity embeddings in a model with their literal information and that do not rely on external information of these entities. The key aspect of this proposal is that we use a blocking scheme to improve the effectiveness of the solution in relation to the use of literals. Thus, in addition to the literals from object elements in a triple, we use other literals from subjects and predicates. By merging entity embeddings with their literal information it is possible to extend many popular embedding models. Preliminary experiments were performed on real-world datasets and our solution showed competitive results to the performance of the task of data linking.",TRUE,noun phrase
R133,Artificial Intelligence,R41079,Speech Recognition Using Deep Neural Networks: A Systematic Review,S130283,R41082,Method,R41097,Machine learning,"Over the past decades, a tremendous amount of research has been done on the use of machine learning for speech processing applications, especially speech recognition. However, in the past few years, research has focused on utilizing deep learning for speech-related applications. This new area of machine learning has yielded far better results when compared to others in a variety of applications including speech, and thus became a very attractive area of research. This paper provides a thorough examination of the different studies that have been conducted since 2006, when deep learning first arose as a new area of machine learning, for speech applications. A thorough statistical analysis is provided in this review which was conducted by extracting specific information from 174 papers published between the years 2006 and 2018. The results provided in this paper shed light on the trends of research in this area as well as bring focus to new research topics.",TRUE,noun phrase
R133,Artificial Intelligence,R142180,Detection and Diagnosis of Breast Cancer Using Artificial Intelligence Based Assessment of Maximum Intensity Projection Dynamic Contrast-Enhanced Magnetic Resonance Images,S571306,R142184,Imaging modality ,L400978,Magnetic resonance imaging,"We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions of maximum intensity projection (MIP) in dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI for training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect and calculate possibilities of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data with and without the assistance of the AI system for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI using a cutoff value of 2%, respectively. The AI system showed better diagnostic performance compared to the human readers (p = 0.002), and because of the increased performance of human readers with the assistance of the AI system, the AUC of human readers was significantly higher with than without the AI system (p = 0.039). Our AI system showed a high performance ability in detecting and diagnosing lesions in MIPs of DCE breast MRI and increased the diagnostic performance of human readers.",TRUE,noun phrase
R133,Artificial Intelligence,R146699,A Markov-Switching Model Approach to Heart Sound Segmentation and Classification,S587373,R146704,uses,L409138,Markov-switching autoregressive (MSAR),"Objective: We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events. Existing state-of-the-art solutions to heart sound segmentation use probabilistic models such as hidden Markov models (HMMs), which, however, are limited by its observation independence assumption and rely on pre-extraction of noise-robust features. Methods: We propose a Markov-switching autoregressive (MSAR) process to model the raw heart sound signals directly, which allows efficient segmentation of the cyclical heart sound states according to the distinct dependence structure in each state. To enhance robustness, we extend the MSAR model to a switching linear dynamic system (SLDS) that jointly model both the switching AR dynamics of underlying heart sound signals and the noise effects. We introduce a novel algorithm via fusion of switching Kalman filter and the duration-dependent Viterbi algorithm, which incorporates the duration of heart sound states to improve state decoding. Results: Evaluated on Physionet/CinC Challenge 2016 dataset, the proposed MSAR-SLDS approach significantly outperforms the hidden semi-Markov model (HSMM) in heart sound segmentation based on raw signals and comparable to a feature-based HSMM. The segmented labels were then used to train Gaussian-mixture HMM classifier for identification of abnormal beats, achieving high average precision of 86.1% on the same dataset including very noisy recordings. Conclusion: The proposed approach shows noticeable performance in heart sound segmentation and classification on a large noisy dataset. Significance: It is potentially useful in developing automated heart monitoring systems for pre-screening of heart pathologies.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329547,R69419,Material,R69440,media platforms,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329541,R69419,Method,R69434,Multi-class sentiment analysis,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S703007,R181002,Machine Learning Method,R181003,multi-task CNN,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,noun phrase
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S705686,R182411,Machine Learning Method,R182104,Multi-task CNN,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,noun phrase
R133,Artificial Intelligence,R182107,Multi-Task Learning for Calorie Prediction on a Novel Large-Scale Recipe Dataset Enriched with Nutritional Information,S704586,R182156,Machine Learning Method,R119424,Multi-Task Learning,"A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329514,R69391,Material,R69413,natural language,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329555,R69419,Material,R69448,necessary components,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R151347,Conversational Neuro-Symbolic Commonsense Reasoning,S631590,R157540,Has method,R157544,neuro-symbolic theorem prover,"One aspect of human commonsense reasoning is the ability to make presumptions about daily experiences, activities and social interactions with others. We propose a new commonsense reasoning benchmark where the task is to uncover commonsense presumptions implied by imprecisely stated natural language commands in the form of if-then-because statements. For example, in the command ""If it snows at night then wake me up early because I don't want to be late for work"" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that it must snow enough to cause traffic slowdowns. Such if-then-because commands are particularly important when users instruct conversational agents. We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We develop a neuro-symbolic theorem prover that extracts multi-hop reasoning chains and apply it to this problem. We further develop an interactive conversational framework that evokes commonsense knowledge from humans for completing reasoning chains.",TRUE,noun phrase
R133,Artificial Intelligence,R139297,What's going on in my city?: recommender systems and electronic participatory budgeting,S556761,R139299,has been evaluated in the City,R138190,New York City,"In this paper, we present electronic participatory budgeting (ePB) as a novel application domain for recommender systems. On public data from the ePB platforms of three major US cities - Cambridge, Miami and New York City-, we evaluate various methods that exploit heterogeneous sources and models of user preferences to provide personalized recommendations of citizen proposals. We show that depending on characteristics of the cities and their participatory processes, particular methods are more effective than others for each city. This result, together with open issues identified in the paper, call for further research in the area.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329512,R69391,Material,R69411,novel sAtt-BLSTM convNet model,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R38225,Joint Extraction of Entities and Relations Based on a Novel Tagging Scheme,S125646,R38227,Has method,R38228,Novel tagging scheme,"Joint extraction of entities and relations is an important task in information extraction. To tackle this problem, we firstly propose a novel tagging scheme that can convert the joint extraction task to a tagging problem. Then, based on our tagging scheme, we study different end-to-end models to extract entities and their relations directly, without identifying entities and relations separately. We conduct experiments on a public dataset produced by distant supervision method and the experimental results show that the tagging based methods are better than most of the existing pipelined and joint learning methods. What's more, the end-to-end model proposed in this paper, achieves the best results on the public dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5335,R4863,method,R4870,number of publications,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329552,R69419,Material,R69445,online post,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329546,R69419,Material,R69439,online social media content,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555903,R138461,Learning purpose,R139412,Ontology enrichment,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,noun phrase
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555902,R138461,Learning purpose,R139411,Ontology population,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329530,R69419,Data,R69423,overall sentiment polarity,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R139907,Automatic transforming XML documents into OWL Ontology,S560432,R139908,Output format,R140418,OWL individual,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329527,R69419,Data,R69420,people’s behavior,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R182107,Multi-Task Learning for Calorie Prediction on a Novel Large-Scale Recipe Dataset Enriched with Nutritional Information,S704459,R182111,Result,R182114,Prediction of proteins,"A rapidly growing amount of content posted online, such as food recipes, opens doors to new exciting applications at the intersection of vision and language. In this work, we aim to estimate the calorie amount of a meal directly from an image by learning from recipes people have published on the Internet, thus skipping time-consuming manual data annotation. Since there are few large-scale publicly available datasets captured in unconstrained environments, we propose the pic2kcal benchmark comprising 308 000 images from over 70 000 recipes including photographs, ingredients, and instructions. To obtain nutritional information of the ingredients and automatically determine the ground-truth calorie value, we match the items in the recipes with structured information from a food item database. We evaluate various neural networks for regression of the calorie quantity and extend them with the multi-task paradigm. Our learning procedure combines the calorie estimation with prediction of proteins, carbohydrates, and fat amounts as well as a multi-label ingredient classification. Our experiments demonstrate clear benefits of multi-task learning for calorie estimation, surpassing the single-task calorie regression by 9.9%. To encourage further research on this task, we make the code for generating the dataset and the models publicly available.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329554,R69419,Material,R69447,previously introduced tool,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R153391,Neuro-Symbolic Probabilistic Argumentation Machines,S648895,R162654,Material,R162657,probabilistic semi-abstract argumentation,"Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329505,R69391,Method,R69404,proposed deep neural model,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329494,R69391,Data,R69393,punctuation-based auxiliary features,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329517,R69391,Material,R69416,random-tweet dataset,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329544,R69419,Process,R69437,rapid growth,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R70188,Template-based Question Answering using Recursive Neural Networks,S333430,R70189,Material,R70192,recursive neural network,"Most question answering (QA) systems over Linked Data, i.e. Knowledge Graphs, approach the question answering task as a conversion from a natural language question to its corresponding SPARQL query. A common approach is to use query templates to generate SPARQL queries with slots that need to be filled. Using templates instead of running an extensive NLP pipeline or end-to-end model shifts the QA problem into a classification task, where the system needs to match the input question to the appropriate template. This paper presents an approach to automatically learn and classify natural language questions into corresponding templates using recursive neural networks. Our model was trained on 5000 questions and their respective SPARQL queries from the preexisting LC-QuAD dataset grounded in DBpedia, spanning 5042 entities and 615 predicates. The resulting model was evaluated using the FAIR GERBIL QA framework resulting in 0.419 macro f-measure on LC-QuAD and 0.417 macro f-measure on QALD-7.",TRUE,noun phrase
R133,Artificial Intelligence,R140398,Learning ontology from relational database,S560383,R140400,Input format,R140401,Relational data,"Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.",TRUE,noun phrase
R133,Artificial Intelligence,R153391,Neuro-Symbolic Probabilistic Argumentation Machines,S648894,R162654,Material,R162656,restricted boltzmann machines,"Neural-symbolic systems combine the strengths of neural networks and symbolic formalisms. In this paper, we introduce a neural-symbolic system which combines restricted Boltzmann machines and probabilistic semi-abstract argumentation. We propose to train networks on argument labellings explaining the data, so that any sampled data outcome is associated with an argument labelling. Argument labellings are integrated as constraints within restricted Boltzmann machines, so that the neural networks are used to learn probabilistic dependencies amongst argument labels. Given a dataset and an argumentation graph as prior knowledge, for every example/case K in the dataset, we use a so-called K-maxconsistent labelling of the graph, and an explanation of case K refers to a K-maxconsistent labelling of the given argumentation graph. The abilities of the proposed system to predict correct labellings were evaluated and compared with standard machine learning techniques. Experiments revealed that such argumentation Boltzmann machines can outperform other classification models, especially in noisy settings.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329504,R69391,Method,R69403,Sarcasm Detector tool,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329501,R69391,Data,R69400,sarcastic tone,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329502,R69391,Data,R69401,semantic word embeddings,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R146694,A Robust and Real-Time Capable Envelope-Based Algorithm for Heart Sound Classification: Validation under Different Physiological Conditions,S587340,R146698,uses,L409121,Short-time Fourier transform,"This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score up to 95.7% and in average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329532,R69419,Data,R69425,single sentiment label,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S704416,R182105,Machine Learning Method,R181004,Single-task CNN,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329513,R69391,Material,R69412,social media and social networks,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329548,R69419,Material,R69441,specific topics,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R146699,A Markov-Switching Model Approach to Heart Sound Segmentation and Classification,S587374,R146704,uses,L409139,Switching linear dynamic system (SLDS),"Objective: We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events. Existing state-of-the-art solutions to heart sound segmentation use probabilistic models such as hidden Markov models (HMMs), which, however, are limited by its observation independence assumption and rely on pre-extraction of noise-robust features. Methods: We propose a Markov-switching autoregressive (MSAR) process to model the raw heart sound signals directly, which allows efficient segmentation of the cyclical heart sound states according to the distinct dependence structure in each state. To enhance robustness, we extend the MSAR model to a switching linear dynamic system (SLDS) that jointly model both the switching AR dynamics of underlying heart sound signals and the noise effects. We introduce a novel algorithm via fusion of switching Kalman filter and the duration-dependent Viterbi algorithm, which incorporates the duration of heart sound states to improve state decoding. Results: Evaluated on Physionet/CinC Challenge 2016 dataset, the proposed MSAR-SLDS approach significantly outperforms the hidden semi-Markov model (HSMM) in heart sound segmentation based on raw signals and comparable to a feature-based HSMM. The segmented labels were then used to train Gaussian-mixture HMM classifier for identification of abnormal beats, achieving high average precision of 86.1% on the same dataset including very noisy recordings. Conclusion: The proposed approach shows noticeable performance in heart sound segmentation and classification on a large noisy dataset. Significance: It is potentially useful in developing automated heart monitoring systems for pre-screening of heart pathologies.",TRUE,noun phrase
R133,Artificial Intelligence,R69417,Multi-Class Sentiment Analysis in Twitter: What if Classification is Not the Answer,S329550,R69419,Material,R69443,text message or post,"With the rapid growth of online social media content, and the impact these have made on people’s behavior, many researchers have been interested in studying these media platforms. A major part of their work focused on sentiment analysis and opinion mining. These refer to the automatic identification of opinions of people toward specific topics by analyzing their posts and publications. Multi-class sentiment analysis, in particular, addresses the identification of the exact sentiment conveyed by the user rather than the overall sentiment polarity of his text message or post. That being the case, we introduce a task different from the conventional multi-class classification, which we run on a data set collected from Twitter. We refer to this task as “quantification.” By the term “quantification,” we mean the identification of all the existing sentiments within an online post (i.e., tweet) instead of attributing a single sentiment label to it. For this sake, we propose an approach that automatically attributes different scores to each sentiment in a tweet, and selects the sentiments with the highest scores which we judge as conveyed in the text. To reach this target, we added to our previously introduced tool SENTA the necessary components to run and perform such a task. Throughout this work, we present the added components; we study the feasibility of quantification, and propose an approach to perform it on a data set made of tweets for 11 different sentiment classes. The data set was manually labeled and the results of the automatic analysis were checked against the human annotation. Our experiments show the feasibility of this task and reach an F1 score equal to 45.9%.",TRUE,noun phrase
R133,Artificial Intelligence,R74367,A Blocking Scheme for Entity Resolution in the Semantic Web,S342578,R74369,Material,R5174,the Web of Data,"The amount and diversity of data in the Semantic Web has grown quite. RDF datasets has proportionally more problems than relational datasets due to the way data are published, usually without formal criteria. Entity Resolution is n important issue which is related to a known task of many research communities and it aims at finding all representations that refer to the same entity in different datasets. Yet, it is still an open problem. Blocking methods are used to avoid the quadratic complexity of the brute force approach by clustering entities into blocks and limiting the evaluation of entity specifications to entity pairs within blocks. In the last years only a few blocking methods were conceived to deal with RDF data and novel blocking techniques are required for dealing with noisy and heterogeneous data in the Web of Data. In this paper we present a blocking scheme, CER-Blocking, which is based on an inverted index structure and that uses different data evidences from a triple, aiming to maximize its effectiveness. To overcome the problems of data quality or even the very absence thereof, we use two blocking key definitions. This scheme is part of an ER approach which is based on a relational learning algorithm that addresses the problem by statistical approximation. It was empirically evaluated on real and synthetic datasets which are part of consolidated benchmarks found on the literature.",TRUE,noun phrase
R133,Artificial Intelligence,R74378,A Relational Learning Approach for Collective Entity Resolution in the Web of Data,S342580,R74380,Material,R5174,the Web of Data,"The integration of different datasets in the Linked Data Cloud is a key aspect to the success of the Web of Data. To tackle this problem most of existent solutions have been supported by the task of entity resolution. However, many challenges still prevail specially when considering different types, structures and vocabularies used in the Web. Another common problem is that data usually are incomplete, inconsistent and contain outliers. To overcome these limitations, some works have applied machine learning algorithms since they are typically robust to both noise and data inconsistencies and are able to efficiently utilize nondeterministic dependencies in the data. In this paper we propose an approach based in a relational learning algorithm that addresses the problem by statistical approximation method. Modeling the problem as a relational machine learning task allows exploit contextual information that might be too distant in the relational graph. The joint application of relationship patterns between entities and evidences of similarity between their descriptions can improve the effectiveness of results. Furthermore, it is based on a sparse structure that scales well to large datasets. We present initial experiments based on BTC2012 datasets.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329496,R69391,Data,R69395,training- and test-set accuracy metrics,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R76724,Privacy-aware image classification and search,S350367,R76726,Has evaluation,R76728,User evaluation,"Modern content sharing environments such as Flickr or YouTube contain a large amount of private resources such as photos showing weddings, family holidays, and private parties. These resources can be of a highly sensitive nature, disclosing many details of the users' private sphere. In order to support users in making privacy decisions in the context of image sharing and to provide them with a better overview on privacy related visual content available on the Web, we propose techniques to automatically detect private images, and to enable privacy-oriented image search. To this end, we learn privacy classifiers trained on a large set of manually assessed Flickr photos, combining textual metadata of images with a variety of visual features. We employ the resulting classification models for specifically searching for private photos, and for diversifying query results to provide users with a better coverage of private and public content. Large-scale classification experiments reveal insights into the predictive performance of different visual and textual features, and a user evaluation of query result rankings demonstrates the viability of our approach.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329506,R69391,Material,R69405,user-generated content,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329507,R69391,Material,R69406,"words, emoticons, and exclamation marks","A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. 
An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,noun phrase
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555913,R138461,Individual extraction/learning,R139366,XML document,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,noun phrase
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S556122,R139423,Individual extraction/learning,R139366,XML document,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,noun phrase
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555897,R138461,Input format,R139366,XML document,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,noun phrase
R133,Artificial Intelligence,R139421,DTD2OWL: automatic transforming XML documents into OWL ontology,S556002,R139423,Input format,R139366,XML document,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,noun phrase
R133,Artificial Intelligence,R138459,Transforming XML documents to OWL ontologies: A survey,S555926,R138461,RDF Graph,R139366,XML document,"The aims of XML data conversion to ontologies are the indexing, integration and enrichment of existing ontologies with knowledge acquired from these sources. The contribution of this paper consists in providing a classification of the approaches used for the conversion of XML documents into OWL ontologies. This classification underlines the usage profile of each conversion method, providing a clear description of the advantages and drawbacks belonging to each method. Hence, this paper focuses on two main processes, which are ontology enrichment and ontology population using XML data. Ontology enrichment is related to the schema of the ontology (TBox), and ontology population is related to an individual (Abox). In addition, the ontologies described in these methods are based on formal languages of the Semantic Web such as OWL (Ontology Web Language) or RDF (Resource Description Framework). These languages are formal because the semantics are formally defined and take advantage of the Description Logics. In contrast, XML data sources are without formal semantics. The XML language is used to store, export and share data between processes able to process the specific data structure. However, even if the semantics is not explicitly expressed, data structure contains the universe of discourse by using a qualified vocabulary regarding a consensual agreement. In order to formalize this semantics, the OWL language provides rich logical constraints. Therefore, these logical constraints are evolved in the transformation of XML documents into OWL documents, allowing the enrichment and the population of the target ontology. To design such a transformation, the current research field establishes connections between OWL constructs (classes, predicates, simple or complex data types, etc.) 
and XML constructs (elements, attributes, element lists, etc.). Two different approaches for the transformation process are exposed. The instance approaches are based on XML documents without any schema associated. The validation approaches are based on the XML schema and document validated by the associated schema. The second approaches benefit from the schema definition to provide automated transformations with logic constraints. Both approaches are discussed in the text.",TRUE,noun phrase
R133,Artificial Intelligence,R139907,Automatic transforming XML documents into OWL Ontology,S558508,R139908,Input format,R139917,XML instances,"DTD and its instance have been considered the standard for data representation and information exchange format on the current web. However, when coming to the next generation of web, the Semantic Web, the drawbacks of XML and its schema are appeared. They mainly focus on the structure level and lack support for data representation. Meanwhile, some Semantic Web applications such as intelligent information services and semantic search engines require not only the syntactic format of the data, but also the semantic content. These requirements are supported by the Web Ontology Language (OWL), which is one of the recent W3C recommendation. But nowadays the amount of data presented in OWL is small in compare with XML data. Therefore, finding a way to utilize the available XML documents for the Semantic Web is a current challenge research. In this work we present an effective solution for transforming XML document into OWL domain knowledge. While keeping the original structure, our work also adds more semantics for the XML document. Moreover, whole of the transformation processes are done automatically without any outside intervention. Further, unlike previous approaches which focus on the schema level, we also extend our methodology for the data level by transforming specific XML instances into OWL individuals. The results in existing OWL syntaxes help them to be loaded immediately by the Semantic Web applications.",TRUE,noun phrase
R133,Artificial Intelligence,R139897,Ontology enrichment and automatic population from XML data,S558501,R139898,Input format,R139365,XML schema,"This paper presents a flexible method to enrich and populate an existing OWL ontology from XML data. Basic mapping rules are defined in order to specify the conversion rules on properties. Advanced mapping rules are defined on XML schemas and OWL XML schema elements in order to define rules for the population process. In addition, this flexible method allows users to reuse rules for other conversions and populations.",TRUE,noun phrase
R133,Artificial Intelligence,R139899,Building ontologies from XML data sources,S558502,R139900,Input format,R139365,XML schema,"In this paper, we present a tool called X2OWL that aims at building an OWL ontology from an XML datasource. This method is based on XML schema to automatically generate the ontology structure, as well as, a set of mapping bridges. The presented method also includes a refinement step that allows to clean the mapping bridges and possibly to restructure the generated ontology.",TRUE,noun phrase
R133,Artificial Intelligence,R139901,Transforming XML schema to OWL using patterns,S558503,R139902,Input format,R139365,XML schema,"One of the promises of the Semantic Web is to support applications that easily and seamlessly deal with heterogeneous data. Most data on the Web, however, is in the Extensible Markup Language (XML) format, but using XML requires applications to understand the format of each data source that they access. To achieve the benefits of the Semantic Web involves transforming XML into the Semantic Web language, OWL (Ontology Web Language), a process that generally has manual or only semi-automatic components. In this paper we present a set of patterns that enable the direct, automatic transformation from XML Schema into OWL allowing the integration of much XML data in the Semantic Web. We focus on an advanced logical representation of XML Schema components and present an implementation, including a comparison with related work.",TRUE,noun phrase
R133,Artificial Intelligence,R139903,An efficient XML to OWL converter,S558505,R139904,Input format,R139365,XML schema,"XML has become the de-facto standard of data exchange format in E-businesses. Although XML can support syntactic inter-operability, problems arise when data sources represented as XML documents are needed to be integrated. The reason is that XML lacks support for efficient sharing of conceptualization. The Web Ontology Language (OWL) can play an important role here as it can enable semantic inter-operability, and it supports the representation of domain knowledge using classes, properties and instances for applications. In many applications it is required to convert huge XML documents automatically to OWL ontologies, which is receiving a lot of attention. There are some existing converters for this job. Unfortunately they have serious shortcomings, e. g., they do not address the handling of characteristics like internal references, (transitive) import(s), include etc. which are commonly used in XML Schemas. To alleviate these drawbacks, we propose a new framework for mapping XML to OWL automatically. We illustrate our technique on examples to show the efficacy of our approach. We also provide the performance measures of our approach on some standard datasets. We also check the correctness of the conversion process.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342949,R74654,Material,R74657,complex protein network,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342955,R74654,Data,R74663,critical biological and medical relevance,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342954,R74654,Material,R74662,Hsp90 family and cofactors themselves,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R109331,Potential inhibitors of coronavirus 3-chymotrypsin-like protease (3CLpro): an in silico screening of alkaloids and terpenoids from African medicinal plants,S498898,R109333,Method,L361021,in silico,"Abstract The novel coronavirus disease 2019 (COVID-19) caused by SARS-COV-2 has raised myriad of global concerns. There is currently no FDA approved antiviral strategy to alleviate the disease burden. The conserved 3-chymotrypsin-like protease (3CLpro), which controls coronavirus replication is a promising drug target for combating the coronavirus infection. This study screens some African plants derived alkaloids and terpenoids as potential inhibitors of coronavirus 3CLpro using in silico approach. Bioactive alkaloids (62) and terpenoids (100) of plants native to Africa were docked to the 3CLpro of the novel SARS-CoV-2. The top twenty alkaloids and terpenoids with high binding affinities to the SARS-CoV-2 3CLpro were further docked to the 3CLpro of SARS-CoV and MERS-CoV. The docking scores were compared with 3CLpro-referenced inhibitors (Lopinavir and Ritonavir). The top docked compounds were further subjected to ADEM/Tox and Lipinski filtering analyses for drug-likeness prediction analysis. This ligand-protein interaction study revealed that more than half of the top twenty alkaloids and terpenoids interacted favourably with the coronaviruses 3CLpro, and had binding affinities that surpassed that of lopinavir and ritonavir. Also, a highly defined hit-list of seven compounds (10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid) were identified. 
Furthermore, four non-toxic, druggable plant derived alkaloids (10-Hydroxyusambarensine, and Cryptoquindoline) and terpenoids (6-Oxoisoiguesterin and 22-Hydroxyhopan-3-one), that bind to the receptor-binding site and catalytic dyad of SARS-CoV-2 3CLpro were identified from the predictive ADME/tox and Lipinski filter analysis. However, further experimental analyses are required for developing these possible leads into natural anti-COVID-19 therapeutic agents for combating the pandemic. Communicated by Ramaswamy H. Sarma",TRUE,noun phrase
R14,Biochemistry,R109331,Potential inhibitors of coronavirus 3-chymotrypsin-like protease (3CLpro): an in silico screening of alkaloids and terpenoids from African medicinal plants,S498918,R109333,Standard drugs used,L361036,Lopinavir and Ritonavir,"Abstract The novel coronavirus disease 2019 (COVID-19) caused by SARS-COV-2 has raised myriad of global concerns. There is currently no FDA approved antiviral strategy to alleviate the disease burden. The conserved 3-chymotrypsin-like protease (3CLpro), which controls coronavirus replication is a promising drug target for combating the coronavirus infection. This study screens some African plants derived alkaloids and terpenoids as potential inhibitors of coronavirus 3CLpro using in silico approach. Bioactive alkaloids (62) and terpenoids (100) of plants native to Africa were docked to the 3CLpro of the novel SARS-CoV-2. The top twenty alkaloids and terpenoids with high binding affinities to the SARS-CoV-2 3CLpro were further docked to the 3CLpro of SARS-CoV and MERS-CoV. The docking scores were compared with 3CLpro-referenced inhibitors (Lopinavir and Ritonavir). The top docked compounds were further subjected to ADEM/Tox and Lipinski filtering analyses for drug-likeness prediction analysis. This ligand-protein interaction study revealed that more than half of the top twenty alkaloids and terpenoids interacted favourably with the coronaviruses 3CLpro, and had binding affinities that surpassed that of lopinavir and ritonavir. Also, a highly defined hit-list of seven compounds (10-Hydroxyusambarensine, Cryptoquindoline, 6-Oxoisoiguesterin, 22-Hydroxyhopan-3-one, Cryptospirolepine, Isoiguesterin and 20-Epibryonolic acid) were identified. 
Furthermore, four non-toxic, druggable plant derived alkaloids (10-Hydroxyusambarensine, and Cryptoquindoline) and terpenoids (6-Oxoisoiguesterin and 22-Hydroxyhopan-3-one), that bind to the receptor-binding site and catalytic dyad of SARS-CoV-2 3CLpro were identified from the predictive ADME/tox and Lipinski filter analysis. However, further experimental analyses are required for developing these possible leads into natural anti-COVID-19 therapeutic agents for combating the pandemic. Communicated by Ramaswamy H. Sarma",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342948,R74654,Material,R74656,molecular chaperone Hsp90-dependent proteome,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342959,R74654,Process,R74667,net de novo protein synthesis,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342958,R74654,Data,R74666,pro-survival and anti-proliferative functions,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342956,R74654,Data,R74664,protein decay rates,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342953,R74654,Material,R74661,protein families,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342952,R74654,Material,R74660,Several novel putative Hsp90 clients,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342957,R74654,Data,R74665,strongly increased decay rates,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. 
The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,noun phrase
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693787,R175178,Subject Label,R175182,Full spectrum,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. 
CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,noun phrase
R136156,Biogerontology and Geriatric Medicine,R175176,Respiratory Care Received by Individuals With Duchenne Muscular Dystrophy From 2000 to 2011,S693790,R175178,Subject Label,R146555,Respiratory therapist,"BACKGROUND: Duchenne muscular dystrophy (DMD) causes progressive respiratory muscle weakness and decline in function, which can go undetected without monitoring. DMD respiratory care guidelines recommend scheduled respiratory assessments and use of respiratory assist devices. To determine the extent of adherence to these guidelines, we evaluated respiratory assessments and interventions among males with DMD in the Muscular Dystrophy Surveillance, Tracking, and Research Network (MD STARnet) from 2000 to 2011. METHODS: MD STARnet is a population-based surveillance system that identifies all individuals born during or after 1982 residing in Arizona, Colorado, Georgia, Hawaii, Iowa, and western New York with Duchenne or Becker muscular dystrophy. We analyzed MD STARnet respiratory care data for non-ambulatory adolescent males (12–17 y old) and men (≥18 y old) with DMD, assessing whether: (1) pulmonary function was measured twice yearly; (2) awake and asleep hypoventilation testing was performed at least yearly; (3) home mechanical insufflation-exsufflation, noninvasive ventilation, and tracheostomy/ventilators were prescribed; and (4) pulmonologists provided evaluations. RESULTS: During 2000–2010, no more than 50% of both adolescents and men had their pulmonary function monitored twice yearly in any of the years; 67% or fewer were assessed for awake and sleep hypoventilation yearly. Although the use of mechanical insufflation-exsufflation and noninvasive ventilation is probably increasing, prior use of these devices did not prevent all tracheostomies, and at least 18 of 29 tracheostomies were performed due to acute respiratory illnesses. Fewer than 32% of adolescents and men had pulmonologist evaluations in 2010–2011. 
CONCLUSIONS: Since the 2004 publication of American Thoracic Society guidelines, there have been few changes in pulmonary clinical practice. Frequencies of respiratory assessments and assist device use among males with DMD were lower than recommended in clinical guidelines. Collaboration of respiratory therapists and pulmonologists with clinicians caring for individuals with DMD should be encouraged to ensure access to the full spectrum of in-patient and out-patient pulmonary interventions.",TRUE,noun phrase
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5645,R5119,Material,R5127,all university sites,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a largely manual and time-consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,noun phrase
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345254,R75376,Has result,R75388,attachment phase of the virus,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun phrase
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536112,R135550,Has experimental datasets,L378114,C-NMC-2019 ALL,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,noun phrase
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345243,R75376,Has evaluation,R75380,codon mutation,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun phrase
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345250,R75376,Has result,R75380,codon mutation,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun phrase
R104,Bioinformatics,R109012,Drug-Drug Interaction Prediction Based on Knowledge Graph Embeddings and Convolutional-LSTM Network,S496903,R109014,Has approach,R109015,Convolutional-LSTM Network,"Interference between pharmacological substances can cause serious medical injuries. Correctly predicting so-called drug-drug interactions (DDI) does not only reduce these cases but can also result in a reduction of drug development cost. Presently, most drug-related knowledge is the result of clinical evaluations and post-marketing surveillance; resulting in a limited amount of information. Existing data-driven prediction approaches for DDIs typically rely on a single source of information, while using information from multiple sources would help improve predictions. Machine learning (ML) techniques are used, but the techniques are often unable to deal with skewness in the data. Hence, we propose a new ML approach for predicting DDIs based on multiple data sources. For this task, we use 12,000 drug features from DrugBank, PharmGKB, and KEGG drugs, which are integrated using Knowledge Graphs (KGs). To train our prediction model, we first embed the nodes in the graph using various embedding approaches. We found that the best performing combination was a ComplEx embedding method created using PyTorch-BigGraph (PBG) with a Convolutional-LSTM network and classic machine learning-based prediction models. The model averaging ensemble method of three best classifiers yields up to 0.94, 0.92, 0.80 for AUPR, F1-score, and MCC, respectively during 5-fold cross-validation tests.",TRUE,noun phrase
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S535867,R135491,Used models,L377982,decision tree,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,noun phrase
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536115,R135550,Used models,L378116,Deep CNN,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,noun phrase
R104,Bioinformatics,R138951,Human Behaviour-Based Automatic Depression Analysis Using Hand-Crafted Statistics and Deep Learned Spectral Features,S552141,R138953,Aims,R138958,depression severity,"Depression is a serious mental disorder that affects millions of people all over the world. Traditional clinical diagnosis methods are subjective, complicated and need extensive participation of experts. Audio-visual automatic depression analysis systems predominantly base their predictions on very brief sequential segments, sometimes as little as one frame. Such data contains much redundant information, causes a high computational load, and negatively affects the detection accuracy. Final decision making at the sequence level is then based on the fusion of frame or segment level predictions. However, this approach loses longer term behavioural correlations, as the behaviours themselves are abstracted away by the frame-level predictions. We propose, on the one hand, to use automatically detected human behaviour primitives such as gaze directions, facial action units (AU), etc. as low-dimensional multi-channel time series data, which can then be used to create two sequence descriptors. The first calculates the sequence-level statistics of the behaviour primitives and the second casts the problem as a Convolutional Neural Network problem operating on a spectral representation of the multichannel behaviour signals. The results of depression detection (binary classification) and severity estimation (regression) experiments conducted on the AVEC 2016 DAIC-WOZ database show that both methods achieved significant improvement compared to the previous state of the art in terms of the depression severity estimation.",TRUE,noun phrase
R104,Bioinformatics,R138719,Deep Neural Generative Model of Functional MRI Images for Psychiatric Disorder Diagnosis,S551307,R138724,Aims,L387874,Diagnosis of psychiatric disorder,"Accurate diagnosis of psychiatric disorders plays a critical role in improving the quality of life for patients and potentially supports the development of new treatments. Many studies have been conducted on machine learning techniques that seek brain imaging data for specific biomarkers of disorders. These studies have encountered the following dilemma: A direct classification overfits to a small number of high-dimensional samples but unsupervised feature-extraction has the risk of extracting a signal of no interest. In addition, such studies often provided only diagnoses for patients without presenting the reasons for these diagnoses. This study proposed a deep neural generative model of resting-state functional magnetic resonance imaging (fMRI) data. The proposed model is conditioned by the assumption of the subject's state and estimates the posterior probability of the subject's state given the imaging data, using Bayes’ rule. This study applied the proposed model to diagnose schizophrenia and bipolar disorders. Diagnostic accuracy was improved by a large margin over competitive approaches, namely classifications of functional connectivity, discriminative/generative models of regionwise signals, and those with unsupervised feature-extractors. The proposed model visualizes brain regions largely related to the disorders, thus motivating further biological investigation.",TRUE,noun phrase
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345251,R75376,Has result,R75385,episodic selective pressure,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun phrase
R104,Bioinformatics,R168707,iDREM: Interactive visualization of dynamic regulatory networks,S669058,R168708,creates,R167053,interactive DREM,"The Dynamic Regulatory Events Miner (DREM) software reconstructs dynamic regulatory networks by integrating static protein-DNA interaction data with time series gene expression data. In recent years, several additional types of high-throughput time series data have been profiled when studying biological processes including time series miRNA expression, proteomics, epigenomics and single cell RNA-Seq. Combining all available time series and static datasets in a unified model remains an important challenge and goal. To address this challenge we have developed a new version of DREM termed interactive DREM (iDREM). iDREM provides support for all data types mentioned above and combines them with existing interaction data to reconstruct networks that can lead to novel hypotheses on the function and timing of regulators. Users can interactively visualize and query the resulting model. We showcase the functionality of the new tool by applying it to microglia developmental data from multiple labs.",TRUE,noun phrase
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S535866,R135491,Used models,L377981,k-nearest neighbor,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,noun phrase
R104,Bioinformatics,R108865,Evaluation of knowledge graph embedding approaches for drug-drug interaction prediction in realistic settings,S496881,R108868,Has approach,R69603,Knowledge Graph Embedding,"Abstract Background Current approaches to identifying drug-drug interactions (DDIs) include safety studies during drug development and post-marketing surveillance after approval; these offer important opportunities to identify potential safety issues, but are unable to provide a complete set of all possible DDIs. Thus, drug discovery researchers and healthcare professionals might not be fully aware of potentially dangerous DDIs. Predicting potential drug-drug interactions helps reduce unanticipated drug interactions and drug development costs and optimizes the drug design process. Methods for prediction of DDIs have the tendency to report high accuracy but still have little impact on translational research due to systematic biases induced by networked/paired data. In this work, we aimed to present realistic evaluation settings to predict DDIs using knowledge graph embeddings. We propose a simple disjoint cross-validation scheme to evaluate drug-drug interaction predictions for the scenarios where the drugs have no known DDIs. Results We designed different evaluation settings to accurately assess the performance for predicting DDIs. The settings for disjoint cross-validation produced lower performance scores, as expected, but still were good at predicting the drug interactions. We have applied Logistic Regression, Naive Bayes and Random Forest on DrugBank knowledge graph with the 10-fold traditional cross validation using RDF2Vec, TransE and TransD. RDF2Vec with Skip-Gram generally surpasses other embedding methods. We also tested RDF2Vec on various drug knowledge graphs such as DrugBank, PharmGKB and KEGG to predict unknown drug-drug interactions. The performance was not enhanced significantly when an integrated knowledge graph including these three datasets was used. 
Conclusion We showed that the knowledge embeddings are powerful predictors and comparable to current state-of-the-art methods for inferring new DDIs. We addressed the evaluation biases by introducing drug-wise and pairwise disjoint test classes. Although the performance scores for drug-wise and pairwise disjoint seem to be low, the results can be considered to be realistic in predicting the interactions for drugs with limited interaction information.",TRUE,noun phrase
R104,Bioinformatics,R168556,Pep2Path: Automated Mass Spectrometry-Guided Genome Mining of Peptidic Natural Products,S668474,R168562,uses,R166959,Mac OS X,"Nonribosomally and ribosomally synthesized bioactive peptides constitute a source of molecules of great biomedical importance, including antibiotics such as penicillin, immunosuppressants such as cyclosporine, and cytostatics such as bleomycin. Recently, an innovative mass-spectrometry-based strategy, peptidogenomics, has been pioneered to effectively mine microbial strains for novel peptidic metabolites. Even though mass-spectrometric peptide detection can be performed quite fast, true high-throughput natural product discovery approaches have still been limited by the inability to rapidly match the identified tandem mass spectra to the gene clusters responsible for the biosynthesis of the corresponding compounds. With Pep2Path, we introduce a software package to fully automate the peptidogenomics approach through the rapid Bayesian probabilistic matching of mass spectra to their corresponding biosynthetic gene clusters. Detailed benchmarking of the method shows that the approach is powerful enough to correctly identify gene clusters even in data sets that consist of hundreds of genomes, which also makes it possible to match compounds from unsequenced organisms to closely related biosynthetic gene clusters in other genomes. Applying Pep2Path to a data set of compounds without known biosynthesis routes, we were able to identify candidate gene clusters for the biosynthesis of five important compounds. Notably, one of these clusters was detected in a genome from a different subphylum of Proteobacteria than that in which the molecule had first been identified. All in all, our approach paves the way towards high-throughput discovery of novel peptidic natural products. 
Pep2Path is freely available from http://pep2path.sourceforge.net/, implemented in Python, licensed under the GNU General Public License v3 and supported on MS Windows, Linux and Mac OS X.",TRUE,noun phrase
R104,Bioinformatics,R168629,MEDYAN: Mechanochemical Simulations of Contraction and Polarity Alignment in Actomyosin Networks,S668747,R168630,deposits,R167004,Mechanochemical Dynamics of Active Networks,"Active matter systems, and in particular the cell cytoskeleton, exhibit complex mechanochemical dynamics that are still not well understood. While prior computational models of cytoskeletal dynamics have lead to many conceptual insights, an important niche still needs to be filled with a high-resolution structural modeling framework, which includes a minimally-complete set of cytoskeletal chemistries, stochastically treats reaction and diffusion processes in three spatial dimensions, accurately and efficiently describes mechanical deformations of the filamentous network under stresses generated by molecular motors, and deeply couples mechanics and chemistry at high spatial resolution. To address this need, we propose a novel reactive coarse-grained force field, as well as a publicly available software package, named the Mechanochemical Dynamics of Active Networks (MEDYAN), for simulating active network evolution and dynamics (available at www.medyan.org). This model can be used to study the non-linear, far from equilibrium processes in active matter systems, in particular, comprised of interacting semi-flexible polymers embedded in a solution with complex reaction-diffusion processes. In this work, we applied MEDYAN to investigate a contractile actomyosin network consisting of actin filaments, alpha-actinin cross-linking proteins, and non-muscle myosin IIA mini-filaments. We found that these systems undergo a switch-like transition in simulations from a random network to ordered, bundled structures when cross-linker concentration is increased above a threshold value, inducing contraction driven by myosin II mini-filaments. 
Our simulations also show how myosin II mini-filaments, in tandem with cross-linkers, can produce a range of actin filament polarity distributions and alignment, which is crucially dependent on the rate of actin filament turnover and the actin filament’s resulting super-diffusive behavior in the actomyosin-cross-linker system. We discuss the biological implications of these findings for the arc formation in lamellipodium-to-lamellum architectural remodeling. Lastly, our simulations produce force-dependent accumulation of myosin II, which is thought to be responsible for their mechanosensation ability, also spontaneously generating myosin II concentration gradients in the solution phase of the simulation volume.",TRUE,noun phrase
R104,Bioinformatics,R168604,"MIiSR: Molecular Interactions in Super-Resolution Imaging Enables the Analysis of Protein Interactions, Dynamics and Formation of Multi-protein Structures",S668653,R168605,creates,R166988,Molecular Interactions in Super Resolution,"Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.",TRUE,noun phrase
R104,Bioinformatics,R185395,"PeroxisomeDB: a database for the peroxisomal proteome, functional genomics and disease",S710074,R185397,Has method,R124229,Multiple Sequence Alignment,"Peroxisomes are essential organelles of eukaryotic origin, ubiquitously distributed in cells and organisms, playing key roles in lipid and antioxidant metabolism. Loss or malfunction of peroxisomes causes more than 20 fatal inherited conditions. We have created a peroxisomal database () that includes the complete peroxisomal proteome of Homo sapiens and Saccharomyces cerevisiae, by gathering, updating and integrating the available genetic and functional information on peroxisomal genes. PeroxisomeDB is structured in interrelated sections ‘Genes’, ‘Functions’, ‘Metabolic pathways’ and ‘Diseases’, that include hyperlinks to selected features of NCBI, ENSEMBL and UCSC databases. We have designed graphical depictions of the main peroxisomal metabolic routes and have included updated flow charts for diagnosis. Precomputed BLAST, PSI-BLAST, multiple sequence alignment (MUSCLE) and phylogenetic trees are provided to assist in direct multispecies comparison to study evolutionary conserved functions and pathways. Highlights of the PeroxisomeDB include new tools developed for facilitating (i) identification of novel peroxisomal proteins, by means of identifying proteins carrying peroxisome targeting signal (PTS) motifs, (ii) detection of peroxisomes in silico, particularly useful for screening the deluge of newly sequenced genomes. PeroxisomeDB should contribute to the systematic characterization of the peroxisomal proteome and facilitate system biology approaches on the organelle.",TRUE,noun phrase
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S535864,R135491,Used models,L377979,naive Bayes,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,noun phrase
R104,Bioinformatics,R150537,LINNAEUS: A species name identification system for biomedical literature,S603598,R150539,Other resources,R148003,NCBI Taxonomy,"Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.",TRUE,noun phrase
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5640,R5119,Data,R5122,our challenges and the lessons,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC in our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a largely manual and time-consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,noun phrase
R104,Bioinformatics,R138931,DCNN and DNN based multi-modal depression recognition,S552064,R138933,Outcome assessment,R138874,PHQ-8 score,"In this paper, we propose an audio visual multimodal depression recognition framework composed of deep convolutional neural network (DCNN) and deep neural network (DNN) models. For each modality, corresponding feature descriptors are input into a DCNN to learn high-level global features with compact dynamic information, which are then fed into a DNN to predict the PHQ-8 score. For multi-modal depression recognition, the predicted PHQ-8 scores from each modality are integrated in a DNN for the final prediction. In addition, we propose the Histogram of Displacement Range as a novel global visual descriptor to quantify the range and speed of the facial landmarks' displacements. Experiments have been carried out on the Distress Analysis Interview Corpus-Wizard of Oz (DAIC-WOZ) dataset for the Depression Sub-challenge of the Audio-Visual Emotion Challenge (AVEC 2016), results show that the proposed multi-modal depression recognition framework obtains very promising results on both the development set and test set, which outperforms the state-of-the-art results.",TRUE,noun phrase
R104,Bioinformatics,R168667,FIMTrack: An open source tracking and locomotion analysis software for small animals,S668908,R168670,deposits,R167028,pre-compiled binaries,"Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system feasible to extract high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.",TRUE,noun phrase
R104,Bioinformatics,R150475,Biomedical named entity recognition and linking datasets: survey and our recent development,S603444,R150477,Data domains,R150491,Protein-Protein Interaction Extraction (PPIE),"Natural language processing (NLP) is widely applied in biological domains to retrieve information from publications. Systems to address numerous applications exist, such as biomedical named entity recognition (BNER), named entity normalization (NEN) and protein-protein interaction extraction (PPIE). High-quality datasets can assist the development of robust and reliable systems; however, due to the endless applications and evolving techniques, the annotations of benchmark datasets may become outdated and inappropriate. In this study, we first review commonly used BNER datasets and their potential annotation problems such as inconsistency and low portability. Then, we introduce a revised version of the JNLPBA dataset that solves potential problems in the original and use state-of-the-art named entity recognition systems to evaluate its portability to different kinds of biomedical literature, including protein-protein interaction and biology events. Lastly, we introduce an ensembled biomedical entity dataset (EBED) by extending the revised JNLPBA dataset with PubMed Central full-text paragraphs, figure captions and patent abstracts. This EBED is a multi-task dataset that covers annotations including gene, disease and chemical entities. In total, it contains 85000 entity mentions, 25000 entity mentions with database identifiers and 5000 attribute tags. To demonstrate the usage of the EBED, we review the BNER track from the AI CUP Biomedical Paper Analysis challenge. Availability: The revised JNLPBA dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/Revised_JNLPBA.zip. The EBED dataset is available at https://iasl-btm.iis.sinica.edu.tw/BNER/Content/AICUP_EBED_dataset.rar. Contact: Email: thtsai@g.ncu.edu.tw, Tel. 886-3-4227151 ext. 
35203, Fax: 886-3-422-2681 Email: hsu@iis.sinica.edu.tw, Tel. 886-2-2788-3799 ext. 2211, Fax: 886-2-2782-4814 Supplementary information: Supplementary data are available at Briefings in Bioinformatics online.",TRUE,noun phrase
R104,Bioinformatics,R138825,Comprehensive functional genomic resource and integrative model for the human brain,S551735,R138856,Data,R138858,Regulatory network,"Despite progress in defining genetic risk for psychiatric disorders, their molecular mechanisms remain elusive. Addressing this, the PsychENCODE Consortium has generated a comprehensive online resource for the adult brain across 1866 individuals. The PsychENCODE resource contains ~79,000 brain-active enhancers, sets of Hi-C linkages, and topologically associating domains; single-cell expression profiles for many cell types; expression quantitative-trait loci (QTLs); and further QTLs associated with chromatin, splicing, and cell-type proportions. Integration shows that varying cell-type proportions largely account for the cross-population variation in expression (with >88% reconstruction accuracy). It also allows building of a gene regulatory network, linking genome-wide association study variants to genes (e.g., 321 for schizophrenia). We embed this network into an interpretable deep-learning model, which improves disease prediction by ~6-fold versus polygenic risk scores and identifies key genes and pathways in psychiatric disorders.",TRUE,noun phrase
R104,Bioinformatics,R170112,Estimating genetic kin relationships in prehistoric populations,S675624,R170115,creates,R167953,Relationship Estimation from Ancient DNA,"Archaeogenomic research has proven to be a valuable tool to trace migrations of historic and prehistoric individuals and groups, whereas relationships within a group or burial site have not been investigated to a large extent. Knowing the genetic kinship of historic and prehistoric individuals would give important insights into social structures of ancient and historic cultures. Most archaeogenetic research concerning kinship has been restricted to uniparental markers, while studies using genome-wide information were mainly focused on comparisons between populations. Applications which infer the degree of relationship based on modern-day DNA information typically require diploid genotype data. Low concentration of endogenous DNA, fragmentation and other post-mortem damage to ancient DNA (aDNA) makes the application of such tools unfeasible for most archaeological samples. To infer family relationships for degraded samples, we developed the software READ (Relationship Estimation from Ancient DNA). We show that our heuristic approach can successfully infer up to second degree relationships with as little as 0.1x shotgun coverage per genome for pairs of individuals. We uncover previously unknown relationships among prehistoric individuals by applying READ to published aDNA data from several human remains excavated from different cultural contexts. In particular, we find a group of five closely related males from the same Corded Ware culture site in modern-day Germany, suggesting patrilocality, which highlights the possibility to uncover social structures of ancient populations by applying READ to genome-wide aDNA data. READ is publicly available from https://bitbucket.org/tguenther/read.",TRUE,noun phrase
R104,Bioinformatics,R170097,Efficacy and tolerability of short-term duloxetine treatment in adults with generalized anxiety disorder: A meta-analysis,S675544,R170098,uses,R167944,Review Manager,"Objective To investigate the efficacy and tolerability of duloxetine during short-term treatment in adults with generalized anxiety disorder (GAD). Methods We conducted a comprehensive literature review of the PubMed, Embase, Cochrane Central Register of Controlled Trials, Web of Science, and ClinicalTrials databases for randomized controlled trials(RCTs) comparing duloxetine or duloxetine plus other antipsychotics with placebo for the treatment of GAD in adults. Outcome measures were (1) efficacy, assessed by the Hospital Anxiety and Depression Scale(HADS) anxiety subscale score, the Hamilton Rating Scale for Anxiety(HAM-A) psychic and somatic anxiety factor scores, and response and remission rates based on total scores of HAM-A; (2) tolerability, assessed by discontinuation rate due to adverse events, the incidence of treatment emergent adverse events(TEAEs) and serious adverse events(SAEs). Review Manager 5.3 and Stata Version 12.0 software were used for all statistical analyses. Results The meta-analysis included 8 RCTs. Mean changes in the HADS anxiety subscale score [mean difference(MD) = 2.32, 95% confidence interval(CI) 1.77–2.88, P<0.00001] and HAM-A psychic anxiety factor score were significantly greater in patients with GAD that received duloxetine compared to those that received placebo (MD = 2.15, 95%CI 1.61–2.68, P<0.00001). However, there was no difference in mean change in the HAM-A somatic anxiety factor score (MD = 1.13, 95%CI 0.67–1.58, P<0.00001). Discontinuation rate due to AEs in the duloxetine group was significantly higher than the placebo group [odds ratio(OR) = 2.62, 95%CI 1.35–5.06, P = 0.004]. 
The incidence of any TEAE was significantly increased in patients that received duloxetine (OR = 1.76, 95%CI 1.36–2.28, P<0.0001), but there was no significant difference in the incidence of SAEs (OR = 1.13, 95%CI 0.52–2.47, P = 0.75). Conclusion Duloxetine resulted in a greater improvement in symptoms of psychic anxiety and similar changes in symptoms of somatic anxiety compared to placebo during short-term treatment in adults with GAD and its tolerability was acceptable.",TRUE,noun phrase
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5642,R5119,Data,R5124,routine laboratory data,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,noun phrase
R104,Bioinformatics,R139014,Detecting Stress Based on Social Interactions in Social Networks,S552369,R139016,Data,R139018,social interactions,"Psychological stress is threatening people’s health. It is non-trivial to detect stress timely for proactive care. With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social network data for stress detection. In this paper, we find that users stress state is closely related to that of his/her friends in social media, and we employ a large-scale dataset from real-world social platforms to systematically study the correlation of users’ stress states and social interactions. We first define a set of stress-related textual, visual, and social attributes from various aspects, and then propose a novel hybrid model - a factor graph model combined with Convolutional Neural Network to leverage tweet content and social interaction information for stress detection. Experimental results show that the proposed model can improve the detection performance by 6-9 percent in F1-score. By further analyzing the social interaction data, we also discover several intriguing phenomena, i.e., the number of social structures of sparse connections (i.e., with no delta connections) of stressed users is around 14 percent higher than that of non-stressed users, indicating that the social structure of stressed users’ friends tend to be less connected and less complicated than that of non-stressed users.",TRUE,noun phrase
R104,Bioinformatics,R168667,FIMTrack: An open source tracking and locomotion analysis software for small animals,S668906,R168669,deposits,R167027,source code,"Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we have introduced FIM, a novel imaging system feasible to extract high contrast images. This system in combination with the associated tracking software FIMTrack is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the used algorithms. Among others, the software offers several tracking strategies to cover a wide range of different model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimuli-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.",TRUE,noun phrase
R104,Bioinformatics,R170078,Neurological manifestations in chronic hepatitis C patients receiving care in a reference hospital in sub-Saharan Africa: A cross-sectional study,S675431,R170079,uses,R167933,Statistical Package for Social Sciences,"Background Chronic hepatitis C infection is a major public health concern, with a high burden in Sub-Saharan Africa. There is growing evidence that chronic hepatitis C virus (HCV) infection causes neurological complications. This study aimed at assessing the prevalence and factors associated with neurological manifestations in chronic hepatitis C patients. Methods Through a cross-sectional design, a semi-structured questionnaire was used to collect data from consecutive chronic HCV infected patients attending the outpatient gastroenterology unit of the Douala General Hospital (DGH). Data collection was by interview, patient record review (including HCV RNA quantification, HCV genotyping and the assessment of liver fibrosis and necroinflammatory activity), clinical examination complemented by 3 tools; Neuropathic pain diagnostic questionnaire, Brief peripheral neuropathy screen and mini mental state examination score. Data were analysed using Statistical package for social sciences version 20 for windows. Results Of the 121 chronic hepatitis C patients (51.2% males) recruited, 54.5% (95% Confidence interval: 46.3%, 62.8%) had at least one neurological manifestation, with peripheral nervous system manifestations being more common (50.4%). Age ≥ 55 years (Adjusted Odds Ratio: 4.82, 95%CI: 1.02–18.81, p = 0.02), longer duration of illness (AOR: 1.012, 95%CI: 1.00–1.02, p = 0.01) and high viral load (AOR: 3.40, 95% CI: 1.20–9.64, p = 0.02) were significantly associated with neurological manifestations. Peripheral neuropathy was the most common neurological manifestation (49.6%), presenting mainly as sensory neuropathy (47.9%). 
Age ≥ 55 years (AOR: 6.25, 95%CI: 1.33–29.08, p = 0.02) and longer duration of illness (AOR: 1.01, 1.00–1.02, p = 0.01) were significantly associated with peripheral neuropathy. Conclusion Over half of the patients with chronic hepatitis C attending the DGH have a neurological manifestation, mainly presenting as sensory peripheral neuropathy. Routine screening of chronic hepatitis C patients for peripheral neuropathy is therefore necessary, with prime focus on those with older age and longer duration of illness.",TRUE,noun phrase
R104,Bioinformatics,R138992,User-level psychological stress detection from social media using deep neural network,S552279,R138994,Aims,R138996,Stress detection,"It is of significant importance to detect and manage stress before it turns into severe problems. However, existing stress detection methods usually rely on psychological scales or physiological devices, making the detection complicated and costly. In this paper, we explore to automatically detect individuals' psychological stress via social media. Employing real online micro-blog data, we first investigate the correlations between users' stress and their tweeting content, social engagement and behavior patterns. Then we define two types of stress-related attributes: 1) low-level content attributes from a single tweet, including text, images and social interactions; 2) user-scope statistical attributes through their weekly micro-blog postings, leveraging information of tweeting time, tweeting types and linguistic styles. To combine content attributes with statistical attributes, we further design a convolutional neural network (CNN) with cross autoencoders to generate user-scope content attributes from low-level content attributes. Finally, we propose a deep neural network (DNN) model to incorporate the two types of user-scope attributes to detect users' psychological stress. We test the trained model on four different datasets from major micro-blog platforms including Sina Weibo, Tencent Weibo and Twitter. Experimental results show that the proposed model is effective and efficient on detecting psychological stress from micro-blog data. We believe our model would be useful in developing stress detection tools for mental health agencies and individuals.",TRUE,noun phrase
R104,Bioinformatics,R138998,Psychological stress detection from cross-media microblog data using Deep Sparse Neural Network,S552301,R139000,Aims,R138996,Stress detection,"Long-term stress may lead to many severe physical and mental problems. Traditional psychological stress detection usually relies on the active individual participation, which makes the detection labor-consuming, time-costing and hysteretic. With the rapid development of social networks, people become more and more willing to share moods via microblog platforms. In this paper, we propose an automatic stress detection method from cross-media microblog data. We construct a three-level framework to formulate the problem. We first obtain a set of low-level features from the tweets. Then we define and extract middle-level representations based on psychological and art theories: linguistic attributes from tweets' texts, visual attributes from tweets' images, and social attributes from tweets' comments, retweets and favorites. Finally, a Deep Sparse Neural Network is designed to learn the stress categories incorporating the cross-media attributes. Experiment results show that the proposed method is effective and efficient on detecting psychological stress from microblog data.",TRUE,noun phrase
R104,Bioinformatics,R139014,Detecting Stress Based on Social Interactions in Social Networks,S552371,R139016,Aims,R138996,Stress detection,"Psychological stress is threatening people’s health. It is non-trivial to detect stress timely for proactive care. With the popularity of social media, people are used to sharing their daily activities and interacting with friends on social media platforms, making it feasible to leverage online social network data for stress detection. In this paper, we find that users stress state is closely related to that of his/her friends in social media, and we employ a large-scale dataset from real-world social platforms to systematically study the correlation of users’ stress states and social interactions. We first define a set of stress-related textual, visual, and social attributes from various aspects, and then propose a novel hybrid model - a factor graph model combined with Convolutional Neural Network to leverage tweet content and social interaction information for stress detection. Experimental results show that the proposed model can improve the detection performance by 6-9 percent in F1-score. By further analyzing the social interaction data, we also discover several intriguing phenomena, i.e., the number of social structures of sparse connections (i.e., with no delta connections) of stressed users is around 14 percent higher than that of non-stressed users, indicating that the social structure of stressed users’ friends tend to be less connected and less complicated than that of non-stressed users.",TRUE,noun phrase
R104,Bioinformatics,R138964,The Facial Stress Recognition Based on Multi-histogram Features and Convolutional Neural Network,S552179,R138966,Aims,R138968,Stress recognition,"The health disorders due to stress and depression should not be considered trivial because it has a negative impact on health. Prolonged stress not only triggers mental fatigue but also affects physical health. Therefore, we must be able to identify stress early. In this paper, we proposed the new methods for stress recognition on three classes (neutral, low stress, high stress) from a facial frontal image. Each image divided into three parts, i.e. pairs of eyes, nose, and mouth. Facial features have extracted on each image pixel using DoG, HOG, and DWT. The strength of orthonormality features is considered by the RICA. The GDA distributes the nonlinear covariance. Furthermore, the histogram features of the image parts are applied at a depth-based learning of ConvNet to model the facial stress expression. The proposed method is used FERET databases for training and validation. The k-fold validation method is used as a validation with k=5. Based on the experiments result, the proposed method accuracy showing outperforms compared with other works.",TRUE,noun phrase
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S535865,R135491,Used models,L377980,support vector machine,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,noun phrase
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5644,R5119,Material,R5126,the central terminologies,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,noun phrase
R104,Bioinformatics,R138927,DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings,S552039,R138929,Data,R138930,Thermal images,"We propose DeepBreath, a deep learning model which automatically recognises people's psychological stress level (mental overload) from their breathing patterns. Using a low cost thermal camera, we track a person's breathing patterns as temperature changes around his/her nostril. The paper's technical contribution is threefold. First of all, instead of creating handcrafted features to capture aspects of the breathing patterns, we transform the uni-dimensional breathing signals into two dimensional respiration variability spectrogram (RVS) sequences. The spectrograms easily capture the complexity of the breathing dynamics. Second, a spatial pattern analysis based on a deep Convolutional Neural Network (CNN) is directly applied to the spectrogram sequences without the need of hand-crafting features. Finally, a data augmentation technique, inspired from solutions for over-fitting problems in deep learning, is applied to allow the CNN to learn with a small-scale dataset from short-term measurements (e.g., up to a few hours). The model is trained and tested with data collected from people exposed to two types of cognitive tasks (Stroop Colour Word Test, Mental Computation test) with sessions of different difficulty levels. Using normalised self-report as ground truth, the CNN reaches 84.59% accuracy in discriminating between two levels of stress and 56.52% in discriminating between three levels. In addition, the CNN outperformed powerful shallow learning methods based on a single layer neural network. Finally, the dataset of labelled thermal images will be open to the community.",TRUE,noun phrase
R104,Bioinformatics,R138959,Automated Depression Diagnosis Based on Deep Networks to Encode Facial Appearance and Dynamics,S552159,R138962,Data,R138957,Video data,"As a severe psychiatric disorder disease, depression is a state of low mood and aversion to activity, which prevents a person from functioning normally in both work and daily lives. The study on automated mental health assessment has been given increasing attentions in recent years. In this paper, we study the problem of automatic diagnosis of depression. A new approach to predict the Beck Depression Inventory II (BDI-II) values from video data is proposed based on the deep networks. The proposed framework is designed in a two stream manner, aiming at capturing both the facial appearance and dynamics. Further, we employ joint tuning layers that can implicitly integrate the appearance and dynamic information. Experiments are conducted on two depression databases, AVEC2013 and AVEC2014. The experimental results show that our proposed approach significantly improve the depression prediction performance, compared to other visual-based approaches.",TRUE,noun phrase
R104,Bioinformatics,R75371,Isolating SARS-CoV-2 Strains From Countries in the Same Meridian: Genome Evolutionary Analysis,S345253,R75376,Has result,R75387,virus evolution,"Background COVID-19, caused by the novel SARS-CoV-2, is considered the most threatening respiratory infection in the world, with over 40 million people infected and over 0.934 million related deaths reported worldwide. It is speculated that epidemiological and clinical features of COVID-19 may differ across countries or continents. Genomic comparison of 48,635 SARS-CoV-2 genomes has shown that the average number of mutations per sample was 7.23, and most SARS-CoV-2 strains belong to one of 3 clades characterized by geographic and genomic specificity: Europe, Asia, and North America. Objective The aim of this study was to compare the genomes of SARS-CoV-2 strains isolated from Italy, Sweden, and Congo, that is, 3 different countries in the same meridian (longitude) but with different climate conditions, and from Brazil (as an outgroup country), to analyze similarities or differences in patterns of possible evolutionary pressure signatures in their genomes. Methods We obtained data from the Global Initiative on Sharing All Influenza Data repository by sampling all genomes available on that date. Using HyPhy, we achieved the recombination analysis by genetic algorithm recombination detection method, trimming, removal of the stop codons, and phylogenetic tree and mixed effects model of evolution analyses. We also performed secondary structure prediction analysis for both sequences (mutated and wild-type) and “disorder” and “transmembrane” analyses of the protein. We analyzed both protein structures with an ab initio approach to predict their ontologies and 3D structures. Results Evolutionary analysis revealed that codon 9628 is under episodic selective pressure for all SARS-CoV-2 strains isolated from the 4 countries, suggesting it is a key site for virus evolution. 
Codon 9628 encodes the P0DTD3 (Y14_SARS2) uncharacterized protein 14. Further investigation showed that the codon mutation was responsible for helical modification in the secondary structure. The codon was positioned in the more ordered region of the gene (41-59) and near to the area acting as the transmembrane (54-67), suggesting its involvement in the attachment phase of the virus. The predicted protein structures of both wild-type and mutated P0DTD3 confirmed the importance of the codon to define the protein structure. Moreover, ontological analysis of the protein emphasized that the mutation enhances the binding probability. Conclusions Our results suggest that RNA secondary structure may be affected and, consequently, the protein product changes T (threonine) to G (glycine) in position 50 of the protein. This position is located close to the predicted transmembrane region. Mutation analysis revealed that the change from G (glycine) to D (aspartic acid) may confer a new function to the protein—binding activity, which in turn may be responsible for attaching the virus to human eukaryotic cells. These findings can help design in vitro experiments and possibly facilitate a vaccine design and successful antiviral strategies.",TRUE,noun phrase
R104,Bioinformatics,R38466,"Biotea-2-Bioschemas, facilitating structured markup for semantically annotated scholarly publications",S126237,R38472,programming language,R38479,Web components,"The total number of scholarly publications grows day by day, making it necessary to explore and use simple yet effective ways to expose their metadata. Schema.org supports adding structured metadata to web pages via markup, making it easier for data providers but also for search engines to provide the right search results. Bioschemas is based on the standards of schema.org, providing new types, properties and guidelines for metadata, i.e., providing metadata profiles tailored to the Life Sciences domain. Here we present our proposed contribution to Bioschemas (from the project “Biotea”), which supports metadata contributions for scholarly publications via profiles and web components. Biotea comprises a semantic model to represent publications together with annotated elements recognized from the scientific text; our Biotea model has been mapped to schema.org following Bioschemas standards.",TRUE,noun phrase
R205,Biomedical Engineering and Bioengineering,R110043,Tensor gradient based discriminative region analysis for cognitive state classification,S501828,R110045,dataset,R110049,StarPlus fMRI data,"Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has been achieved a better average accuracy when compared to other existing feature extraction methods.",TRUE,noun phrase
R205,Biomedical Engineering and Bioengineering,R110061,Tensor gradient based discriminative region analysis for cognitive state classification,S501878,R110063,dataset,L362843,StarPlus fMRI data,"Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has been achieved a better average accuracy when compared to other existing feature extraction methods.",TRUE,noun phrase
R16,Biophysics,R74944,Differential Interaction of Antimicrobial Peptides with Lipid Structures Studied by Coarse-Grained Molecular Dynamics Simulations,S345938,R74946,Has method,R75552,Molecular Dynamics Simulations,"In this work, we investigated the differential interaction of amphiphilic antimicrobial peptides with 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) lipid structures by means of extensive molecular dynamics simulations. By using a coarse-grained (CG) model within the MARTINI force field, we simulated the peptide–lipid system from three different initial configurations: (a) peptides in water in the presence of a pre-equilibrated lipid bilayer, (b) peptides inside the hydrophobic core of the membrane, and (c) random configurations that allow self-assembled molecular structures. This last approach allowed us to sample the structural space of the systems and consider cooperative effects. The peptides used in our simulations are aurein 1.2 and maculatin 1.1, two well-known antimicrobial peptides from the Australian tree frogs, and molecules that present different membrane-perturbing behaviors. Our results showed differential behaviors for each type of peptide seen in a different organization that could guide a molecular interpretation of the experimental data. While both peptides are capable of forming membrane aggregates, the aurein 1.2 ones have a pore-like structure and exhibit a higher level of organization than those conformed by maculatin 1.1. Furthermore, maculatin 1.1 has a strong tendency to form clusters and induce curvature at low peptide–lipid ratios. The exploration of the possible lipid–peptide structures, as the one carried out here, could be a good tool for recognizing specific configurations that should be further studied with more sophisticated methodologies.",TRUE,noun phrase
R16,Biophysics,R75540,Differential Stability of Aurein 1.2 Pores in Model Membranes of Two Probiotic Strains,S346080,R75546,Has method,R75552,Molecular Dynamics Simulations,"Aurein 1.2 is an antimicrobial peptide from the skin secretion of an Australian frog. In previous experimental work, we reported a differential action of aurein 1.2 on two probiotic strains Lactobacillus delbrueckii subsp. Bulgaricus (CIDCA331) and Lactobacillus delbrueckii subsp. Lactis (CIDCA133). The differences found were attributed to the bilayer compositions. Cell cultures and CIDCA331-derived liposomes showed higher susceptibility than the ones derived from the CIDCA133 strain, leading to content leakage and structural disruption. Here, we used Molecular Dynamics simulations to explore these systems at atomistic level. We hypothesize that if the antimicrobial peptides organized themselves to form a pore, it will be more stable in membranes that emulate the CIDCA331 strain than in those of the CIDCA133 strain. To test this hypothesis, we simulated pre-assembled aurein 1.2 pores embedded into bilayer models that emulate the two probiotic strains. It was found that the general behavior of the systems depends on the composition of the membrane rather than the pre-assemble system characteristics. Overall, it was observed that aurein 1.2 pores are more stable in the CIDCA331 model membranes. This fact coincides with the high susceptibility of this strain against antimicrobial peptide. In contrast, in the case of the CIDCA133 model membranes, peptides migrate to the water-lipid interphase, the pore shrinks and the transport of water through the pore is reduced. The tendency of glycolipids to make hydrogen bonds with peptides destabilize the pore structures. This feature is observed to a lesser extent in CIDCA 331 due to the presence of anionic lipids. Glycolipid transverse diffusion (flip-flop) between monolayers occurs in the pore surface region in all the cases considered. These findings expand our understanding of the antimicrobial peptide resistance properties of probiotic strains.",TRUE,noun phrase
R16,Biophysics,R75547,"Could Cardiolipin Protect Membranes against the Action of Certain Antimicrobial Peptides? Aurein 1.2, a Case Study",S345925,R75551,Has method,R75552,Molecular Dynamics Simulations,"The activity of a host of antimicrobial peptides has been examined against a range of lipid bilayers mimicking bacterial and eukaryotic membranes. Despite this, the molecular mechanisms and the nature of the physicochemical properties underlying the peptide–lipid interactions that lead to membrane disruption are yet to be fully elucidated. In this study, the interaction of the short antimicrobial peptide aurein 1.2 was examined in the presence of an anionic cardiolipin-containing lipid bilayer using molecular dynamics simulations. Aurein 1.2 is known to interact strongly with anionic lipid membranes. In the simulations, the binding of aurein 1.2 was associated with buckling of the lipid bilayer, the degree of which varied with the peptide concentration. The simulations suggest that the intrinsic properties of cardiolipin, especially the fact that it promotes negative membrane curvature, may help protect membranes against the action of peptides such as aurein 1.2 by counteracting the tendency of the peptide to induce positive curvature in target membranes.",TRUE,noun phrase
R16,Biophysics,R70278,Adaptive behaviour and learning in slime moulds: the role of oscillations,S333684,R70283,about species,R70285,Physarum polycephalum,"The slime mould Physarum polycephalum, an aneural organism, uses information from previous experiences to adjust its behaviour, but the mechanisms by which this is accomplished remain unknown. This article examines the possible role of oscillations in learning and memory in slime moulds. Slime moulds share surprising similarities with the network of synaptic connections in animal brains. First, their topology derives from a network of interconnected, vein-like tubes in which signalling molecules are transported. Second, network motility, which generates slime mould behaviour, is driven by distinct oscillations that organize into spatio-temporal wave patterns. Likewise, neural activity in the brain is organized in a variety of oscillations characterized by different frequencies. Interestingly, the oscillating networks of slime moulds are not precursors of nervous systems but, rather, an alternative architecture. Here, we argue that comparable information-processing operations can be realized on different architectures sharing similar oscillatory properties. After describing learning abilities and oscillatory activities of P. polycephalum, we explore the relation between network oscillations and learning, and evaluate the organism's global architecture with respect to information-processing potential. We hypothesize that, as in the brain, modulation of spontaneous oscillations may sustain learning in slime mould. This article is part of the theme issue ‘Basal cognition: conceptual tools and the view from the single cell’.",TRUE,noun phrase
R16,Biophysics,R75519,"Direct Visualization of Membrane Leakage Induced by the Antibiotic Peptides: Maculatin, Citropin, and Aurein",S499491,R75521,has target,L361457,POPC/POPG model membranes,"Membrane lysis caused by antibiotic peptides is often rationalized by means of two different models: the so-called carpet model and the pore-forming model. We report here on the lytic activity of antibiotic peptides from Australian tree frogs, maculatin 1.1, citropin 1.1, and aurein 1.2, on POPC or POPC/POPG model membranes. Leakage experiments using fluorescence spectroscopy indicated that the peptide/lipid mol ratio necessary to induce 50% of probe leakage was smaller for maculatin compared with aurein or citropin, regardless of lipid membrane composition. To gain further insight into the lytic mechanism of these peptides we performed single vesicle experiments using confocal fluorescence microscopy. In these experiments, the time course of leakage for different molecular weight (water soluble) fluorescent markers incorporated inside of single giant unilamellar vesicles is observed after peptide exposure. We conclude that maculatin and its related peptides demonstrate a pore-forming mechanism (differential leakage of small fluorescent probe compared with high molecular weight markers). Conversely, citropin and aurein provoke a total membrane destabilization with vesicle burst without sequential probe leakage, an effect that can be assigned to a carpeting mechanism of lytic action. Additionally, to study the relevance of the proline residue on the membrane-action properties of maculatin, the same experimental approach was used for maculatin-Ala and maculatin-Gly (Pro-15 was replaced by Ala or Gly, respectively). Although a similar peptide/lipid mol ratio was necessary to induce 50% of leakage for POPC membranes, the lytic activity of maculatin-Ala and maculatin-Gly decreased in POPC/POPG (1:1 mol) membranes compared with that observed for the naturally occurring maculatin sequence. As observed for maculatin, the lytic action of Maculatin-Ala and maculatin-Gly is in keeping with the formation of pore-like structures at the membrane independently of lipid composition.",TRUE,noun phrase
R27,Botany,R111352,"Patterns, sources and ecological implications of clonal diversity in apomictic Ranunculus carpaticola (Ranunculus auricomus complex, Ranunculaceae): CLONAL DIVERSITY IN APOMICTIC RANUNCULUS",S507085,R111354,Location ,L365712,Central Slovakia,"Sources and implications of genetic diversity in agamic complexes are still under debate. Population studies (amplified fragment length polymorphisms, microsatellites) and karyological methods (Feulgen DNA image densitometry and flow cytometry) were employed for characterization of genetic diversity and ploidy levels of 10 populations of Ranunculus carpaticola in central Slovakia. Whereas two diploid populations showed high levels of genetic diversity, as expected for sexual reproduction, eight populations are hexaploid and harbour lower degrees of genotypic variation, but maintain high levels of heterozygosity at many loci, as is typical for apomicts. Polyploid populations consist either of a single AFLP genotype or of one dominant and a few deviating genotypes. genotype/genodive and character incompatibility analyses suggest that genotypic variation within apomictic populations is caused by mutations, but in one population probably also by recombination. This local facultative sexuality may have a great impact on regional genotypic diversity. Two microsatellite loci discriminated genotypes separated by the accumulation of few mutations (‘clone mates’) within each AFLP clone. Genetic diversity is partitioned mainly among apomictic populations and is not geographically structured, which may be due to facultative sexuality and/or multiple colonizations of sites by different clones. Habitat differentiation and a tendency to inhabit artificial meadows is more pronounced in apomictic than in sexual populations. We hypothesize that maintenance of genetic diversity and superior colonizing abilities of apomicts in temporally and spatially heterogeneous environments are important for their distributional success.",TRUE,noun phrase
R27,Botany,R111344,Cytogeography of Pilosella officinarum (Compositae): Altitudinal and Longitudinal Differences in Ploidy Level Distribution in the Czech Republic and Slovakia and the General Pattern in Europe,S507049,R111346,Location ,L365686,"Czech Republic, Slovakia","BACKGROUND AND AIMS Pilosella officinarum (syn. Hieracium pilosella) is a highly structured species with respect to the ploidy level, with obvious cytogeographic trends. Previous non-collated data indicated a possible differentiation in the frequency of particular ploidy levels in the Czech Republic and Slovakia. Therefore, detailed sampling and ploidy level analyses were assessed to reveal a boundary of common occurrence of tetraploids on one hand and higher ploids on the other. For a better understanding of cytogeographic differentiation of P. officinarum in central Europe, a search was made for a general cytogeographic pattern in Europe based on published data. METHODS DNA-ploidy level and/or chromosome number were identified for 1059 plants using flow cytometry and/or chromosome counting on root meristem preparations. Samples were collected from 336 localities in the Czech Republic, Slovakia and north-eastern Hungary. In addition, ploidy levels were determined for plants from 18 localities in Bulgaria, Georgia, Ireland, Italy, Romania and Ukraine. KEY RESULTS Four ploidy levels were found in the studied area with a contrasting pattern of distribution. The most widespread cytotype in the western part of the Czech Republic is tetraploid (4x) reproducing sexually, while the apomictic pentaploids and mostly apomictic hexaploids (5x and 6x, respectively) clearly prevail in Slovakia and the eastern part of the Czech Republic. The boundary between common occurrence of tetraploids and higher ploids is very obvious and represents the geomorphologic boundary between the Bohemian Massif and the Western Carpathians with the adjacent part of Pannonia. Mixed populations consisting of two different ploidy levels were recorded in nearly 11% of localities. A statistically significant difference in a vertical distribution of penta- and hexaploids was observed in the Western Carpathians and the adjacent Pannonian Plain. Hexaploid populations tend to occur at lower elevations (usually below 500 m), while the pentaploid level is more or less evenly distributed up to 1000 m a.s.l. For the first time the heptaploid level (7x) was found on one site in Slovakia. In Europe, the sexual tetraploid level has clearly a sub-Atlantic character of distribution. The plants of higher ploidy level (penta- and hexa-) with mostly apomictic reproduction prevail in the northern part of Scandinavia and the British Isles, the Alps and the Western Carpathians with the adjacent part of Pannonia. A detailed overview of published data shows that extremely rare records on existence of diploid populations in the south-west Alps are with high probability erroneous and most probably refer to the closely related diploid species P. peleteriana. CONCLUSIONS The recent distribution of P. officinarum in Europe is complex and probably reflects the climatic changes during the Pleistocene and consequent postglacial migrations. Probably both penta- and hexaploids arose independently in central Europe (Alps and Carpathian Mountains) and in northern Europe (Scandinavia, Great Britain, Ireland), where the apomictic plants colonized deglaciated areas. We suggest that P. officinarum is in fact an amphidiploid species with a basic tetraploid level, which probably originated from hybridizations of diploid taxa from the section Pilosellina.",TRUE,noun phrase
R27,Botany,R111341,Intraspecific ecological niche divergence and reproductive shifts foster cytotype displacement and provide ecological opportunity to polyploids,S507036,R111343,Species ,L365677,Paspalum intermedium,"Background and Aims Niche divergence between polyploids and their lower ploidy progenitors is one of the primary mechanisms fostering polyploid establishment and adaptive divergence. However, within-species chromosomal and reproductive variability have usually been neglected in community ecology and biodiversity analyses even though they have been recognized to play a role in the adaptive diversification of lineages. Methods We used Paspalum intermedium, a grass species with diverging genetic systems (diploidy vs. autopolyploidy, allogamy vs. autogamy and sexuality vs. apomixis), to recognize the causality of biogeographic patterns, adaptation and ecological flexibility of cytotypes. Chromosome counts and flow cytometry were used to characterize within-species genetic systems diversity. Environmental niche modelling was used to evaluate intraspecific ecological attributes associated with environmental and climatic factors and to assess correlations among ploidy, reproductive modes and ecological conditions ruling species' population dynamics, range expansion, adaptation and evolutionary history. Key Results Two dominant cytotypes non-randomly distributed along local and regional geographical scales displayed niche differentiation, a directional shift in niche optima and signs of disruptive selection on ploidy-related ecological aptitudes for the exploitation of environmental resources. Ecologically specialized allogamous sexual diploids were found in northern areas associated with higher temperature, humidity and productivity, while generalist autogamous apomictic tetraploids occurred in southern areas, occupying colder and less productive environments. Four localities with a documented shift in ploidy and four mixed populations in a zone of ecological transition revealed an uneven replacement between cytotypes. Conclusions Polyploidy and contrasting reproductive traits between cytotypes have promoted shifts in niche optima, and increased ecological tolerance and niche divergence. Ecologically specialized diploids maintain cytotype stability in core areas by displacing tetraploids, while broader ecological preferences and a shift from sexuality to apomixis favoured polyploid colonization in peripheral areas where diploids are displaced, and fostered the ecological opportunity for autotetraploids supporting range expansion to open southern habitats.",TRUE,noun phrase
R27,Botany,R111344,Cytogeography of Pilosella officinarum (Compositae): Altitudinal and Longitudinal Differences in Ploidy Level Distribution in the Czech Republic and Slovakia and the General Pattern in Europe,S507053,R111346,Species ,L365690,Pilosella officinarum,"BACKGROUND AND AIMS Pilosella officinarum (syn. Hieracium pilosella) is a highly structured species with respect to the ploidy level, with obvious cytogeographic trends. Previous non-collated data indicated a possible differentiation in the frequency of particular ploidy levels in the Czech Republic and Slovakia. Therefore, detailed sampling and ploidy level analyses were assessed to reveal a boundary of common occurrence of tetraploids on one hand and higher ploids on the other. For a better understanding of cytogeographic differentiation of P. officinarum in central Europe, a search was made for a general cytogeographic pattern in Europe based on published data. METHODS DNA-ploidy level and/or chromosome number were identified for 1059 plants using flow cytometry and/or chromosome counting on root meristem preparations. Samples were collected from 336 localities in the Czech Republic, Slovakia and north-eastern Hungary. In addition, ploidy levels were determined for plants from 18 localities in Bulgaria, Georgia, Ireland, Italy, Romania and Ukraine. KEY RESULTS Four ploidy levels were found in the studied area with a contrasting pattern of distribution. The most widespread cytotype in the western part of the Czech Republic is tetraploid (4x) reproducing sexually, while the apomictic pentaploids and mostly apomictic hexaploids (5x and 6x, respectively) clearly prevail in Slovakia and the eastern part of the Czech Republic. The boundary between common occurrence of tetraploids and higher ploids is very obvious and represents the geomorphologic boundary between the Bohemian Massif and the Western Carpathians with the adjacent part of Pannonia. Mixed populations consisting of two different ploidy levels were recorded in nearly 11% of localities. A statistically significant difference in a vertical distribution of penta- and hexaploids was observed in the Western Carpathians and the adjacent Pannonian Plain. Hexaploid populations tend to occur at lower elevations (usually below 500 m), while the pentaploid level is more or less evenly distributed up to 1000 m a.s.l. For the first time the heptaploid level (7x) was found on one site in Slovakia. In Europe, the sexual tetraploid level has clearly a sub-Atlantic character of distribution. The plants of higher ploidy level (penta- and hexa-) with mostly apomictic reproduction prevail in the northern part of Scandinavia and the British Isles, the Alps and the Western Carpathians with the adjacent part of Pannonia. A detailed overview of published data shows that extremely rare records on existence of diploid populations in the south-west Alps are with high probability erroneous and most probably refer to the closely related diploid species P. peleteriana. CONCLUSIONS The recent distribution of P. officinarum in Europe is complex and probably reflects the climatic changes during the Pleistocene and consequent postglacial migrations. Probably both penta- and hexaploids arose independently in central Europe (Alps and Carpathian Mountains) and in northern Europe (Scandinavia, Great Britain, Ireland), where the apomictic plants colonized deglaciated areas. We suggest that P. officinarum is in fact an amphidiploid species with a basic tetraploid level, which probably originated from hybridizations of diploid taxa from the section Pilosellina.",TRUE,noun phrase
R27,Botany,R111316,"Difference in reproductive mode rather than ploidy explains niche differentiation in sympatric sexual and apomictic populations of Potentilla puberula",S506923,R111323,Species ,L365589,Potentilla puberula,"Abstract Apomicts tend to have larger geographical distributional ranges and to occur in ecologically more extreme environments than their sexual progenitors. However, the expression of apomixis is typically linked to polyploidy. Thus, it is a priori not clear whether intrinsic effects related to the change in the reproductive mode or rather in the ploidy drive ecological differentiation. We used sympatric sexual and apomictic populations of Potentilla puberula to test for ecological differentiation. To distinguish the effects of reproductive mode and ploidy on the ecology of cytotypes, we compared the niches (a) of sexuals (tetraploids) and autopolyploid apomicts (penta‐, hepta‐, and octoploids) and (b) of the three apomictic cytotypes. We based comparisons on a ploidy screen of 238 populations along a latitudinal transect through the Eastern European Alps and associated bioclimatic, and soil and topographic data. Sexual tetraploids preferred primary habitats at drier, steeper, more south‐oriented slopes, while apomicts mostly occurred in human‐made habitats with higher water availability. Contrariwise, we found no or only marginal ecological differentiation among the apomictic higher ploids. Based on the pronounced ecological differences found between sexuals and apomicts, in addition to the lack of niche differentiation among cytotypes of the same reproductive mode, we conclude that reproductive mode rather than ploidy is the main driver of the observed differences. Moreover, we compared our system with others from the literature, to stress the importance of identifying alternative confounding effects (such as hybrid origin). Finally, we underline the relevance of studying ecological parthenogenesis in sympatry, to minimize the effects of differential migration abilities.",TRUE,noun phrase
R27,Botany,R111352,"Patterns, sources and ecological implications of clonal diversity in apomictic Ranunculus carpaticola (Ranunculus auricomus complex, Ranunculaceae): CLONAL DIVERSITY IN APOMICTIC RANUNCULUS",S507089,R111354,Species ,L365716,Ranunculus carpaticola,"Sources and implications of genetic diversity in agamic complexes are still under debate. Population studies (amplified fragment length polymorphisms, microsatellites) and karyological methods (Feulgen DNA image densitometry and flow cytometry) were employed for characterization of genetic diversity and ploidy levels of 10 populations of Ranunculus carpaticola in central Slovakia. Whereas two diploid populations showed high levels of genetic diversity, as expected for sexual reproduction, eight populations are hexaploid and harbour lower degrees of genotypic variation, but maintain high levels of heterozygosity at many loci, as is typical for apomicts. Polyploid populations consist either of a single AFLP genotype or of one dominant and a few deviating genotypes. genotype/genodive and character incompatibility analyses suggest that genotypic variation within apomictic populations is caused by mutations, but in one population probably also by recombination. This local facultative sexuality may have a great impact on regional genotypic diversity. Two microsatellite loci discriminated genotypes separated by the accumulation of few mutations (‘clone mates’) within each AFLP clone. Genetic diversity is partitioned mainly among apomictic populations and is not geographically structured, which may be due to facultative sexuality and/or multiple colonizations of sites by different clones. Habitat differentiation and a tendency to inhabit artificial meadows is more pronounced in apomictic than in sexual populations. We hypothesize that maintenance of genetic diversity and superior colonizing abilities of apomicts in temporally and spatially heterogeneous environments are important for their distributional success.",TRUE,noun phrase
R122,Chemistry,R46099,Effects of F- Doping on the Photocatalytic Activity and Microstructures of Nanocrystalline TiO2 Powders,S140342,R46100,visible-light driven photocatalysis,L86204,photocatalytic oxidation of acetone,"A novel and simple method for preparing highly photoactive nanocrystalline F--doped TiO2 photocatalyst with anatase and brookite phase was developed by hydrolysis of titanium tetraisopropoxide in a mixed NH4F−H2O solution. The prepared F--doped TiO2 powders were characterized by differential thermal analysis-thermogravimetry (DTA-TG), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), UV−vis absorption spectroscopy, photoluminescence spectra (PL), transmission electron microscopy (TEM), and BET surface areas. The photocatalytic activity was evaluated by the photocatalytic oxidation of acetone in air. The results showed that the crystallinity of anatase was improved upon F- doping. Moreover, fluoride ions not only suppressed the formation of brookite phase but also prevented phase transition of anatase to rutile. The F--doped TiO2 samples exhibited stronger absorption in the UV−visible range with a red shift in the band gap transition. The photocatalytic activity of F--doped TiO2 powders prep...",TRUE,noun phrase
R122,Chemistry,R46105,Improved photocatalytic activity of Sn 4+ doped TiO 2 nanoparticulate films prepared by plasma-enhanced chemical vapor deposition,S140393,R46106,visible-light driven photocatalysis,L86243,photodegradation of phenol,"Sn4+ ion doped TiO2 (TiO2–Sn4+) nanoparticulate films with a doping ratio of about 7∶100 [(Sn)∶(Ti)] were prepared by the plasma-enhanced chemical vapor deposition (PCVD) method. The doping mode (lattice Ti substituted by Sn4+ ions) and the doping energy level of Sn4+ were determined by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), surface photovoltage spectroscopy (SPS) and electric field induced surface photovoltage spectroscopy (EFISPS). It is found that the introduction of a doping energy level of Sn4+ ions is profitable to the separation of photogenerated carriers under both UV and visible light excitation. Characterization of the films with XRD and SPS indicates that after doping by Sn, more surface defects are present on the surface. Consequently, the photocatalytic activity for photodegradation of phenol in the presence of the TiO2–Sn4+ film is higher than that of the pure TiO2 film under both UV and visible light irradiation.",TRUE,noun phrase
R122,Chemistry,R45106,How fast is interfacial hole transfer? In situ monitoring of carrier dynamics in anatase TiO 2 nanoparticles by femtosecond laser spectroscopy,S336491,R45107,Has output,R70705,Transient absorption,"By comparing the transient absorption spectra of nanosized anatase TiO2 colloidal systems with and without SCN−, the broad absorption band around 520 nm observed immediately after band-gap excitation for the system without SCN− has been assigned to shallowly trapped holes. In the presence of SCN−, the absorption from the trapped holes at 520 nm cannot be observed because of the ultrafast interfacial hole transfer between TiO2 nanoparticles and SCN−. The hole and electron trapping times were estimated to be <50 and 260 fs, respectively, by the analysis of rise and decay dynamics of transient absorption spectra. The rate of the hole transfer from nanosized TiO2 colloid to SCN− is comparable to that of the hole trapping and the time of formation of a weakly coupled (SCN···SCN)•− is estimated to be ∽2.3 ps with 0.3 M KSCN. A further structural change to form a stable (SCN)2•− is observed in a timescale of 100∽150 ps, which is almost independent of the concentration of SCN−.",TRUE,noun phrase
R122,Chemistry,R45108,Identification of Reactive Species in Photoexcited Nanocrystalline TiO2 Films by Wide-Wavelength-Range (400−2500 nm) Transient Absorption Spectroscopy,S336524,R45109,Has output,R70720,Transient absorption,"Reactive species, holes, and electrons in photoexcited nanocrystalline TiO2 films were studied by transient absorption spectroscopy in the wavelength range from 400 to 2500 nm. The electron spectrum was obtained through a hole-scavenging reaction under steady-state light irradiation. The spectrum can be analyzed by a superposition of the free-electron and trapped-electron spectra. By subtracting the electron spectrum from the transient absorption spectrum, the spectrum of trapped holes was obtained. As a result, three reactive speciestrapped holes and free and trapped electronswere identified in the transient absorption spectrum. The reactivity of these species was evaluated through transient absorption spectroscopy in the presence of hole- and electron-scavenger molecules. The spectra indicate that trapped holes and electrons are localized at the surface of the particles and free electrons are distributed in the bulk.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5138,R4699,Material,R4700,a building,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5165,R4723,Material,R4724,a building,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5185,R4741,Material,R4742,a building,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very effi ciently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priori ty program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the m odel and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5670,R5144,Material,R5148,a building,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5143,R4699,Material,R4705,a consistent mathematical process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5170,R4723,Material,R4729,a consistent mathematical process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5190,R4741,Material,R4747,a consistent mathematical process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5676,R5144,Material,R5154,a consistent mathematical process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5683,R5144,Data,R5161,a formal description,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5677,R5144,Material,R5155,a prototype implementation,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5681,R5144,Data,R5159,certain tasks,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5147,R4699,Process,R4709,Co-operative Building Planning,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5174,R4723,Process,R4733,Co-operative Building Planning,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5194,R4741,Process,R4751,Co-operative Building Planning,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5668,R5144,Process,R5146,Co-operative Building Planning,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5672,R5144,Material,R5150,different technical disciplines,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5139,R4699,Material,R4701,Many participants,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5166,R4723,Material,R4725,Many participants,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5186,R4741,Material,R4743,Many participants,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5671,R5144,Material,R5149,Many participants,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5141,R4699,Material,R4703,modern information and communication technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5168,R4723,Material,R4727,modern information and communication technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5188,R4741,Material,R4745,modern information and communication technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5674,R5144,Material,R5152,modern information and communication technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5148,R4699,Process,R4710,Network-based Co-operative Planning Processes,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5175,R4723,Process,R4734,Network-based Co-operative Planning Processes,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5195,R4741,Process,R4752,Network-based Co-operative Planning Processes,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5669,R5144,Process,R5147,Network-based Co-operative Planning Processes,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5678,R5144,Material,R5156,Our project,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5144,R4699,Material,R4706,our relational process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5171,R4723,Material,R4730,our relational process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5191,R4741,Material,R4748,our relational process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5679,R5144,Material,R5157,our relational process model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5145,R4699,Data,R4707,"participants, tasks and building data","The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5172,R4723,Data,R4731,"participants, tasks and building data","The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5192,R4741,Data,R4749,"participants, tasks and building data","The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5682,R5144,Data,R5160,"participants, tasks and building data","The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5684,R5144,Data,R5162,the model,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5667,R5144,Process,R5145,The planning process,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5140,R4699,Material,R4702,the project leader,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5167,R4723,Material,R4726,the project leader,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5187,R4741,Material,R4744,the project leader,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5673,R5144,Material,R5151,the project leader,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5146,R4699,Data,R4708,the structural consistency and correctness,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5173,R4723,Data,R4732,the structural consistency and correctness,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5193,R4741,Data,R4750,the structural consistency and correctness,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5685,R5144,Data,R5163,the structural consistency and correctness,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5680,R5144,Material,R5158,the tool,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4693,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5142,R4699,Material,R4704,these technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4717,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5169,R4723,Material,R4728,these technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R4735,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5189,R4741,Material,R4746,these technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5675,R5144,Material,R5153,these technologies,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,noun phrase
R169,Climate,R67866,Emergent constraints on transient climate response (TCR) and equilibrium climate sensitivity (ECS) from historical warming in CMIP5 and CMIP6 models,S324466,R68116,Global and annual mean surface air temperature,R68118,Climate response,"Abstract. Climate sensitivity to CO2 remains the key uncertainty in projections of future climate change. Transient climate response (TCR) is the metric of temperature sensitivity that is most relevant to warming in the next few decades and contributes the biggest uncertainty to estimates of the carbon budgets consistent with the Paris targets. Equilibrium climate sensitivity (ECS) is vital for understanding longer-term climate change and stabilisation targets. In the IPCC 5th Assessment Report (AR5), the stated “likely” ranges (16 %–84 % confidence) of TCR (1.0–2.5 K) and ECS (1.5–4.5 K) were broadly consistent with the ensemble of CMIP5 Earth system models (ESMs) available at the time. However, many of the latest CMIP6 ESMs have larger climate sensitivities, with 5 of 34 models having TCR values above 2.5 K and an ensemble mean TCR of 2.0±0.4 K. Even starker, 12 of 34 models have an ECS value above 4.5 K. On the face of it, these latest ESM results suggest that the IPCC likely ranges may need revising upwards, which would cast further doubt on the feasibility of the Paris targets. Here we show that rather than increasing the uncertainty in climate sensitivity, the CMIP6 models help to constrain the likely range of TCR to 1.3–2.1 K, with a central estimate of 1.68 K. We reach this conclusion through an emergent constraint approach which relates the value of TCR linearly to the global warming from 1975 onwards. This is a period when the signal-to-noise ratio of the net radiative forcing increases strongly, so that uncertainties in aerosol forcing become progressively less problematic. We find a consistent emergent constraint on TCR when we apply the same method to CMIP5 models. Our constraints on TCR are in good agreement with other recent studies which analysed CMIP ensembles. The relationship between ECS and the post-1975 warming trend is less direct and also non-linear. However, we are able to derive a likely range of ECS of 1.9–3.4 K from the CMIP6 models by assuming an underlying emergent relationship based on a two-box energy balance model. Despite some methodological differences; this is consistent with a previously published ECS constraint derived from warming trends in CMIP5 models to 2005. Our results seem to be part of a growing consensus amongst studies that have applied the emergent constraint approach to different model ensembles and to different aspects of the record of global warming.",TRUE,noun phrase
R169,Climate,R67866,Emergent constraints on transient climate response (TCR) and equilibrium climate sensitivity (ECS) from historical warming in CMIP5 and CMIP6 models,S324471,R68116,Global and annual mean surface air temperature,R68119,Climate sensitivity,"Abstract. Climate sensitivity to CO2 remains the key uncertainty in projections of future climate change. Transient climate response (TCR) is the metric of temperature sensitivity that is most relevant to warming in the next few decades and contributes the biggest uncertainty to estimates of the carbon budgets consistent with the Paris targets. Equilibrium climate sensitivity (ECS) is vital for understanding longer-term climate change and stabilisation targets. In the IPCC 5th Assessment Report (AR5), the stated “likely” ranges (16 %–84 % confidence) of TCR (1.0–2.5 K) and ECS (1.5–4.5 K) were broadly consistent with the ensemble of CMIP5 Earth system models (ESMs) available at the time. However, many of the latest CMIP6 ESMs have larger climate sensitivities, with 5 of 34 models having TCR values above 2.5 K and an ensemble mean TCR of 2.0±0.4 K. Even starker, 12 of 34 models have an ECS value above 4.5 K. On the face of it, these latest ESM results suggest that the IPCC likely ranges may need revising upwards, which would cast further doubt on the feasibility of the Paris targets. Here we show that rather than increasing the uncertainty in climate sensitivity, the CMIP6 models help to constrain the likely range of TCR to 1.3–2.1 K, with a central estimate of 1.68 K. We reach this conclusion through an emergent constraint approach which relates the value of TCR linearly to the global warming from 1975 onwards. This is a period when the signal-to-noise ratio of the net radiative forcing increases strongly, so that uncertainties in aerosol forcing become progressively less problematic. We find a consistent emergent constraint on TCR when we apply the same method to CMIP5 models. Our constraints on TCR are in good agreement with other recent studies which analysed CMIP ensembles. The relationship between ECS and the post-1975 warming trend is less direct and also non-linear. However, we are able to derive a likely range of ECS of 1.9–3.4 K from the CMIP6 models by assuming an underlying emergent relationship based on a two-box energy balance model. Despite some methodological differences; this is consistent with a previously published ECS constraint derived from warming trends in CMIP5 models to 2005. Our results seem to be part of a growing consensus amongst studies that have applied the emergent constraint approach to different model ensembles and to different aspects of the record of global warming.",TRUE,noun phrase
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502214,R110144,Process,R110152,autism spectrum disorder,"Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,noun phrase
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502210,R110144,Method,R110148,eye tracking,"Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,noun phrase
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502209,R110144,Data,R110147,Visual atypicalities,"Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,noun phrase
R111778,Communication Neuroscience,R111735,Brain connectivity dynamics during social interaction reflect social network structure,S508308,R111739,Has approach,R111740,functional connectivity,"AbstractSocial ties are crucial for humans. Disruption of ties through social exclusion has a marked effect on our thoughts and feelings; however, such effects can be tempered by broader social network resources. Here, we use functional magnetic resonance imaging data acquired from 80 male adolescents to investigate how social exclusion modulates functional connectivity within and across brain networks involved in social pain and understanding the mental states of others (i.e., mentalizing). Furthermore, using objectively logged friendship network data, we examine how individual variability in brain reactivity to social exclusion relates to the density of participants’ friendship networks, an important aspect of social network structure. We find increased connectivity within a set of regions previously identified as a mentalizing system during exclusion relative to inclusion. These results are consistent across the regions of interest as well as a whole-brain analysis. Next, examining how social network characteristics are associated with task-based connectivity dynamics, participants who showed greater changes in connectivity within the mentalizing system when socially excluded by peers had less dense friendship networks. This work provides novel insight to understand how distributed brain systems respond to social and emotional challenges, and how such brain dynamics might vary based on broader social network characteristics.",TRUE,noun phrase
R111778,Communication Neuroscience,R111716,Engaged listeners: shared neural processing of powerful political speeches,S508331,R111718,Has approach,R111719,inter-subject correlation analysis,"Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.",TRUE,noun phrase
R111778,Communication Neuroscience,R136499,Increased attention but more efficient disengagement: Neuroscientific evidence for defensive processing of threatening health information.,S540219,R136501,has_analysis_approach,L380261,P300 amplitude,"OBJECTIVE Previous studies indicate that people respond defensively to threatening health information, especially when the information challenges self-relevant goals. The authors investigated whether reduced acceptance of self-relevant health risk information is already visible in early attention processes, that is, attention disengagement processes. DESIGN In a randomized, controlled trial with 29 smoking and nonsmoking students, a variant of Posner's cueing task was used in combination with the high-temporal resolution method of event-related brain potentials (ERPs). MAIN OUTCOME MEASURES Reaction times and P300 ERP. RESULTS Smokers showed lower P300 amplitudes in response to high- as opposed to low-threat invalid trials when moving their attention to a target in the opposite visual field, indicating more efficient attention disengagement processes. Furthermore, both smokers and nonsmokers showed increased P300 amplitudes in response to the presentation of high- as opposed to low-threat valid trials, indicating threat-induced attention-capturing processes. Reaction time measures did not support the ERP data, indicating that the ERP measure can be extremely informative to measure low-level attention biases in health communication. CONCLUSION The findings provide the first neuroscientific support for the hypothesis that threatening health information causes more efficient disengagement among those for whom the health threat is self-relevant.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690007,R172944,problem,R172945,emotional impact on underserved audiences,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690022,R172944,Material,R172960,feelings and emotions,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690009,R172944,Method,R172947,focus group,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690012,R172944,Method,R172950,focus groups,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690020,R172944,Material,R172958,not reached groups,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690018,R172944,Material,R172956,reached groups,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690019,R172944,Material,R172957,three different underserved audiences,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690021,R172944,Material,R172959,underserved audiences,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,noun phrase
R388,Comparative Literature,R8624,"Revisiting Style, a Key Concept in Literary Studies",S13529,R8625,Has result,R8628,Definition of style,"AbstractLanguage and literary studies have studied style for centuries, and even since the advent of ›stylistics‹ as a discipline at the beginning of the twentieth century, definitions of ›style‹ have varied heavily across time, space and fields. Today, with increasingly large collections of literary texts being made available in digital form, computational approaches to literary style are proliferating. New methods from disciplines such as corpus linguistics and computer science are being adopted and adapted in interrelated fields such as computational stylistics and corpus stylistics, and are facilitating new approaches to literary style.The relation between definitions of style in established linguistic or literary stylistics, and definitions of style in computational or corpus stylistics has not, however, been systematically assessed. This contribution aims to respond to the need to redefine style in the light of this new situation and to establish a clearer perception of both the overlap and the boundaries between ›mainstream‹ and ›computational‹ and/or ›empirical‹ literary stylistics. While stylistic studies of non-literary texts are currently flourishing, our contribution deliberately centers on those approaches relevant to ›literary stylistics‹. It concludes by proposing an operational definition of style that we hope can act as a common ground for diverse approaches to literary style, fostering transdisciplinary research.The focus of this contribution is on literary style in linguistics and literary studies (rather than in art history, musicology or fashion), on textual aspects of style (rather than production- or reception-oriented theories of style), and on a descriptive perspective (rather than a prescriptive or didactic one). Even within these limits, however, it appears necessary to build on a broad understanding of the various perspectives on style that have been adopted at different times and in different traditions. For this reason, the contribution first traces the development of the notion of style in three different traditions, those of German, Dutch and French language and literary studies. Despite the numerous links between each other, and between each of them to the British and American traditions, these three traditions each have their proper dynamics, especially with regard to the convergence and/or confrontation between mainstream and computational stylistics. For reasons of space and coherence, the contribution is limited to theoretical developments occurring since 1945.The contribution begins by briefly outlining the range of definitions of style that can be encountered across traditions today: style as revealing a higher-order aesthetic value, as the holistic ›gestalt‹ of single texts, as an expression of the individuality of an author, as an artifact presupposing choice among alternatives, as a deviation from a norm or reference, or as any formal property of a text. The contribution then traces the development of definitions of style in each of the three traditions mentioned, with the aim of giving a concise account of how, in each tradition, definitions of style have evolved over time, with special regard to the way such definitions relate to empirical, quantitative or otherwise computational approaches to style in literary texts. It will become apparent how, in each of the three traditions, foundational texts continue to influence current discussions on literary style, but also how stylistics has continuously reacted to broader developments in cultural and literary theory, and how empirical, quantitative or computational approaches have long existed, usually in parallel to or at the margins of mainstream stylistics. The review will also reflect the lines of discussion around style as a property of literary texts – or of any textual entity in general.The perspective on three stylistic traditions is accompanied by a more systematic perspective. The rationale is to work towards a common ground for literary scholars and linguists when talking about (literary) style, across traditions of stylistics, with respect for established definitions of style, but also in light of the digital paradigm. Here, we first show to what extent, at similar or different moments in time, the three traditions have developed comparable positions on style, and which definitions out of the range of possible definitions have been proposed or promoted by which authors in each of the three traditions.On the basis of this synthesis, we then conclude by proposing an operational definition of style that is an attempt to provide a common ground for both mainstream and computational literary stylistics. This definition is discussed in some detail in order to explain not only what is meant by each term in the definition, but also how it relates to computational analyses of style – and how this definition aims to avoid some of the pitfalls that can be perceived in earlier definitions of style. Our definition, we hope, will be put to use by a new generation of computational, quantitative, and empirical studies of style in literary texts.",TRUE,noun phrase
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5368,R4893,Material,R4898,a neural network architecture,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,noun phrase
R277,Computational Engineering,R41026,Predicting Infections Using Computational Intelligence – A Systematic Review,S130198,R41032,Has result,R41051,Machine Learning,"Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.",TRUE,noun phrase
R277,Computational Engineering,R41026,Predicting Infections Using Computational Intelligence – A Systematic Review,S327207,R68883,Has result,R68890,Machine learning techniques,"Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.",TRUE,noun phrase
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5365,R4893,Data,R4895,open domain Wikipedia summaries,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,noun phrase
R277,Computational Engineering,R108304,CATS: Characterizing automation of Twitter spammers,S493419,R108306,Method,L357557,Supervised learning algorithms,"Twitter, with its rising popularity as a micro-blogging website, has inevitably attracted the attention of spammers. Spammers use myriad of techniques to evade security mechanisms and post spam messages, which are either unwelcome advertisements for the victim or lure victims in to clicking malicious URLs embedded in spam tweets. In this paper, we propose several novel features capable of distinguishing spam accounts from legitimate accounts. The features analyze the behavioral and content entropy, bait-techniques, and profile vectors characterizing spammers, which are then fed into supervised learning algorithms to generate models for our tool, CATS. Using our system on two real-world Twitter data sets, we observe a 96% detection rate with about 0.8% false positive rate beating state of the art detection approach. Our analysis reveals detection of more than 90% of spammers with less than five tweets and about half of the spammers detected with only a single tweet. Our feature computation has low latency and resource requirement making fast detection feasible. Additionally, we cluster the unknown spammers to identify and understand the prevalent spam campaigns on Twitter.",TRUE,noun phrase
R277,Computational Engineering,R41026,Predicting Infections Using Computational Intelligence – A Systematic Review,S327205,R68883,Has method,R68888,Systematic Literature Review,"Infections encompass a set of medical conditions of very diverse kinds that can pose a significant risk to health, and even death. As with many other diseases, early diagnosis can help to provide patients with proper care to minimize the damage produced by the disease, or to isolate them to avoid the risk of spread. In this context, computational intelligence can be useful to predict the risk of infection in patients, raising early alarms that can aid medical teams to respond as quick as possible. In this paper, we survey the state of the art on infection prediction using computer science by means of a systematic literature review. The objective is to find papers where computational intelligence is used to predict infections in patients using physiological data as features. We have posed one major research question along with nine specific subquestions. The whole review process is thoroughly described, and eight databases are considered which index most of the literature published in different scholarly formats. A total of 101 relevant documents have been found in the period comprised between 2003 and 2019, and a detailed study of these documents is carried out to classify the works and answer the research questions posed, resulting to our best knowledge in the most comprehensive study of its kind. We conclude that the most widely addressed infection is by far sepsis, followed by Clostridium difficile infection and surgical site infections. Most works use machine learning techniques, from which logistic regression, support vector machines, random forest and naive Bayes are the most common. Some machine learning works provide some ideas on the problems of small data and class imbalance, which can be of interest. 
The current systematic literature review shows that automatic diagnosis of infectious diseases using computational intelligence is well documented in the medical literature.",TRUE,noun phrase
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5367,R4893,Material,R4897,underserved languages,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,noun phrase
R322,Computational Linguistics,R155259,Leveraging Abstract Meaning Representation for Knowledge Base Question Answering,S621395,R155261,Techniques/Methods,L427842,Abstract Meaning Representation,"Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges including complex question understanding, necessity for reasoning, and lack of large end-to-end training datasets. In this work, we propose Neuro-Symbolic Question Answering (NSQA), a modular KBQA system, that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a simple yet effective graph transformation approach to convert AMR parses into candidate logical queries that are aligned to the KB; (3) a pipeline-based approach which integrates multiple, reusable modules that are trained specifically for their individual tasks (semantic parser, entity and relationship linkers, and neuro-symbolic reasoner) and do not require end-to-end training data. NSQA achieves state-of-the-art performance on two prominent KBQA datasets based on DBpedia (QALD-9 and LC-QuAD 1.0). Furthermore, our analysis emphasizes that AMR is a powerful tool for KBQA systems.",TRUE,noun phrase
R322,Computational Linguistics,R147977,The ACL Anthology Network,S630005,R147979,consists,R157113,Author collaboration network,"We introduce the ACL Anthology Network (AAN), a manually curated networked database of citations, collaborations, and summaries in the field of Computational Linguistics. We also present a number of statistics about the network including the most cited authors, the most central collaborators, as well as network statistics about the paper citation, author citation, and author collaboration networks.",TRUE,noun phrase
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593933,R148133,Dataset name,R148135,Gene Regulation Event Corpus,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun phrase
R322,Computational Linguistics,R148131,Construction of an annotated corpus to support biomedical information extraction,S593932,R148133,Other resources,R148134,Gene Regulation Ontology,"Abstract Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants ( arguments ) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. 
It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes.",TRUE,noun phrase
R322,Computational Linguistics,R147545,GENIA corpus--a semantically annotated corpus for bio-textmining,S591894,R147547,Dataset name,R147548,GENIA corpus,"MOTIVATION Natural language processing (NLP) methods are regarded as being useful to raise the potential of text mining from biological literature. The lack of an extensively annotated corpus of this literature, however, causes a major bottleneck for applying NLP techniques. GENIA corpus is being developed to provide reference materials to let NLP techniques work for bio-textmining. RESULTS GENIA corpus version 3.0 consisting of 2000 MEDLINE abstracts has been released with more than 400,000 words and almost 100,000 annotations for biological terms.",TRUE,noun phrase
R322,Computational Linguistics,R163869,Syntax Annotation for the GENIA Corpus,S655658,R163871,Dataset name,R147548,GENIA corpus,"Linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from ""nominal"" information such as named entity to ""verbal"" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XMLbased format based on Penn Treebank II (PTB) scheme. Inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding to linguistic phenomena particular to scientific texts.",TRUE,noun phrase
R322,Computational Linguistics,R164218,The GENIA corpus: an annotated research abstract corpus in molecular biology domain,S655657,R164220,Dataset name,R147548,GENIA corpus,"With the information overload in genome-related field, there is an increasing need for natural language processing technology to extract information from literature and various attempts of information extraction using NLP has been being made. We are developing the necessary resources including domain ontology and annotated corpus from research abstracts in MEDLINE database (GENIA corpus). We are building the ontology and the corpus simultaneously, using each other. In this paper we report on our new corpus, its ontological basis, annotation scheme, and statistics of annotated objects. We also describe the tools used for corpus annotation and management.",TRUE,noun phrase
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595298,R148452,Dataset name,R148471,ITI TXM corpora,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun phrase
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595293,R148452,Other resources,R148003,NCBI Taxonomy,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun phrase
R322,Computational Linguistics,R163869,Syntax Annotation for the GENIA Corpus,S654319,R163871,Annotation scheme,R163874,Penn Treebank II (PTB) scheme,"Linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing (NLP) tools for biotextmining. As the focus of information extraction is shifting from ""nominal"" information such as named entity to ""verbal"" information such as function and interaction of substances, application of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sentences is in demand. A subset of the GENIA corpus consisting of 500 MEDLINE abstracts has been annotated for syntactic structure in an XMLbased format based on Penn Treebank II (PTB) scheme. Inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be stably done by linguists without much knowledge of biology with appropriate guidelines regarding to linguistic phenomena particular to scientific texts.",TRUE,noun phrase
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595290,R148452,Data domains,R114514,protein-protein interactions,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun phrase
R322,Computational Linguistics,R148450,The ITI TXM corpora: Tissue expressions and protein-protein interactions,S595291,R148452,Data domains,R148468,tissue expression,"We report on two large corpora of semantically annotated full-text biomedical research papers created in order to develop information extraction (IE) tools for the TXM project. Both corpora have been annotated with a range of entities (CellLine, Complex, DevelopmentalStage, Disease, DrugCompound, ExperimentalMethod, Fragment, Fusion, GOMOP, Gene, Modification, mRNAcDNA, Mutant, Protein, Tissue), normalisations of selected entities to the NCBI Taxonomy, RefSeq, EntrezGene, ChEBI and MeSH and enriched relations (protein-protein interactions, tissue expressions and fragment- or mutant-protein relations). While one corpus targets protein-protein interactions (PPIs), the focus of the other is on tissue expressions (TEs). This paper describes the selected markables and the annotation process of the ITI TXM corpora, and provides a detailed breakdown of the inter-annotator agreement (IAA).",TRUE,noun phrase
R322,Computational Linguistics,R164455,BioNLP Shared Task 2011 - Bacteria Biotope,S659853,R165466,Data coverage,R164461,Web pages,"This paper presents the Bacteria Biotope task as part of the BioNLP Shared Tasks 2011. The Bacteria Biotope task aims at extracting the location of bacteria from scientific Web pages. Bacteria location is a crucial knowledge in biology for phenotype studies. The paper details the corpus specification, the evaluation metrics, summarizes and discusses the participant results.",TRUE,noun phrase
R231,Computer and Systems Architecture,R175444,Efficient synthesis of physically valid human motion,S695265,R175446,has parameters,L467399,Joint torques,"Optimization is a promising way to generate new animations from a minimal amount of input data. Physically based optimization techniques, however, are difficult to scale to complex animated characters, in part because evaluating and differentiating physical quantities becomes prohibitively slow. Traditional approaches often require optimizing or constraining parameters involving joint torques; obtaining first derivatives for these parameters is generally an O(D^2) process, where D is the number of degrees of freedom of the character. In this paper, we describe a set of objective functions and constraints that lead to linear time analytical first derivatives. The surprising finding is that this set includes constraints on physical validity, such as ground contact constraints. Considering only constraints and objective functions that lead to linear time first derivatives results in fast per-iteration computation times and an optimization problem that appears to scale well to more complex characters. We show that qualities such as squash-and-stretch that are expected from physically based optimization result from our approach. Our animation system is particularly useful for synthesizing highly dynamic motions, and we show examples of swinging and leaping motions for characters having from 7 to 22 degrees of freedom.",TRUE,noun phrase
R134,Computer and Systems Architecture,R108316,Analyzing Knowledge Transfer Effectiveness--An Agent-Oriented Modeling Approach,S493480,R108318,Approach name,L357605,Knowledge Transfer,"Facilitating the transfer of knowledge between knowledge workers represents one of the main challenges of knowledge management. Knowledge transfer instruments, such as the experience factory concept, represent means for facilitating knowledge transfer in organizations. As past research has shown, effectiveness of knowledge transfer instruments strongly depends on their situational context, on the stakeholders involved in knowledge transfer, and on their acceptance, motivation and goals. In this paper, we introduce an agent-oriented modeling approach for analyzing the effectiveness of knowledge transfer instruments in the light of (potentially conflicting) stakeholders' goals. We apply this intentional approach to the experience factory concept and analyze under which conditions it can fail, and how adaptations to the experience factory can be explored in a structured way",TRUE,noun phrase
R134,Computer and Systems Architecture,R107933,The Ontology-based Business Architecture Engineering Framework,S491712,R107935,Approach name,L356332,ORG-Master framework,"Business architecture became a well-known tool for business transformations. According to a recent study by Forrester, 50 percent of the companies polled claimed to have an active business architecture initiative, whereas 20 percent were planning to engage in business architecture work in the near future. However, despite the high interest in BA, there is not yet a common understanding of the main concepts. There is a lack for the business architecture framework which provides a complete metamodel, suggests methodology for business architecture development and enables tool support for it. The ORG- Master framework is designed to solve this problem using the ontology as a core of the metamodel. This paper describes the ORG-Master framework, its implementation and dissemination.",TRUE,noun phrase
R239,Computer Engineering,R74012,Harvesting Information from Captions for Weakly Supervised Semantic Segmentation,S340044,R74014,Major Contributions,R74016,weakly supervised image segmentation,"Since acquiring pixel-wise annotations for training convolutional neural networks for semantic image segmentation is time-consuming, weakly supervised approaches that only require class tags have been proposed. In this work, we propose another form of supervision, namely image captions as they can be found on the Internet. These captions have two advantages. They do not require additional curation as it is the case for the clean class tags used by current weakly supervised approaches and they provide textual context for the classes present in an image. To leverage such textual context, we deploy a multi-modal network that learns a joint embedding of the visual representation of the image and the textual representation of the caption. The network estimates text activation maps (TAMs) for class names as well as compound concepts, i.e. combinations of nouns and their attributes. The TAMs of compound concepts describing classes of interest substantially improve the quality of the estimated class activation maps which are then used to train a network for semantic segmentation. We evaluate our method on the COCO dataset where it achieves state of the art results for weakly supervised image segmentation.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156064,R51015,Material,R51024,African continent,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156061,R51015,Material,R51021,African languages,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156062,R51015,Material,R51022,available resources,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R131085,Discrete Flows: Invertible Generative Models of Discrete Data,S521614,R131086,has model,R121040,Bipartite Flow,"While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.",TRUE,noun phrase
R132,Computer Sciences,R133207,Deep Exploration via Bootstrapped DQN,S528385,R133208,has model,R124902,Bootstrapped DQN,"Efficient exploration in complex environments remains a major challenge for reinforcement learning. We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.",TRUE,noun phrase
R132,Computer Sciences,R130572,Compressive Transformers for Long-Range Sequence Modelling,S519209,R130577,has model,R120911,Compressive Transformer,"We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.",TRUE,noun phrase
R132,Computer Sciences,R131867,Simple Unsupervised Summarization by Contextual Matching,S524089,R131868,has model,R124757,Contextual Match,"We propose an unsupervised method for sentence summarization using only language modeling. The approach employs two language models, one that is generic (i.e. pretrained), and the other that is specific to the target domain. We show that by using a product-of-experts criteria these are enough for maintaining continuous contextual matching while maintaining output fluency. Experiments on both abstractive and extractive sentence summarization data sets show promising results of our method without being exposed to any paired data.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156058,R51015,Data,R51018,lack of focus,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R129787,Linguistic Input Features Improve Neural Machine Translation,S515977,R129788,has model,R117251,Linguistic Input Features,"Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English->German, and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156057,R51015,Data,R51017,multiple factors,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156060,R51015,Process,R51020,Natural Language Processing (NLP),"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R130563,Improving Transformer Models by Reordering their Sublayers,S519173,R130568,has model,R120975,Sandwich Transformer,"Multilayer transformer networks consist of interleaved self-attention and feedforward sublayers. Could ordering the sublayers in a different pattern lead to better performance? We generate randomly ordered transformers and train them with the language modeling objective. We observe that some of these models are able to achieve better performance than the interleaved baseline, and that those successful variants tend to have more self-attention at the bottom and more feedforward sublayers at the top. We propose a new transformer pattern that adheres to this property, the sandwich transformer, and show that it improves perplexity on multiple word-level and character-level language modeling benchmarks, at no cost in parameters, memory, or training time. However, the sandwich reordering pattern does not guarantee performance gains across every task, as we demonstrate on machine translation models. Instead, we suggest that further exploration of task-specific sublayer reorderings is needed in order to unlock additional gains.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156059,R51015,Data,R51019,sheer language complexity,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R51013,Masakhane–Machine Translation For Africa,S156056,R51015,Data,R51016,small portion,"Africa has over 2000 languages. Despite this, African languages account for a small portion of available resources and publications in Natural Language Processing (NLP). This is due to multiple factors, including: a lack of focus from government and funding, discoverability, a lack of community, sheer language complexity, difficulty in reproducing papers and no benchmarks to compare techniques. To begin to address the identified problems, MASAKHANE, an open-source, continent-wide, distributed, online research effort for machine translation for African languages, was founded. In this paper, we discuss our methodology for building the community and spurring research from the African continent, as well as outline the success of the community in terms of addressing the identified problems affecting African NLP.",TRUE,noun phrase
R132,Computer Sciences,R130909,Trellis Networks for Sequence Modeling,S520703,R130910,has model,R120945,Trellis Network,"We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outperform the current state of the art methods on a variety of challenging benchmarks, including word-level language modeling and character-level language modeling tasks, and stress tests designed to evaluate long-term memory retention. The code is available at this https URL .",TRUE,noun phrase
R132,Computer Sciences,R134920,The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic,S533654,R134921,has model,R126155,Tsetlin Machine,"Although simple individually, artificial neurons provide state-of-the-art performance when interconnected in deep networks. Arguably, the Tsetlin Automaton is an even simpler and more versatile learning mechanism, capable of solving the multi-armed bandit problem. Merely by means of a single integer as memory, it learns the optimal action in stochastic environments through increment and decrement operations. In this paper, we introduce the Tsetlin Machine, which solves complex pattern recognition problems with propositional formulas, composed by a collective of Tsetlin Automata. To eliminate the longstanding problem of vanishing signal-to-noise ratio, the Tsetlin Machine orchestrates the automata using a novel game. Further, both inputs, patterns, and outputs are expressed as bits, while recognition and learning rely on bit manipulation, simplifying computation. Our theoretical analysis establishes that the Nash equilibria of the game align with the propositional formulas that provide optimal pattern recognition accuracy. This translates to learning without local optima, only global ones. In five benchmarks, the Tsetlin Machine provides competitive accuracy compared with SVMs, Decision Trees, Random Forests, Naive Bayes Classifier, Logistic Regression, and Neural Networks. We further demonstrate how the propositional formulas facilitate interpretation. We believe the combination of high accuracy, interpretability, and computational simplicity makes the Tsetlin Machine a promising tool for a wide range of domains.",TRUE,noun phrase
R132,Computer Sciences,R36091,A Large Public Corpus of Web Tables containing Time and Context Metadata,S123500,R36092,Name,L74333,Web Tables,"The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potentials of Web tables. However, comparing the performance of the different systems is difficult as up till now each system is evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-dependent data, such as alternative population numbers for a city [8].",TRUE,noun phrase
R132,Computer Sciences,R36091,A Large Public Corpus of Web Tables containing Time and Context Metadata,S123502,R36092,Scope,R36033,Web tables,"The Web contains vast amounts of HTML tables. Most of these tables are used for layout purposes, but a small subset of the tables is relational, meaning that they contain structured data describing a set of entities [2]. As these relational Web tables cover a very wide range of different topics, there is a growing body of research investigating the utility of Web table data for completing cross-domain knowledge bases [6], for extending arbitrary tables with additional attributes [7, 4], as well as for translating data values [5]. The existing research shows the potentials of Web tables. However, comparing the performance of the different systems is difficult as up till now each system is evaluated using a different corpus of Web tables and as most of the corpora are owned by large search engine companies and are thus not accessible to the public. In this poster, we present a large public corpus of Web tables which contains over 233 million tables and has been extracted from the July 2015 version of the CommonCrawl. By publishing the corpus as well as all tools that we used to extract it from the crawled data, we intend to provide a common ground for evaluating Web table systems. The main difference of the corpus compared to an earlier corpus that we extracted from the 2012 version of the CommonCrawl as well as the corpus extracted by Eberius et al. [3] from the 2014 version of the CommonCrawl is that the current corpus contains a richer set of metadata for each table. This metadata includes table-specific information such as table orientation, table caption, header row, and key column, but also context information such as the text before and after the table, the title of the HTML page, as well as timestamp information that was found before and after the table. The context information can be useful for recovering the semantics of a table [7]. The timestamp information is crucial for fusing time-dependent data, such as alternative population numbers for a city [8].",TRUE,noun phrase
R132,Computer Sciences,R134916,The Weighted Tsetlin Machine: Compressed Representations with Weighted Clauses,S533639,R134917,has model,R126154,Weighted Tsetlin Machine,"The Tsetlin Machine (TM) is an interpretable mechanism for pattern recognition that constructs conjunctive clauses from data. The clauses capture frequent patterns with high discriminating power, providing increasing expression power with each additional clause. However, the resulting accuracy gain comes at the cost of linear growth in computation time and memory usage. In this paper, we present the Weighted Tsetlin Machine (WTM), which reduces computation time and memory usage by weighting the clauses. Real-valued weighting allows one clause to replace multiple, and supports fine-tuning the impact of each clause. Our novel scheme simultaneously learns both the composition of the clauses and their weights. Furthermore, we increase training efficiency by replacing $k$ Bernoulli trials of success probability $p$ with a uniform sample of average size $p k$, the size drawn from a binomial distribution. In our empirical evaluation, the WTM achieved the same accuracy as the TM on MNIST, IMDb, and Connect-4, requiring only $1/4$, $1/3$, and $1/50$ of the clauses, respectively. With the same number of clauses, the WTM outperformed the TM, obtaining peak test accuracies of respectively $98.63\%$, $90.37\%$, and $87.91\%$. Finally, our novel sampling scheme reduced sample generation time by a factor of $7$.",TRUE,noun phrase
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558047,R139813,has subject domain,R139817,‘digital heritage interpretation’,"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun phrase
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558066,R139822,has subject domain,R139828,“dark” heritage,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun phrase
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558067,R139822,has subject domain,R139829,collective memories,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558926,R139978,Has method,R139979,comparative analysis ,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558297,R139855,Material,R139861,contemporary cities,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R139761,The Story of the Markham Car Collection: A Cross-Platform Panoramic Tour of Contested Heritage,S557944,R139763,has subject domain,L392181,contested heritage,"In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph’s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum’s veteran and vintage car collection. The production’s usability was investigated involving five experts before it was published online and the general users’ experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.",TRUE,noun phrase
R417,Cultural History,R139800,A systematic review of literature on contested heritage,S558029,R139803,has subject domain,R139807,Contested heritage,"ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining contested heritage forward.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558934,R139978,has subject domain,R139987,cultural heritage,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558300,R139855,Material,R139864,cultural heritage assets,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558294,R139855,Process,R139858,cultural heritage preservation,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R139761,The Story of the Markham Car Collection: A Cross-Platform Panoramic Tour of Contested Heritage,S557936,R139763,Models technology,R139764,cylindrical panoramas and rich media,"In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph’s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum’s veteran and vintage car collection. The production’s usability was investigated involving five experts before it was published online and the general users’ experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558939,R139978,uses framework,R139992,Digital Strategy Canvas,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558041,R139813,Data,R139814,end-users’ interpretation level,"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun phrase
R417,Cultural History,R139927,Smart Cities and Historical Heritage,S558615,R139929,has subject domain,R139932,energy efficiency,"The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency.",TRUE,noun phrase
R417,Cultural History,R139761,The Story of the Markham Car Collection: A Cross-Platform Panoramic Tour of Contested Heritage,S557937,R139763,Data,R139765,general users’ experience,"In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph’s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum’s veteran and vintage car collection. The production’s usability was investigated involving five experts before it was published online and the general users’ experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.",TRUE,noun phrase
R417,Cultural History,R139784,The Management Of Heritage In Contested Cross-Border Contexts: Emerging Research On The Island Of Ireland,S557981,R139785,has subject domain,L392193,heritage diplomacy,"This paper introduces the recently begun REINVENT research project focused on the management of heritage in the cross-border cultural landscape of Derry/Londonderry. The importance of facilitating dialogue over cultural heritage to the maintenance of ‘thin’ borders in contested cross-border contexts is underlined in the paper, as is the relatively favourable strategic policy context for progressing ‘heritage diplomacy’ on the island of Ireland. However, it is argued that more inclusive and participatory approaches to the management of heritage are required to assist in the mediation of contestation, particularly accommodating a greater diversity of ‘non-expert’ opinion, in addition to helping identify value conflicts and dissonance. The application of digital technologies in the form of Public Participation Geographic Information Systems (PPGIS) is proposed, and this is briefly discussed in relation to some of the expected benefits and methodological challenges that must be addressed in the REINVENT project. The paper concludes by emphasising the importance of dialogue and knowledge exchange between academia and heritage policymakers/practitioners.",TRUE,noun phrase
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558048,R139813,has subject domain,R139818,Heritage Interpretation,"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun phrase
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558061,R139822,Material,R139823,heritage online,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun phrase
R417,Cultural History,R139927,Smart Cities and Historical Heritage,S558614,R139929,Material,R139931,historical city,"The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency.",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558296,R139855,Material,R139860,Information Communication Technologies (ICT),"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R140030,World Heritage meets Smart City in an Urban-Educational Hackathon in Rauma,S559010,R140034,has subject domain,R140039,innovation practices,"During recent years, the ‘smart city’ concept has emerged in literature (e.g., Kunttu, 2019; Markkula & Kune, 2018; Öberg, Graham, & Hennelly, 2017; Visvizi & Lytras, 2018). Inherently, the smart city concept includes urban innovation; therefore, simply developing and applying technology is not enough for success. For cities to be 'smart,' they also have to be innovative, apply new ways of thinking among businesses, citizens, and academia, as well as integrate diverse actors, especially universities, in their innovation practices (Kunttu, 2019; Markkula & Kune, 2018).",TRUE,noun phrase
R417,Cultural History,R139736,Public History and Contested Heritage: Archival Memories of the Bombing of Italy,S557898,R139743,Institution,R139745,International Bomber Command Centre,"This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.",TRUE,noun phrase
R417,Cultural History,R139810,Digital heritage interpretation: a conceptual framework,S558051,R139813,Has result,R139819,interpretive framework (PrEDiC),"ABSTRACT ‘Heritage Interpretation’ has always been considered as an effective learning, communication and management tool that increases visitors’ awareness of and empathy to heritage sites or artefacts. Yet the definition of ‘digital heritage interpretation’ is still wide and so far, no significant method and objective are evident within the domain of ‘digital heritage’ theory and discourse. Considering ‘digital heritage interpretation’ as a process rather than as a tool to present or communicate with end-users, this paper presents a critical application of a theoretical construct ascertained from multiple disciplines and explicates four objectives for a comprehensive interpretive process. A conceptual model is proposed and further developed into a conceptual framework with fifteen considerations. This framework is then implemented and tested on an online platform to assess its impact on end-users’ interpretation level. We believe the presented interpretive framework (PrEDiC) will help heritage professionals and media designers to develop interpretive heritage project.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558929,R139978,has smart city instance,R139982,Karlsruhe (Germany),"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558960,R139995,has smart city instance,R140008,King Abdullah Economic City,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558930,R139978,Process,R139983,liveability and sustainable urban development,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139761,The Story of the Markham Car Collection: A Cross-Platform Panoramic Tour of Contested Heritage,S557942,R139763,museum collection,R139766,Markham car collection,"In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph’s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum’s veteran and vintage car collection. The production’s usability was investigated involving five experts before it was published online and the general users’ experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.",TRUE,noun phrase
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558065,R139822,has subject domain,R139827,participatory culture,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558295,R139855,Process,R139859,participatory valuation and management,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558965,R139995,has smart city instance,R140013,PlanIT Valley,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,noun phrase
R417,Cultural History,R139736,Public History and Contested Heritage: Archival Memories of the Bombing of Italy,S557897,R139743,Institution,R139744,RAF Bomber Command,"This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.",TRUE,noun phrase
R417,Cultural History,R139927,Smart Cities and Historical Heritage,S558616,R139929,has subject domain,R139933,Smart City,"The theme of smart grids will connote in the immediate future the production and distribution of electricity, integrating effectively and in a sustainable way energy deriving from large power stations with that distributed and supplied by renewable sources. In programmes of urban redevelopment, however, the historical city has not yet been subject to significant experimentation, also due to the specific safeguard on this kind of Heritage. This reflection opens up interesting new perspectives of research and operations, which could significantly contribute to the pursuit of the aims of the Smart City. This is the main goal of the research here presented and focused on the binomial renovation of a historical complex/enhancement and upgrading of its energy efficiency.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558938,R139978,Data,R139991,social and cultural values,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558063,R139822,has communication channel,R139825,social media,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun phrase
R417,Cultural History,R139761,The Story of the Markham Car Collection: A Cross-Platform Panoramic Tour of Contested Heritage,S557945,R139763,Has result,R139768,storyteller panorama tour,"In this article, we share our experiences of using digital technologies and various media to present historical narratives of a museum object collection aiming to provide an engaging experience on multiple platforms. Based on P. Joseph’s article, Dawson presented multiple interpretations and historical views of the Markham car collection across various platforms using multimedia resources. Through her creative production, she explored how to use cylindrical panoramas and rich media to offer new ways of telling the controversial story of the contested heritage of a museum’s veteran and vintage car collection. The production’s usability was investigated involving five experts before it was published online and the general users’ experience was investigated. In this article, we present an important component of findings which indicates that virtual panorama tours featuring multimedia elements could be successful in attracting new audiences and that using this type of storytelling technique can be effective in the museum sector. The storyteller panorama tour presented here may stimulate GLAM (galleries, libraries, archives, and museums) professionals to think of new approaches, implement new strategies or services to engage their audiences more effectively. The research may ameliorate the education of future professionals as well.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558936,R139978,has subject domain,R139989,sustainable urban development,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139800,A systematic review of literature on contested heritage,S558017,R139803,Has method,R139804,systematic literature review,"ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining contested heritage forward.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558927,R139978,has smart city instance,R139980,Tarragona (Spain),"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139975,CULTURAL HERITAGE IN SMART CITY ENVIRONMENTS: THE UPDATE,S558933,R139978,has subject domain,R139986,touristic branding and promotion,"Abstract. In 2017 we published a seminal research study in the International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences about how smart city tools, solutions and applications underpinned historical and cultural heritage of cities at that time (Angelidou et al. 2017). We now return to investigate the progress that has been made during the past three years, and specifically whether the weak substantiation of cultural heritage in smart city strategies that we observed in 2017 has been improved. The newest literature suggests that smart cities should capitalize on local strengths and give prominence to local culture and traditions and provides a handful of solutions to this end. However, a more thorough examination of what has been actually implemented reveals a (still) rather immature approach. The smart city cases that were selected for the purposes of this research include Tarragona (Spain), Budapest (Hungary) and Karlsruhe (Germany). For each one we collected information regarding the overarching structure of the initiative, the positioning of cultural heritage and the inclusion of heritage-related smart city applications. We then performed a comparative analysis based on a simplified version of the Digital Strategy Canvas. Our findings suggest that a rich cultural heritage and a broader strategic focus on touristic branding and promotion are key ingredients of smart city development in this domain; this is a commonality of all the investigated cities. Moreover, three different strategy architectures emerge, representing the different interplays among the smart city, cultural heritage and sustainable urban development. We conclude that a new generation of smart city initiatives is emerging, in which cultural heritage is of increasing importance. This generation tends to associate cultural heritage with social and cultural values, liveability and sustainable urban development.",TRUE,noun phrase
R417,Cultural History,R139820,"Digital Media, Participatory Culture, and Difficult Heritage: Online Remediation and the Trans-Atlantic Slave Trade",S558064,R139822,has subject domain,R139826,trans-Atlantic slave trade,"A diverse and changing array of digital media have been used to present heritage online. While websites have been created for online heritage outreach for nearly two decades, social media is employed increasingly to complement and in some cases replace the use of websites. These same social media are used by stakeholders as a form of participatory culture, to create communities and to discuss heritage independently of narratives offered by official institutions such as museums, memorials, and universities. With difficult or “dark” heritage—places of memory centering on deaths, disasters, and atrocities—these online representations and conversations can be deeply contested. Examining the websites and social media of difficult heritage, with an emphasis on the trans-Atlantic slave trade provides insights into the efficacy of online resources provided by official institutions, as well as the unofficial, participatory communities of stakeholders who use social media for collective memories.",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558301,R139855,has subject domain,R139865,valuation techniques,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558298,R139855,Material,R139862,virtual environments,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R417,Cultural History,R139853,SMART CITIES AND HERITAGE CONSERVATION: DEVELOPING A SMARTHERITAGE AGENDA FOR SUSTAINABLE INCLUSIVE COMMUNITIES,S558299,R139855,Material,R139863,virtual globes,"This paper discusses the potential of current advancements in Information Communication Technologies (ICT) for cultural heritage preservation, valorization and management within contemporary cities. The paper highlights the potential of virtual environments to assess the impacts of heritage policies on urban development. It does so by discussing the implications of virtual globes and crowdsourcing to support the participatory valuation and management of cultural heritage assets. To this purpose, a review of available valuation techniques is here presented together with a discussion on how these techniques might be coupled with ICT tools to promote inclusive governance. ",TRUE,noun phrase
R233,Data Storage Systems,R135474,An Ontology-Based Approach for Curriculum Mapping in Higher Education,S535845,R135476,keywords,R135485,curriculum mapping,"Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.",TRUE,noun phrase
R233,Data Storage Systems,R135474,An Ontology-Based Approach for Curriculum Mapping in Higher Education,S535846,R135476,keywords,R135486,curriculum ontology,"Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.",TRUE,noun phrase
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S538796,R136100,keywords,R136104,educational technologies,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun phrase
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S538736,R136069,Has evaluation,R136078,Gold standard,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun phrase
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S538798,R136100,keywords,R136105,instructional design,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun phrase
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S538800,R136100,keywords,R136107,instructional process,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,noun phrase
R233,Data Storage Systems,R136067,EduCOR: An Educational and Career-Oriented Recommendation Ontology,S539306,R136069,Personalisation features,R136267,Learning Goal,"Abstract With the increased dependence on online learning platforms and educational resource repositories, a unified representation of digital learning resources becomes essential to support a dynamic and multi-source learning experience. We introduce the EduCOR ontology, an educational, career-oriented ontology that provides a foundation for representing online learning resources for personalised learning systems. The ontology is designed to enable learning material repositories to offer learning path recommendations, which correspond to the user’s learning goals and preferences, academic and psychological parameters, and labour-market skills. We present the multiple patterns that compose the EduCOR ontology, highlighting its cross-domain applicability and integrability with other ontologies. A demonstration of the proposed ontology on the real-life learning platform eDoer is discussed as a use case. We evaluate the EduCOR ontology using both gold standard and task-based approaches. The comparison of EduCOR to three gold schemata, and its application in two use-cases, shows its coverage and adaptability to multiple OER repositories, which allows generating user-centric and labour-market oriented recommendations. Resource : https://tibonto.github.io/educor/.",TRUE,noun phrase
R135,Databases/Information Systems,R6100,A fast method based on multiple clustering for name disambiguation in bibliographic citations,S6300,R6101,Evidence,R6012,Author name,"Name ambiguity in the context of bibliographic citation affects the quality of services in digital libraries. Previous methods are not widely applied in practice because of their high computational complexity and their strong dependency on excessive attributes, such as institutional affiliation, research area, address, etc., which are difficult to obtain in practice. To solve this problem, we propose a novel coarse‐to‐fine framework for name disambiguation which sequentially employs 3 common and easily accessible attributes (i.e., coauthor name, article title, and publication venue). Our proposed framework is based on multiple clustering and consists of 3 steps: (a) clustering articles by coauthorship and obtaining rough clusters, that is fragments; (b) clustering fragments obtained in step 1 by title information and getting bigger fragments; (c) and clustering fragments obtained in step 2 by the latent relations among venues. Experimental results on a Digital Bibliography and Library Project (DBLP) data set show that our method outperforms the existing state‐of‐the‐art methods by 2.4% to 22.7% on the average pairwise F1 score and is 10 to 100 times faster in terms of execution time.",TRUE,noun phrase
R135,Databases/Information Systems,R75311,Estimating Selectivity for Joined RDF Triple Patterns,S504468,R75313,Has implementation,L364364,Bayesian Network,"A fundamental problem related to RDF query processing is selectivity estimation, which is crucial to query optimization for determining a join order of RDF triple patterns. In this paper we focus research on selectivity estimation for SPARQL graph patterns. The previous work takes the join uniformity assumption when estimating the joined triple patterns. This assumption would lead to highly inaccurate estimations in the cases where properties in SPARQL graph patterns are correlated. We take into account the dependencies among properties in SPARQL graph patterns and propose a more accurate estimation model. Since star and chain query patterns are common in SPARQL graph patterns, we first focus on these two basic patterns and propose to use Bayesian network and chain histogram respectively for estimating the selectivity of them. Then, for estimating the selectivity of an arbitrary SPARQL graph pattern, we design algorithms for maximally using the precomputed statistics of the star paths and chain paths. The experiments show that our method outperforms existing approaches in accuracy.",TRUE,noun phrase
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6112,R6051,Evidence,R6046,Citation relationship,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun phrase
R135,Databases/Information Systems,R73135,The data-literature interlinking service: Towards a common infrastructure for sharing data-article links,S338518,R73138,Has implementation,R73139,Data-Literature Interlinking (DLI) Service,"Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for sharing their data. However, several challenges need to be solved to fully realize its potential, one of them being the development of a global standard for links between research data and literature. Current linking solutions are mostly based on bilateral, ad hoc agreements between publishers and data centers. These operate in silos so that content cannot be readily combined to deliver a network graph connecting research data and literature in a comprehensive and reliable way. The Research Data Alliance (RDA) Publishing Data Services Working Group (PDS-WG) aims to address this issue of fragmentation by bringing together different stakeholders to agree on a common infrastructure for sharing links between datasets and literature. The paper aims to discuss these issues.,This paper presents the synergic effort of the RDA PDS-WG and the OpenAIRE infrastructure toward enabling a common infrastructure for exchanging data-literature links by realizing and operating the Data-Literature Interlinking (DLI) Service. The DLI Service populates and provides access to a graph of data set-literature links (at the time of writing close to five million, and growing) collected from a variety of major data centers, publishers, and research organizations.,To achieve its objectives, the Service proposes an interoperable exchange data model and format, based on which it collects and publishes links, thereby offering the opportunity to validate such common approach on real-case scenarios, with real providers and consumers. Feedback of these actors will drive continuous refinement of the both data model and exchange format, supporting the further development of the Service to become an essential part of a universal, open, cross-platform, cross-discipline solution for collecting, and sharing data set-literature links.,This realization of the DLI Service is the first technical, cross-community, and collaborative effort in the direction of establishing a common infrastructure for facilitating the exchange of data set-literature links. As a result of its operation and underlying community effort, a new activity, name Scholix, has been initiated involving the technological level stakeholders such as DataCite and CrossRef.",TRUE,noun phrase
R135,Databases/Information Systems,R107637,Scalable Methods for Measuring the Connectivity and Quality of Large Numbers of Linked Datasets,S489876,R107639,provided services,L355308,Dataset Discovery,"Although the ultimate objective of Linked Data is linking and integration, it is not currently evident how connected the current Linked Open Data (LOD) cloud is. In this article, we focus on methods, supported by special indexes and algorithms, for performing measurements related to the connectivity of more than two datasets that are useful in various tasks including (a) Dataset Discovery and Selection; (b) Object Coreference, i.e., for obtaining complete information about a set of entities, including provenance information; (c) Data Quality Assessment and Improvement, i.e., for assessing the connectivity between any set of datasets and monitoring their evolution over time, as well as for estimating data veracity; (d) Dataset Visualizations; and various other tasks. Since it would be prohibitively expensive to perform all these measurements in a naïve way, in this article, we introduce indexes (and their construction algorithms) that can speed up such tasks. In brief, we introduce (i) a namespace-based prefix index, (ii) a sameAs catalog for computing the symmetric and transitive closure of the owl:sameAs relationships encountered in the datasets, (iii) a semantics-aware element index (that exploits the aforementioned indexes), and, finally, (iv) two lattice-based incremental algorithms for speeding up the computation of the intersection of URIs of any set of datasets. For enhancing scalability, we propose parallel index construction algorithms and parallel lattice-based incremental algorithms, we evaluate the achieved speedup using either a single machine or a cluster of machines, and we provide insights regarding the factors that affect efficiency. Finally, we report measurements about the connectivity of the (billion triples-sized) LOD cloud that have never been carried out so far.",TRUE,noun phrase
R135,Databases/Information Systems,R77008,Random Walk TripleRush: Asynchronous Graph Querying and Sampling,S536003,R77010,Compares,R135525,Execution time,"Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.",TRUE,noun phrase
R135,Databases/Information Systems,R107613,Static analysis and optimization of semantic web queries,S541094,R107615,Has approach,R137015,graph pattern,"Static analysis is a fundamental task in query optimization. In this paper we study static analysis and optimization techniques for SPARQL, which is the standard language for querying Semantic Web data. Of particular interest for us is the optionality feature in SPARQL. It is crucial in Semantic Web data management, where data sources are inherently incomplete and the user is usually interested in partial answers to queries. This feature is one of the most complicated constructors in SPARQL and also the one that makes this language depart from classical query languages such as relational conjunctive queries. We focus on the class of well-designed SPARQL queries, which has been proposed in the literature as a fragment of the language with good properties regarding query evaluation. We first propose a tree representation for SPARQL queries, called pattern trees, which captures the class of well-designed SPARQL graph patterns and which can be considered as a query execution plan. Among other results, we propose several transformation rules for pattern trees, a simple normal form, and study equivalence and containment. We also study the enumeration and counting problems for this class of queries.",TRUE,noun phrase
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S352126,R77125,Has implementation,L250662,Heuristic SPARQL Planner (HSP),"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,noun phrase
R135,Databases/Information Systems,R70791,Enriching Knowledge Bases with Interesting Negative Statements,S336790,R70793,Material,R70796,highly related entities,"Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.",TRUE,noun phrase
R135,Databases/Information Systems,R111213,Scalable join processing on very large RDF graphs,S536957,R111215,Has implementation,L378478,Join-order optimization,"With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.",TRUE,noun phrase
R135,Databases/Information Systems,R70791,Enriching Knowledge Bases with Interesting Negative Statements,S336789,R70793,Material,R70795,Knowledge bases (KBs),"Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.",TRUE,noun phrase
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S535884,R135479,keywords,R135499,learning objects," Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ",TRUE,noun phrase
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S535885,R135479,keywords,R135500,linked data," Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ",TRUE,noun phrase
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6102,R6051,Method,R6044,Logistic regression,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun phrase
R135,Databases/Information Systems,R6172,ADANA: Active Name Disambiguation,S6646,R6173,Graph,R6163,Pairwise Factor Graph,"Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of the people information available on the Web. In this paper, we try to study this problem from a new perspective and propose an ADANA method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.",TRUE,noun phrase
R135,Databases/Information Systems,R6172,ADANA: Active Name Disambiguation,S6647,R6173,Method,R6163,Pairwise Factor Graph,"Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of the people information available on the Web. In this paper, we try to study this problem from a new perspective and propose an ADANA method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.",TRUE,noun phrase
R135,Databases/Information Systems,R107613,Static analysis and optimization of semantic web queries,S541095,R107615,Has implementation,R137012,Pattern trees,"Static analysis is a fundamental task in query optimization. In this paper we study static analysis and optimization techniques for SPARQL, which is the standard language for querying Semantic Web data. Of particular interest for us is the optionality feature in SPARQL. It is crucial in Semantic Web data management, where data sources are inherently incomplete and the user is usually interested in partial answers to queries. This feature is one of the most complicated constructors in SPARQL and also the one that makes this language depart from classical query languages such as relational conjunctive queries. We focus on the class of well-designed SPARQL queries, which has been proposed in the literature as a fragment of the language with good properties regarding query evaluation. We first propose a tree representation for SPARQL queries, called pattern trees, which captures the class of well-designed SPARQL graph patterns and which can be considered as a query execution plan. Among other results, we propose several transformation rules for pattern trees, a simple normal form, and study equivalence and containment. We also study the enumeration and counting problems for this class of queries.",TRUE,noun phrase
R135,Databases/Information Systems,R70791,Enriching Knowledge Bases with Interesting Negative Statements,S336792,R70793,Method,R70798,pattern-based query log extraction,"Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.",TRUE,noun phrase
R135,Databases/Information Systems,R70791,Enriching Knowledge Bases with Interesting Negative Statements,S336791,R70793,Material,R70797,popular Wikidata entities,"Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.",TRUE,noun phrase
R135,Databases/Information Systems,R77036,Scalable indexing of RDF graphs for efficient join processing,S514227,R77038,keywords,L369084,Query processing costs,"Current approaches to RDF graph indexing suffer from weak data locality, i.e., information regarding a piece of data appears in multiple locations, spanning multiple data structures. Weak data locality negatively impacts storage and query processing costs. Towards stronger data locality, we propose a Three-way Triple Tree (TripleT) secondary memory indexing technique to facilitate flexible and efficient join evaluation on RDF data. The novelty of TripleT is that the index is built over the atoms occurring in the data set, rather than at a coarser granularity, such as whole triples occurring in the data set; and, the atoms are indexed regardless of the roles (i.e., subjects, predicates, or objects) they play in the triples of the data set. We show through extensive empirical evaluation that TripleT exhibits multiple orders of magnitude improvement over the state-of-the-art, in terms of both storage and query processing costs.",TRUE,noun phrase
R135,Databases/Information Systems,R70791,Enriching Knowledge Bases with Interesting Negative Statements,S336788,R70793,Process,R70794,question answering,"Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, but abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards automatically compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In pattern-based query log extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.4M statements for 130K popular Wikidata entities.",TRUE,noun phrase
R135,Databases/Information Systems,R75311,Estimating Selectivity for Joined RDF Triple Patterns,S354147,R75313,description,L251584,Selectivity Estimation,"A fundamental problem related to RDF query processing is selectivity estimation, which is crucial to query optimization for determining a join order of RDF triple patterns. In this paper we focus research on selectivity estimation for SPARQL graph patterns. The previous work takes the join uniformity assumption when estimating the joined triple patterns. This assumption would lead to highly inaccurate estimations in the cases where properties in SPARQL graph patterns are correlated. We take into account the dependencies among properties in SPARQL graph patterns and propose a more accurate estimation model. Since star and chain query patterns are common in SPARQL graph patterns, we first focus on these two basic patterns and propose to use Bayesian network and chain histogram respectively for estimating the selectivity of them. Then, for estimating the selectivity of an arbitrary SPARQL graph pattern, we design algorithms for maximally using the precomputed statistics of the star paths and chain paths. The experiments show that our method outperforms existing approaches in accuracy.",TRUE,noun phrase
R135,Databases/Information Systems,R111213,Scalable join processing on very large RDF graphs,S536917,R111215,Has approach,R135718,selectivity estimation,"With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.",TRUE,noun phrase
R135,Databases/Information Systems,R6119,A semi-supervised approach for author disambiguation in KDD CUP 2013,S6387,R6120,Method,R3096,Support Vector Machine,"Name disambiguation, which aims to identify multiple names which correspond to one person and same names which refer to different persons, is one of the most important basic problems in many areas such as natural language processing, information retrieval and digital libraries. Microsoft academic search data in KDD Cup 2013 Track 2 task brings one such challenge to the researchers in the knowledge discovery and data mining community. Besides the real-world and large-scale characteristic, the Track 2 task raises several challenges: (1) Consideration of both synonym and polysemy problems; (2) Existence of huge amount of noisy data with missing attributes; (3) Absence of labeled data that makes this challenge a cold start problem. In this paper, we describe our solution to Track 2 of KDD Cup 2013. The challenge of this track is author disambiguation, which aims at identifying whether authors are the same person by using academic publication data. We propose a multi-phase semi-supervised approach to deal with the challenge. First, we preprocess the dataset and generate features for models, then construct a coauthor-based network and employ community detection to accomplish first-phase disambiguation task, which handles the cold-start problem. Second, using results in first phase, we use support vector machine and various other models to utilize noisy data with missing attributes in the dataset. Further, we propose a self-taught procedure to solve ambiguity in coauthor information, boosting performance of results from other models. Finally, by blending results from different models, we finally achieves 6th place with 0.98717 mean F-score on public leaderboard and 7th place with 0.98651 mean F-score on private leaderboard.",TRUE,noun phrase
R135,Databases/Information Systems,R6050,A method for eliminating articles by homonymous authors from the large number of articles retrieved by author search,S6113,R6051,Evidence,R6015,Title words,"This paper proposes a methodology which discriminates the articles by the target authors (“true” articles) from those by other homonymous authors (“false” articles). Author name searches for 2,595 “source” authors in six subject fields retrieved about 629,000 articles. In order to extract true articles from the large amount of the retrieved articles, including many false ones, two filtering stages were applied. At the first stage any retrieved article was eliminated as false if either its affiliation addresses had little similarity to those of its source article or there was no citation relationship between the journal of the retrieved article and that of its source article. At the second stage, a sample of retrieved articles was subjected to manual judgment, and utilizing the judgment results, discrimination functions based on logistic regression were defined. These discrimination functions demonstrated both the recall ratio and the precision of about 95% and the accuracy (correct answer ratio) of 90–95%. Existence of common coauthor(s), address similarity, title words similarity, and interjournal citation relationships between the retrieved and source articles were found to be the effective discrimination predictors. Whether or not the source author was from a specific country was also one of the important predictors. Furthermore, it was shown that a retrieved article is almost certainly true if it was cited by, or cocited with, its source article. The method proposed in this study would be effective when dealing with a large number of articles whose subject fields and affiliation addresses vary widely. © 2011 Wiley Periodicals, Inc.",TRUE,noun phrase
R135,Databases/Information Systems,R77008,Random Walk TripleRush: Asynchronous Graph Querying and Sampling,S507850,R77010,Has implementation,L366121,Triple Store,"Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.",TRUE,noun phrase
R135,Databases/Information Systems,R6156,On Graph-Based Name Disambiguation,S6611,R6157,Evidence,R6155,User feedback,"Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation . In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall .",TRUE,noun phrase
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S535910,R77125,Has output,R135515,variable graph,"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,noun phrase
R135,Databases/Information Systems,R111213,Scalable join processing on very large RDF graphs,S536964,R111215,Has approach,L378483,Very large RDF Graphs,"With the proliferation of the RDF data format, engines for RDF query processing are faced with very large graphs that contain hundreds of millions of RDF triples. This paper addresses the resulting scalability problems. Recent prior work along these lines has focused on indexing and other physical-design issues. The current paper focuses on join processing, as the fine-grained and schema-relaxed use of RDF often entails star- and chain-shaped join queries with many input streams from index scans. We present two contributions for scalable join processing. First, we develop very light-weight methods for sideways information passing between separate joins at query run-time, to provide highly effective filters on the input streams of joins. Second, we improve previously proposed algorithms for join-order optimization by more accurate selectivity estimations for very large RDF graphs. Experimental studies with several RDF datasets, including the UniProt collection, demonstrate the performance gains of our approach, outperforming the previously fastest systems by more than an order of magnitude.",TRUE,noun phrase
R135,Databases/Information Systems,R77036,Scalable indexing of RDF graphs for efficient join processing,S514215,R77038,keywords,R114116,Weak data Locality,"Current approaches to RDF graph indexing suffer from weak data locality, i.e., information regarding a piece of data appears in multiple locations, spanning multiple data structures. Weak data locality negatively impacts storage and query processing costs. Towards stronger data locality, we propose a Three-way Triple Tree (TripleT) secondary memory indexing technique to facilitate flexible and efficient join evaluation on RDF data. The novelty of TripleT is that the index is built over the atoms occurring in the data set, rather than at a coarser granularity, such as whole triples occurring in the data set; and, the atoms are indexed regardless of the roles (i.e., subjects, predicates, or objects) they play in the triples of the data set. We show through extensive empirical evaluation that TripleT exhibits multiple orders of magnitude improvement over the state-of-the-art, in terms of both storage and query processing costs.",TRUE,noun phrase
R135,Databases/Information Systems,R6172,ADANA: Active Name Disambiguation,S6653,R6173,dataset,R6165,Web page,"Name ambiguity has long been viewed as a challenging problem in many applications, such as scientific literature management, people search, and social network analysis. When we search a person name in these systems, many documents (e.g., papers, web pages) containing that person's name may be returned. It is hard to determine which documents are about the person we care about. Although much research has been conducted, the problem remains largely unsolved, especially with the rapid growth of the people information available on the Web. In this paper, we try to study this problem from a new perspective and propose an ADANA method for disambiguating person names via active user interactions. In ADANA, we first introduce a pairwise factor graph (PFG) model for person name disambiguation. The model is flexible and can be easily extended by incorporating various features. Based on the PFG model, we propose an active name disambiguation algorithm, aiming to improve the disambiguation performance by maximizing the utility of the user's correction. Experimental results on three different genres of data sets show that with only a few user corrections, the error rate of name disambiguation can be reduced to 3.1%. A real system has been developed based on the proposed method and is available online.",TRUE,noun phrase
R135,Databases/Information Systems,R77008,Random Walk TripleRush: Asynchronous Graph Querying and Sampling,S544986,R137618,Algorithm,R135534,Random walk,"Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.",TRUE,noun phrase
R135,Databases/Information Systems,R77008,Random Walk TripleRush: Asynchronous Graph Querying and Sampling,S351763,R77010,Has implementation,R77013,Random Walks,"Most Semantic Web applications rely on querying graphs, typically by using SPARQL with a triple store. Increasingly, applications also analyze properties of the graph structure to compute statistical inferences. The current Semantic Web infrastructure, however, does not efficiently support such operations. This forces developers to extract the relevant data for external statistical post-processing. In this paper we propose to rethink query execution in a triple store as a highly parallelized asynchronous graph exploration on an active index data structure. This approach also allows to integrate SPARQL-querying with the sampling of graph properties. To evaluate this architecture we implemented Random Walk TripleRush, which is built on a distributed graph processing system. Our evaluations show that this architecture enables both competitive graph querying, as well as the ability to execute various types of random walks with restarts that sample interesting graph properties. Thanks to the asynchronous architecture, first results are sometimes returned in a fraction of the full execution time. We also evaluate the scalability and show that the architecture supports fast query-times on a dataset with more than a billion triples.",TRUE,noun phrase
R234,Digital Communications and Networking,R175090,CUBIC: a new TCP-friendly high-speed TCP variant,S693503,R175092,Method,R175101,Congestion Window,"CUBIC is a congestion control protocol for TCP (transmission control protocol) and the current default TCP algorithm in Linux. The protocol modifies the linear window growth function of existing TCP standards to be a cubic function in order to improve the scalability of TCP over fast and long distance networks. It also achieves more equitable bandwidth allocations among flows with different RTTs (round trip times) by making the window growth to be independent of RTT -- thus those flows grow their congestion window at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and the slowly when it is close to the saturation point. This feature allows CUBIC to be very scalable when the bandwidth and delay product of the network is large, and at the same time, be highly stable and also fair to standard TCP flows. The implementation of CUBIC in Linux has gone through several upgrades. This paper documents its design, implementation, performance and evolution as the default TCP algorithm of Linux.",TRUE,noun phrase
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18842,R12295,Material,R12311,datasets in other languages,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,noun phrase
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18845,R12295,Material,R12314,English datasets (PBOH),"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,noun phrase
R234,Digital Communications and Networking,R108545,Design and Evaluate Immersive Learning Experience for Massive Open Online Courses (MOOCs),S494546,R108550,Type of MOOC,L358339,Immersive learning,"Massive open online courses (MOOCs), a unique form of online education enabled by web-based learning technologies, allow learners from anywhere in the world with any level of educational background to enjoy online education experience provided by many top universities all around the world. Traditionally, MOOC learning contents are always delivered as text-based or video-based materials. Although introducing immersive learning experience for MOOCs may sound exciting and potentially significative, there are a number of challenges given this unique setting. In this paper, we present the design and evaluation methodologies for delivering immersive learning experience to MOOC learners via multiple media. Specifically, we have applied the techniques in the production of a MOOC entitled Virtual Hong Kong: New World, Old Traditions, led by AIMtech Centre, City University of Hong Kong, which is the first MOOC (as our knowledge) that delivers immersive learning content for distant learners to appreciate and experience how the traditional culture and folklore of Hong Kong impact upon the lives of its inhabitants in the 21st Century. The methodologies applied here can be further generalized as the fundamental framework of delivering immersive learning for future MOOCs.",TRUE,noun phrase
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18846,R12295,Data,R12315,micro F-measure,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,noun phrase
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18843,R12295,Material,R12312,non-English languages,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,noun phrase
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18844,R12295,Material,R12313,structured knowledge bases,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,noun phrase
R234,Digital Communications and Networking,R12293,"MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity Linking Approach",S18839,R12295,Material,R12308,trained mono-lingual models,"Entity linking has recently been the subject of a significant body of research. Currently, the best performing approaches rely on trained mono-lingual models. Porting these approaches to other languages is consequently a difficult endeavor as it requires corresponding training data and retraining of the models. We address this drawback by presenting a novel multilingual, knowledge-base agnostic and deterministic approach to entity linking, dubbed MAG. MAG is based on a combination of context-based retrieval on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data sets and in 7 languages. Our results show that the best approach trained on English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse on datasets in other languages. MAG on the other hand achieves state-of-the-art performance on English datasets and reaches a micro F-measure that is up to 0.6 higher than that of PBOH on non-English languages.",TRUE,noun phrase
R234,Digital Communications and Networking,R11010,Deeper Text Understanding for IR with Contextual Neural Language Modeling,S18253,R12097,Has value,R12110,Word embedding,"Neural networks provide new possibilities to automatically learn complex language patterns and query-document relations. Neural IR models have achieved promising results in learning query-document relevance patterns, but few explorations have been done on understanding the text content of a query or a document. This paper studies leveraging a recently-proposed contextual neural language model, BERT, to provide deeper text understanding for IR. Experimental results demonstrate that the contextual text representations from BERT are more effective than traditional word embeddings. Compared to bag-of-words retrieval models, the contextual language model can better leverage language structures, bringing large improvements on queries written in natural languages. Combining the text understanding ability with search knowledge leads to an enhanced pre-trained BERT model that can benefit related search tasks where training data are limited.",TRUE,noun phrase
R142,Earth Sciences,R160571,Performance of Spectral Angle Mapper and Parallelepiped Classifiers in Agriculture Hyperspectral Image,S640353,R160573,Study Area,R160569,"Al-Kharj, Saudi Arabia","Hyperspectral Imaging (HSI) is used to provide a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agriculture areas that specialize in wheat growing and get a classified image. In particular, Parallelepiped and Spectral Angel Mapper (SAM) algorithms are used. They are implemented by a software tool used to analyse and process geospatial images that is an Environment of Visualizing Images (ENVI). They are applied on Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms on the image of the study area for SAM classification was 66.67%, and 33.33% for Parallelepiped classification. Therefore, SAM algorithm has provided a better a study area image classification.",TRUE,noun phrase
R142,Earth Sciences,R147383,Automated Seasonal Separation of Mine and Non Mine Water Bodies From Landsat 8 OLI/TIRS Using Clay Mineral and Iron Oxide Ratio,S591008,R147385,Methods,R147377,Clay mineral ratio (CLM),"Opencast mining has huge effects on water pollution for several reasons. Fresh water is heavily used to process ore. Mine effluent and seepage from various mine related areas especially tailing reservoir, increase water pollution immensely. Monitoring and classification of mine water bodies, which have such environmental impacts, have several research challenges. In the past, land cover classification of a mining region detects mine and non mine water bodies simultaneously. Water bodies inside surface mines have different characteristics from other water bodies. In this paper, a novel method has been proposed to differentiate mine and non mine water bodies over the seasons, which does not require to set a threshold value manually. Here, water body regions are detected over the entire scene by any classical water body detection algorithm. Further, each water body is treated independently, and reflectance properties of a bounding box over each water body region are analyzed. In the past, there were efforts to use clay mineral ratio (CLM) to separate mine and non mine water bodies. In this paper, it has been observed that iron oxide ratio (IO) can also separate mine and non mine water bodies. The accuracy is observed to increase, if the difference of CLM and IO is used for segregation. The proposed algorithm separates these regions by taking into account seasonal variations. Means of differences of CLM and IO of each bounding box have been clustered using K-means clustering algorithm. The automation provides precision and recall for mine, and non mine water bodies as $[77.83\%,76.55\%]$ and $[75.18\%,75.84\%]$, respectively, using ground truths from high-definition Google Earth images.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620294,R155125,yields,R147490,Confusion Matrix,"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. 
Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R144015,A Raman spectroscopic study of humite minerals,S576405,R144017,Minerals in consideration,R144002,Humite Group,"Raman spectroscopy has been used to study the structure of the humite mineral group ((A2SiO4)n–A(OH, F)2 where n represents the number of olivine and brucite layers in the structure and is 1, 2, 3 or 4 and A2+ is Mg, Mn, Fe or some mix of these cations). The humite group of minerals forms a morphotropic series with the minerals olivine and brucite. The members of the humite group contain layers of the olivine structure that alternate with layers of the brucite-like sheets. The minerals are characterized by a complex set of bands in the 800–1000 cm−1 region attributed to the stretching vibrations of the olivine (SiO4)4− units. The number of bands in this region is influenced by the number of olivine layers. Characteristic bending modes of the (SiO4)4− units are observed in the 500–650 cm−1 region. The brucite sheets are characterized by the OH stretching vibrations in the 3475–3625 cm−1 wavenumber region. The position of the OH stretching vibrations is determined by the strength of the hydrogen bond formed between the brucite-like OH units and the olivine silica layer. The number of olivine sheets and not the chemical composition determines the strength of the hydrogen bonds. Copyright © 2006 John Wiley & Sons, Ltd.",TRUE,noun phrase
R142,Earth Sciences,R147383,Automated Seasonal Separation of Mine and Non Mine Water Bodies From Landsat 8 OLI/TIRS Using Clay Mineral and Iron Oxide Ratio,S591009,R147385,Methods,R147378,iron oxide ratio (IO),"Opencast mining has huge effects on water pollution for several reasons. Fresh water is heavily used to process ore. Mine effluent and seepage from various mine related areas especially tailing reservoir, increase water pollution immensely. Monitoring and classification of mine water bodies, which have such environmental impacts, have several research challenges. In the past, land cover classification of a mining region detects mine and non mine water bodies simultaneously. Water bodies inside surface mines have different characteristics from other water bodies. In this paper, a novel method has been proposed to differentiate mine and non mine water bodies over the seasons, which does not require to set a threshold value manually. Here, water body regions are detected over the entire scene by any classical water body detection algorithm. Further, each water body is treated independently, and reflectance properties of a bounding box over each water body region are analyzed. In the past, there were efforts to use clay mineral ratio (CLM) to separate mine and non mine water bodies. In this paper, it has been observed that iron oxide ratio (IO) can also separate mine and non mine water bodies. The accuracy is observed to increase, if the difference of CLM and IO is used for segregation. The proposed algorithm separates these regions by taking into account seasonal variations. Means of differences of CLM and IO of each bounding box have been clustered using K-means clustering algorithm. The automation provides precision and recall for mine, and non mine water bodies as $[77.83\%,76.55\%]$ and $[75.18\%,75.84\%]$, respectively, using ground truths from high-definition Google Earth images.",TRUE,noun phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574473,R143488,Outcomes,R143474,Irrigation scheduling,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun phrase
R142,Earth Sciences,R147491,"Lithological mapping using Landsat 8 OLI and Terra ASTER multispectral data in the Bas Drâa inlier, Moroccan Anti Atlas",S591678,R147493,Methods,R143762,Kappa coefficient,"Abstract. Lithological mapping is a fundamental step in various mineral prospecting studies because it forms the basis of the interpretation and validation of retrieved results. Therefore, this study exploited the multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat 8 Operational Land Imager (OLI) data in order to map lithological units in the Bas Drâa inlier, at the Moroccan Anti Atlas. This task was completed by using principal component analysis (PCA), band ratios (BR), and support vector machine (SVM) classification. Overall accuracy and the kappa coefficient of SVM based on ground truth in addition to the results of PCA and BR show an excellent correlation with the existing geological map of the study area. Consequently, the methodology proposed demonstrates a high potential of ASTER and Landsat 8 OLI data in lithological units discrimination.",TRUE,noun phrase
R142,Earth Sciences,R143846,A review on the geological applications of hyperspectral remote sensing technology,S575726,R143848,Outcome,R143845,Mining environment monitoring,"Based on the progress of hyperspectral data acquisition, information processing and geological requirements, the current status and trends of hyperspectral remote sensing technology to geological applications are reviewed. The advantages and prospects of hyperspectral remote sensing applications to mineral recognition and mapping, lithologic mapping, mineral resource prospecting, mining environment monitoring, and leakage monitoring of oil and gas, are summarized and analyzed. Finally the open problems and future trends for this technology are pointed out.",TRUE,noun phrase
R142,Earth Sciences,R160558,Classification of Iowa wetlands using an airborne hyperspectral image: a comparison of the spectral angle mapper classifier and an object-oriented approach,S640318,R160560,Techniques/Methods,R160555,nonparametric object-oriented (OO) classification,"Wetlands mapping using multispectral imagery from Landsat multispectral scanner (MSS) and thematic mapper (TM) and Système pour l'observation de la Terre (SPOT) does not in general provide high classification accuracies because of poor spectral and spatial resolutions. This study tests the feasibility of using high-resolution hyperspectral imagery to map wetlands in Iowa with two nontraditional classification techniques: the spectral angle mapper (SAM) method and a new nonparametric object-oriented (OO) classification. The software programs used were ENVI and eCognition. Accuracies of these classified images were assessed by using the information collected through a field survey with a global positioning system and high-resolution color infrared images. Wetlands were identified more accurately with the OO method (overall accuracy 92.3%) than with SAM (63.53%). This paper also discusses the limitations of these classification techniques for wetlands, as well as discussing future directions for study.",TRUE,noun phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574476,R143488,Outcomes,R143477,Sediment load,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574475,R143488,Outcomes,R143476,Snow cover,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620299,R155125,yields,R147287,Spectral Information Divergence (SID),"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. 
Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R143763,Development and utilization of urban spectral library for remote sensing of urban environment,S575748,R143765,Outcome,R143368,Spectral Library,Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using ASD FieldSpec 3 Spectroradiometer with wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectral data was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.,TRUE,noun phrase
R142,Earth Sciences,R147485,Detection of Pb–Zn mineralization zones in west Kunlun using Landsat 8 and ASTER remote sensing data,S591541,R147487,Methods,R147482,Spectral Matched Filtering,"Abstract. The integration of Landsat 8 OLI and ASTER data is an efficient tool for interpreting lead–zinc mineralization in the Huoshaoyun Pb–Zn mining region located in the west Kunlun mountains at high altitude and very rugged terrain, where traditional geological work becomes limited and time-consuming. This task was accomplished by using band ratios (BRs), principal component analysis, and spectral matched filtering methods. It is concluded that some BR color composites and principal components of each imagery contain useful information for lithological mapping. SMF technique is useful for detecting lead–zinc mineralization zones, and the results could be verified by handheld portable X-ray fluorescence analysis. Therefore, the proposed methodology shows strong potential of Landsat 8 OLI and ASTER data in lithological mapping and lead–zinc mineralization zone extraction in carbonate stratum.",TRUE,noun phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574484,R143488,Outcomes,R143485,Surface water inventories,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun phrase
R142,Earth Sciences,R160584,Spectral angle mapper and object-based classification combined with hyperspectral remote sensing imagery for obtaining land use/cover mapping in a Mediterranean region,S640533,R160586,Techniques/Methods,R147438,Accuracy Assessment,"In this study, we test the potential of two different classification algorithms, namely the spectral angle mapper (SAM) and object-based classifier for mapping the land use/cover characteristics using a Hyperion imagery. We chose a study region that represents a typical Mediterranean setting in terms of landscape structure, composition and heterogeneous land cover classes. Accuracy assessment of the land cover classes was performed based on the error matrix statistics. Validation points were derived from visual interpretation of multispectral high resolution QuickBird-2 satellite imagery. Results from both the classifiers yielded more than 70% classification accuracy. However, the object-based classification clearly outperformed the SAM by 7.91% overall accuracy (OA) and a relatively high kappa coefficient. Similar results were observed in the classification of the individual classes. Our results highlight the potential of hyperspectral remote sensing data as well as object-based classification approach for mapping heterogeneous land use/cover in a typical Mediterranean setting.",TRUE,noun phrase
R142,Earth Sciences,R155127,"AVIRIS-NG Data for Geological Applications in Southeastern Parts of Aravalli Fold Belt, Rajasthan",S620316,R155129,yields,R147438,Accuracy Assessment,"Advanced techniques using high resolution hyperspectral remote sensing data has recently evolved as an emerging tool with potential to aid mineral exploration. In this study, pertinently, five mosaicked scenes of Airborne Visible InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) hyperspectral data of southeastern parts of the Aravalli Fold belt in Jahazpur area, Rajasthan, were processed. The exposed Proterozoic rocks in this area is of immense economic and scientific interest because of richness of poly-metallic mineral resources and their unique metallogenesis. Analysis of high resolution multispectral satellite image reveals that there are many prominent lineaments which acted as potential conduits of hydrothermal fluid emanation, some of which resulted in altering the country rock. This study takes cues from studying those altered minerals to enrich our knowledge base on mineralized zones. In this imaging spectroscopic study we have identified different hydrothermally altered minerals consisting of hydroxyl, carbonate and iron-bearing species. Spectral signatures (image based) of minerals such as Kaosmec, Talc, Kaolinite, Dolomite, and Montmorillonite were derived in SWIR (Short wave infrared) region while Iron bearing minerals such as Goethite and Limonite were identified in the VNIR (Visible and Near Infrared) region of electromagnetic spectrum. Validation of the target minerals was done by subsequent ground truthing and X-ray diffractogram (XRD) analysis. The altered end members were further mapped by Spectral Angle Mapper (SAM) and Adaptive Coherence Estimator (ACE) techniques to detect target minerals. Accuracy assessment was reported to be 86.82% and 77.75% for SAM and ACE respectively. This study confirms that the AVIRIS-NG hyperspectral data provides better solution for identification of endmember minerals.",TRUE,noun phrase
R142,Earth Sciences,R155127,"AVIRIS-NG Data for Geological Applications in Southeastern Parts of Aravalli Fold Belt, Rajasthan",S620313,R155129,Techniques/Methods,R155147,Adaptive Coherence Estimator (ACE),"Advanced techniques using high resolution hyperspectral remote sensing data has recently evolved as an emerging tool with potential to aid mineral exploration. In this study, pertinently, five mosaicked scenes of Airborne Visible InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) hyperspectral data of southeastern parts of the Aravalli Fold belt in Jahazpur area, Rajasthan, were processed. The exposed Proterozoic rocks in this area is of immense economic and scientific interest because of richness of poly-metallic mineral resources and their unique metallogenesis. Analysis of high resolution multispectral satellite image reveals that there are many prominent lineaments which acted as potential conduits of hydrothermal fluid emanation, some of which resulted in altering the country rock. This study takes cues from studying those altered minerals to enrich our knowledge base on mineralized zones. In this imaging spectroscopic study we have identified different hydrothermally altered minerals consisting of hydroxyl, carbonate and iron-bearing species. Spectral signatures (image based) of minerals such as Kaosmec, Talc, Kaolinite, Dolomite, and Montmorillonite were derived in SWIR (Short wave infrared) region while Iron bearing minerals such as Goethite and Limonite were identified in the VNIR (Visible and Near Infrared) region of electromagnetic spectrum. Validation of the target minerals was done by subsequent ground truthing and X-ray diffractogram (XRD) analysis. The altered end members were further mapped by Spectral Angle Mapper (SAM) and Adaptive Coherence Estimator (ACE) techniques to detect target minerals. Accuracy assessment was reported to be 86.82% and 77.75% for SAM and ACE respectively. This study confirms that the AVIRIS-NG hyperspectral data provides better solution for identification of endmember minerals.",TRUE,noun phrase
R142,Earth Sciences,R140694,Comparison of airborne hyperspectral data and eo-1 hyperion for mineral mapping,S561938,R140696,Data used,L394426,Airborne Visible/Infrared Imaging Spectrometer (AVIRIS),"Airborne hyperspectral data have been available to researchers since the early 1980s and their use for geologic applications is well documented. The launch of the National Aeronautics and Space Administration Earth Observing 1 Hyperion sensor in November 2000 marked the establishment of a test bed for spaceborne hyperspectral capabilities. Hyperion covers the 0.4-2.5-μm range with 242 spectral bands at approximately 10-nm spectral resolution and 30-m spatial resolution. Analytical Imaging and Geophysics LLC and the Commonwealth Scientific and Industrial Research Organisation have been involved in efforts to evaluate, validate, and demonstrate Hyperions's utility for geologic mapping in a variety of sites in the United States and around the world. Initial results over several sites with established ground truth and years of airborne hyperspectral data show that Hyperion data from the shortwave infrared spectrometer can be used to produce useful geologic (mineralogic) information. Minerals mapped include carbonates, chlorite, epidote, kaolinite, alunite, buddingtonite, muscovite, hydrothermal silica, and zeolite. Hyperion data collected under optimum conditions (summer season, bright targets, well-exposed geology) indicate that Hyperion data meet prelaunch specifications and allow subtle distinctions such as determining the difference between calcite and dolomite and mapping solid solution differences in micas caused by substitution in octahedral molecular sites. Comparison of airborne hyperspectral data [from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS)] to the Hyperion data establishes that Hyperion provides similar basic mineralogic information, with the principal limitation being limited mapping of fine spectral detail under less-than-optimum acquisition conditions (winter season, dark targets) based on lower signal-to-noise ratios. Case histories demonstrate the analysis methodologies and level of information available from the Hyperion data. They also show the viability of Hyperion as a means of extending hyperspectral mineral mapping to areas not accessible to aircraft sensors. The analysis results demonstrate that spaceborne hyperspectral sensors can produce useful mineralogic information, but also indicate that SNR improvements are required for future spaceborne sensors to allow the same level of mapping that is currently possible from airborne sensors such as AVIRIS.",TRUE,noun phrase
R142,Earth Sciences,R140706,Spectral indices for lithologic discrimination and mapping by using the ASTER SWIR bands,S562477,R140708,Analysis,R140780,Alunite Index,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.",TRUE,noun phrase
R142,Earth Sciences,R140548,"ASTER Data Analyses for Lithological Discrimination of Sittampundi Anorthositic Complex, Southern India",S561773,R140550,reference,R140676,ASTER resampled laboratory spectra,"ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).",TRUE,noun phrase
R142,Earth Sciences,R140548,"ASTER Data Analyses for Lithological Discrimination of Sittampundi Anorthositic Complex, Southern India",S561753,R140550,Analysis,R108119,Band Ratio,"ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).",TRUE,noun phrase
R142,Earth Sciences,R140706,Spectral indices for lithologic discrimination and mapping by using the ASTER SWIR bands,S562479,R140708,Analysis,R140782,Calcite Index,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.",TRUE,noun phrase
R142,Earth Sciences,R147477,Automated Seasonal Detection of Coal Surface Mine Regions from Landsat 8 OLI Images,S591522,R147479,Supplementary information,L411773,Coal mine region,"Detection, and monitoring of surface mining region have various research aspects. Coal surface mining has severe social, ecological, environmental adverse effects. In the past, semisupervised and supervised clustering techniques have been used to detect such regions. Coal has lower reflectance values in short wave infrared I (SWIR-I) than short wave infrared II (SWIR-II). The proposed method presents a novel approach to detect coal mine regions without manual intervention using this cue. Clay mineral ratio is defined as a ratio of SWIR-I to SWIR-II. Here, unsupervised K-Means clustering has been used in a hierarchical fashion over a variant of clay mineral ratio to detect opencast coal mine regions in the Jharia coal field (JCF), India. The proposed method has average precision, and recall of 76.43%, and 62.75%, respectively.",TRUE,noun phrase
R142,Earth Sciences,R144217,Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping,S577252,R144219,Output/Application,L404086,Damage assessment,"The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.",TRUE,noun phrase
R142,Earth Sciences,R147402,"Pegmatite spectral behavior considering ASTER and Landsat 8 OLI data in Naipa and Muiane mines (Alto Ligonha, Mozambique)",S591324,R147404,Preprocesing,R147445,Dark Object Subtraction,"The Naipa and Muiane mines are located on the Nampula complex, a stratigraphic tectonic subdivision of the Mozambique Belt, in the Alto Ligonha region. The pegmatites are of the Li-Cs-Ta type, intrude a chlorite phyllite and gneisses with amphibole and biotite. The mines are still active. The main objective of this work was to analyze the pegmatite’s spectral behavior considering ASTER and Landsat 8 OLI data. An ASTER image from 27/05/2005, and an image Landsat OLI image from 02/02/2018 were considered. The data were radiometric calibrated and after atmospheric corrected considered the Dark Object Subtraction algorithm available in the Semi-Automatic Classification Plugin accessible in QGIS software. In the field, samples were collected from lepidolite waste pile in Naipa and Muaine mines. A spectroadiometer was used in order to analyze the spectral behavior of several pegmatite’s samples collected in the field in Alto Ligonha (Naipa and Muiane mines). In addition, QGIS software was also used for the spectral mapping of the hypothetical hydrothermal alterations associated with occurrences of basic metals, beryl gemstones, tourmalines, columbite-tantalites, and lithium minerals. A supervised classification algorithm was employed - Spectral Angle Mapper for the data processing, and the overall accuracy achieved was 80%. The integration of ASTER and Landsat 8 OLI data have proved very useful for pegmatite’s mapping. From the results obtained, we can conclude that: (i) the combination of ASTER and Landsat 8 OLI data allows us to obtain more information about mineral composition than just one sensor, i.e., these two sensors are complementary; (ii) the alteration spots identified in the mines area are composed of clay minerals. In the future, more data and others image processing algorithms can be applied in order to identify the different Lithium minerals, as spodumene, petalite, amblygonite and lepidolite.",TRUE,noun phrase
R142,Earth Sciences,R160566,"The Performance of Maximum Likelihood, Spectral Angle Mapper, Neural Network and Decision Tree Classifiers in Hyperspectral Image Analysis",S640345,R160568,Techniques/Methods,R151108,Decision Tree,"Several classification algorithms for pattern recognition had been tested in the mapping of tropical forest cover using airborne hyperspectral data. Results from the use of Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best followed by ANN, DT and SAM with accuracies of 86%, 84%, 51% and 49% respectively.",TRUE,noun phrase
R142,Earth Sciences,R143827,A short survey of hyperspectral remote sensing applications in agriculture,S575653,R143829,Application,L403263,Estimation of Crop Yield,"Hyperspectral sensors are devices that acquire images over hundreds of spectral bands, thereby enabling the extraction of spectral signatures for objects or materials observed. Hyperspectral remote sensing has been used over a wide range of applications, such as agriculture, forestry, geology, ecological monitoring and disaster monitoring. In this paper, the specific application of hyperspectral remote sensing to agriculture is examined. The technological development of agricultural methods is of critical importance as the world's population is anticipated to continuously rise much beyond the current number of 7 billion. One area upon which hyperspectral sensing can yield considerable impact is that of precision agriculture - the use of observations to optimize the use of resources and management of farming practices. For example, hyperspectral image processing is used in the monitoring of plant diseases, insect pests and invasive plant species; the estimation of crop yield; and the fine classification of crop distributions. This paper also presents a detailed overview of hyperspectral data processing techniques and suggestions for advancing the agricultural applications of hyperspectral technologies in Turkey.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620295,R155125,yields,R155143,Fast Pixel Purity Index (FPPI),"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R140812,Characterization and mapping of hematite ore mineral classes using hyperspectral remote sensing technique: a case study from Bailadila iron ore mining region,S563501,R140813,Analysis,R140640,Geochemical analysis,"Abstract The study demonstrates a methodology for mapping various hematite ore classes based on their reflectance and absorption spectra, using Hyperion satellite imagery. Substantial validation is carried out, using the spectral feature fitting technique, with the field spectra measured over the Bailadila hill range in Chhattisgarh State in India. The results of the study showed a good correlation between the concentration of iron oxide with the depth of the near-infrared absorption feature (R 2 = 0.843) and the width of the near-infrared absorption feature (R 2 = 0.812) through different empirical models, with a root-mean-square error (RMSE) between < 0.317 and < 0.409. The overall accuracy of the study is 88.2% with a Kappa coefficient value of 0.81. Geochemical analysis and X-ray fluorescence (XRF) of field ore samples are performed to ensure different classes of hematite ore minerals. Results showed a high content of Fe > 60 wt% in most of the hematite ore samples, except banded hematite quartzite (BHQ) (< 47 wt%).",TRUE,noun phrase
R142,Earth Sciences,R144024,Raman spectroscopy of the borosilicate mineral ferroaxinite,S577007,R144026,Techniques,R144037,Infrared spectroscopy,"Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the ν4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the ν2 bending modes of the (SiO4)2-.",TRUE,noun phrase
R142,Earth Sciences,R140548,"ASTER Data Analyses for Lithological Discrimination of Sittampundi Anorthositic Complex, Southern India",S561771,R140550,reference,R140675,JHU spectral library spectra,"ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).",TRUE,noun phrase
R142,Earth Sciences,R140706,Spectral indices for lithologic discrimination and mapping by using the ASTER SWIR bands,S562481,R140708,Analysis,R140784,Kaolinite Index,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.",TRUE,noun phrase
R142,Earth Sciences,R140812,Characterization and mapping of hematite ore mineral classes using hyperspectral remote sensing technique: a case study from Bailadila iron ore mining region,S563503,R140813,Analysis,R140793,Kappa coefficient,"Abstract The study demonstrates a methodology for mapping various hematite ore classes based on their reflectance and absorption spectra, using Hyperion satellite imagery. Substantial validation is carried out, using the spectral feature fitting technique, with the field spectra measured over the Bailadila hill range in Chhattisgarh State in India. The results of the study showed a good correlation between the concentration of iron oxide with the depth of the near-infrared absorption feature (R 2 = 0.843) and the width of the near-infrared absorption feature (R 2 = 0.812) through different empirical models, with a root-mean-square error (RMSE) between < 0.317 and < 0.409. The overall accuracy of the study is 88.2% with a Kappa coefficient value of 0.81. Geochemical analysis and X-ray fluorescence (XRF) of field ore samples are performed to ensure different classes of hematite ore minerals. Results showed a high content of Fe > 60 wt% in most of the hematite ore samples, except banded hematite quartzite (BHQ) (< 47 wt%).",TRUE,noun phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574488,R143488,Application,L402509,Land cover,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun phrase
R142,Earth Sciences,R140556,"An image processing approach for converging ASTER-derived spectral maps for mapping Kolhan limestone, Jharkhand, India",S561708,R140557,Analysis,R108175,Minimum Noise Fraction (MNF),"In the present study, we have attempted the delineation of limestone using different spectral mapping algorithms in ASTER data. Each spectral mapping algorithm derives limestone exposure map independently. Although these spectral maps are broadly similar to each other, they are also different at places in terms of spatial disposition of limestone pixels. Therefore, an attempt is made to integrate the results of these spectral maps to derive an integrated map using minimum noise fraction (MNF) method. The first MNF image is the result of two cascaded principal component methods suitable for preserving complementary information derived from each spectral map. While implementing MNF, noise or non-coherent pixels occurring within a homogeneous patch of limestone are removed first using shift difference method, before attempting principal component analysis on input spectral maps for deriving composite spectral map of limestone exposures. The limestone exposure map is further validated based on spectral data and ancillary geological data.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620291,R155125,Techniques/Methods,R108175,Minimum Noise Fraction (MNF),"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. 
Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R140706,Spectral indices for lithologic discrimination and mapping by using the ASTER SWIR bands,S562482,R140708,Analysis,R140785,Montmorillonite Index,"The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a research facility instrument launched on NASA's Terra spacecraft in December 1999. Spectral indices, a kind of orthogonal transformation in the five-dimensional space formed by the five ASTER short-wave-infrared (SWIR) bands, were proposed for discrimination and mapping of surface rock types. These include Alunite Index, Kaolinite Index, Calcite Index, and Montmorillonite Index, and can be calculated by linear combination of reflectance values of the five SWIR bands. The transform coefficients were determined so as to direct transform axes to the average spectral pattern of the typical minerals. The spectral indices were applied to the simulated ASTER dataset of Cuprite, Nevada, USA after converting its digital numbers to surface reflectance. The resultant spectral index images were useful for lithologic mapping and were easy to interpret geologically. An advantage of this method is that we can use the pre-determined transform coefficients, as long as image data are converted to surface reflectance.",TRUE,noun phrase
R142,Earth Sciences,R144217,Humanitarian applications of machine learning with remote-sensing data: review and case study in refugee settlement mapping,S577246,R144219,Output/Application,L404080,Natural disasters,"The coordination of humanitarian relief, e.g. in a natural disaster or a conflict situation, is often complicated by a scarcity of data to inform planning. Remote sensing imagery, from satellites or drones, can give important insights into conditions on the ground, including in areas which are difficult to access. Applications include situation awareness after natural disasters, structural damage assessment in conflict, monitoring human rights violations or population estimation in settlements. We review machine learning approaches for automating these problems, and discuss their potential and limitations. We also provide a case study of experiments using deep learning methods to count the numbers of structures in multiple refugee settlements in Africa and the Middle East. We find that while high levels of accuracy are possible, there is considerable variation in the characteristics of imagery collected from different sensors and regions. In this, as in the other applications discussed in the paper, critical inferences must be made from a relatively small amount of pixel data. We, therefore, consider that using machine learning systems as an augmentation of human analysts is a reasonable strategy to transition from current fully manual operational pipelines to ones which are both more efficient and have the necessary levels of quality control. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.",TRUE,noun phrase
R142,Earth Sciences,R155157,"Petrography, XRD Analysis and Identification of Talc Minerals near Chhabadiya Village of Jahajpur Region, Bhilwara, India through Hyperion Hyperspectral Remote Sensing Data",S620728,R155159,Techniques/Methods,R155178,Petrographic analysis,"The larger synoptic view and contiguous channels arrangement of Hyperion hyperspectral remote sensing data enhance the minor spectral identification of earth’s features such as minerals, atmospheric gasses, vegetation and so on. Hydrothermal alteration minerals mostly associated with vicinity of geological structural features such as lineaments and fractures. In this study Hyperion data is used for identification of hydrothermally altered minerals and alteration facies near Chhabadiya village of Jahajpur area, Bhilwara, Rajasthan. There are some minerals such as talc minerals identified through Hyperion imagery. The identified talc minerals correlated and evaluated through petrographic analysis, XRD analysis and spectroscopic analysis. The validation of identified minerals completed by field survey, field sample spectra and USGS spectral library talc mineral spectra. The conclusion is that Hyperion hyperspectral remote sensing data have capability to identify the minerals, mineral assemblage, alteration minerals and alteration facies.",TRUE,noun phrase
R142,Earth Sciences,R140827,Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data,S563707,R140829,Analysis,R140795,Pixel Purity Index,"This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in hostile mountainous terrain of Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper etc. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) and endmember extraction from reflectance image of surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then assessed with USGS mineral spectral library and lab spectra of rock samples collected from field for spectral inspection. Subsequently, MTTCIMF algorithm was implemented on processed image to obtain mineral distribution map of each detected mineral. A virtual verification method has been adopted to evaluate the classified image, which uses directly image information to evaluate the result and confirm the overall accuracy and kappa coefficient of 68 % and 0.6 respectively. The sub-pixel level mineral information with reasonable accuracy could be a valuable guide to geological and exploration community for expensive ground and/or lab experiments to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using MTTCIMF algorithm with cost and time effective approach.",TRUE,noun phrase
R142,Earth Sciences,R143827,A short survey of hyperspectral remote sensing applications in agriculture,S575652,R143829,Application,L403262,Precision agriculture,"Hyperspectral sensors are devices that acquire images over hundreds of spectral bands, thereby enabling the extraction of spectral signatures for objects or materials observed. Hyperspectral remote sensing has been used over a wide range of applications, such as agriculture, forestry, geology, ecological monitoring and disaster monitoring. In this paper, the specific application of hyperspectral remote sensing to agriculture is examined. The technological development of agricultural methods is of critical importance as the world's population is anticipated to continuously rise much beyond the current number of 7 billion. One area upon which hyperspectral sensing can yield considerable impact is that of precision agriculture - the use of observations to optimize the use of resources and management of farming practices. For example, hyperspectral image processing is used in the monitoring of plant diseases, insect pests and invasive plant species; the estimation of crop yield; and the fine classification of crop distributions. This paper also presents a detailed overview of hyperspectral data processing techniques and suggestions for advancing the agricultural applications of hyperspectral technologies in Turkey.",TRUE,noun phrase
R142,Earth Sciences,R140548,"ASTER Data Analyses for Lithological Discrimination of Sittampundi Anorthositic Complex, Southern India",S561754,R140550,Analysis,R108113,Principal Component Analysis (PCA),"ASTER is an advanced Thermal Emission and Reflection Radiometer, a multispectral sensor, which measures reflected and emitted electromagnetic radiation of earth surface with 14 bands. The present study aims to delineate different rock types in the Sittampundi Anorthositic Complex (SAC), Tamil Nadu using Visible (VIS), near-infrared (NIR) and short wave infrared (SWIR) reflectance data of ASTER 9 band data. We used different band ratioing, band combinations in the VNIR and SWIR region for discriminating lithological boundaries. SAC is also considered as a lunar highland analog rock. Anorthosite is a plagioclase-rich igneous rock with subordinate amounts of pyroxenes, olivine and other minerals. A methodology has been applied to correct the cross talk effect and radiance to reflectance. Principal Component Analysis (PCA) has been realized on the 9 ASTER bands in order to reduce the redundancy information in highly correlated bands. PCA derived FCC results enable the validation and support to demarcate the different lithological boundaries defined on previous geological map. The image derived spectral profiles for anorthosite are compared with the ASTER resampled laboratory spectra, JHU spectral library spectra and Apollo 14 lunar anorthosites spectra. The Spectral Angle Mapping imaging spectroscopy technique has been practiced to classify the ASTER image of the study area and found that, the processing of ASTER remote sensing data set can be used as a powerful tool for mapping the terrestrial Anorthositic regions and similar kind of process could be applied to map the planetary surfaces (E.g. Moon).",TRUE,noun phrase
R142,Earth Sciences,R147491,"Lithological mapping using Landsat 8 OLI and Terra ASTER multispectral data in the Bas Drâa inlier, Moroccan Anti Atlas",S591679,R147493,Methods,R108113,Principal Component Analysis (PCA),"Abstract. Lithological mapping is a fundamental step in various mineral prospecting studies because it forms the basis of the interpretation and validation of retrieved results. Therefore, this study exploited the multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat 8 Operational Land Imager (OLI) data in order to map lithological units in the Bas Drâa inlier, at the Moroccan Anti Atlas. This task was completed by using principal component analysis (PCA), band ratios (BR), and support vector machine (SVM) classification. Overall accuracy and the kappa coefficient of SVM based on ground truth in addition to the results of PCA and BR show an excellent correlation with the existing geological map of the study area. Consequently, the methodology proposed demonstrates a high potential of ASTER and Landsat 8 OLI data in lithological units discrimination.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620297,R155125,yields,R155144,Relative Spectral Discrimination Power (RSDPW),"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. 
Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R140710,"Simple mineral mapping algorithm based on multitype spectral diagnostic absorption features: a case study at Cuprite, Nevada",S562465,R140712,Analysis,R140776,Simple mineral mapping algorithm (SMMA),"Abstract. Hyperspectral remote sensing has been widely used in mineral identification using the particularly useful short-wave infrared (SWIR) wavelengths (1.0 to 2.5 μm). Current mineral mapping methods are easily limited by the sensor’s radiometric sensitivity and atmospheric effects. Therefore, a simple mineral mapping algorithm (SMMA) based on the combined application with multitype diagnostic SWIR absorption features for hyperspectral data is proposed. A total of nine absorption features are calculated, respectively, from the airborne visible/infrared imaging spectrometer data, the Hyperion hyperspectral data, and the ground reference spectra data collected from the United States Geological Survey (USGS) spectral library. Based on spectral analysis and statistics, a mineral mapping decision-tree model for the Cuprite mining district in Nevada, USA, is constructed. Then, the SMMA algorithm is used to perform mineral mapping experiments. The mineral map from the USGS (USGS map) in the Cuprite area is selected for validation purposes. Results showed that the SMMA algorithm is able to identify most minerals with high coincidence with USGS map results. Compared with Hyperion data (overall accuracy=74.54%), AVIRIS data showed overall better mineral mapping results (overall accuracy=94.82%) due to low signal-to-noise ratio and high spatial resolution.",TRUE,noun phrase
R142,Earth Sciences,R143486,Application of remote sensing methods to hydrology and water resources,S574491,R143488,Application,L402512,Soil moisture,"Abstract A brief review of research in remote sensing of water resources indicates that there are many positive results, and some techniques have been applied operationally. Currently, remote sensing data are being used operationally in precipitation estimates, soil moisture measurements for irrigation scheduling, snow water equivalent and snow cover extent assessments, seasonal and short term snowmelt runoff forecasts, and surface water inventories. In the next decade other operational applications are likely using remote measurements of land cover, sediment loads, erosion, groundwater, and areal inputs to hydrological models. Many research challenges remain, and significant progress is expected in areas like albedo measurements, energy budgets, and evapotranspiration estimation. The research in remote sensing and water resources also has much relevance for related studies of climate change and global habitability.",TRUE,noun phrase
R142,Earth Sciences,R140823,"Spatial distribution of altered minerals in the Gadag Schist Belt (GSB) of Karnataka, Southern India using hyperspectral remote sensing data",S563691,R140825,Analysis,R108184,Spectral Angle Mapper (SAM),"Abstract Spatial distribution of altered minerals in rocks and soils in the Gadag Schist Belt (GSB) is carried out using Hyperion data of March 2013. The entire spectral range is processed with emphasis on VNIR (0.4–1.0 μm) and SWIR regions (2.0–2.4 μm). Processing methodology includes Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes correction, minimum noise fraction transformation, spectral feature fitting (SFF) and spectral angle mapper (SAM) in conjunction with spectra collected, using an analytical spectral device spectroradiometer. A total of 155 bands were analysed to identify and map the major altered minerals by studying the absorption bands between the 0.4–1.0-μm and 2.0–2.3-μm wavelength regions. The most important and diagnostic spectral absorption features occur at 0.6–0.7 μm, 0.86 and at 0.9 μm in the VNIR region due to charge transfer of crystal field effect in the transition elements, whereas absorption near 2.1, 2.2, 2.25 and 2.33 μm in the SWIR region is related to the bending and stretching of the bonds in hydrous minerals (Al-OH, Fe-OH and Mg-OH), particularly in clay minerals. SAM and SFF techniques are implemented to identify the minerals present. A score of 0.33–1 was assigned for both SAM and SFF, where a value of 1 indicates the exact mineral type. However, endmember spectra were compared with United States Geological Survey and John Hopkins University spectral libraries for minerals and soils. Five minerals, i.e. kaolinite-5, kaolinite-2, muscovite, haematite, kaosmec and one soil, i.e. greyish brown loam have been identified. Greyish brown loam and kaosmec have been mapped as the major weathering/altered products present in soils and rocks of the GSB. This was followed by haematite and kaolinite. 
The SAM classifier was then applied on a Hyperion image to produce a mineral map. The dominant lithology of the area included greywacke, argillite and granite gneiss.",TRUE,noun phrase
R142,Earth Sciences,R155169,"Utilization of Hyperion data over Dongargarh, India, for mapping altered/weathered and clay minerals along with field spectral measurements",S620793,R155171,Techniques/Methods,R108184,Spectral Angle Mapper (SAM),"Hyperion data acquired over Dongargarh area, Chattisgarh (India), in December 2006 have been analysed to identify dominant mineral types present in the area, with special emphasis on mapping the altered/weathered and clay minerals present in the rocks and soils. Various advanced spectral processes such as reflectance calibration of the Hyperion data, minimum noise fraction transformation, spectral feature fitting (SFF) and spectral angle mapper (SAM) have been used for comparison/mapping in conjunction with spectra of rocks and soils that have been collected in the field using Analytical Spectral Devices's FieldSpec instrument. In this study, 40 shortwave infrared channels ranging from 2.0 to 2.4 μm were analysed mainly to identify and map the major altered/weathered and clay minerals by studying the absorption bands around the 2.2 and 2.3 μm wavelength regions. The absorption characteristics were the results of O–H stretching in the lattices of various hydrous minerals, in particular, clay minerals, constituting altered/weathered rocks and soils. SAM and SFF techniques implemented in Spectral Analyst were applied to identify the minerals present in the scene. A score of 0–1 was generated for both SAM and SFF, where a value of 1 indicated a perfect match showing the exact mineral type. Endmember spectra were matched with those of the minerals as available in the United States Geological Survey Spectral Library. Four minerals, oligoclase, rectorite, kaolinite and desert varnish, have been identified in the studied area. The SAM classifier was then applied to produce a mineral map over a subset of the Hyperion scene. 
The dominant lithology of the area included Dongargarh granite, Bijli rhyolite and Pitepani volcanics of Palaeo-Proterozoic age. Feldspar is one of the most dominant mineral constituents of all the above-mentioned rocks, which is highly susceptible to chemical weathering and produces various types of clay minerals. Oligoclase (a feldspar) was found in these areas where mostly rock outcrops were encountered. Kaolinite was also found mainly near exposed rocks, as it was formed due to the weathering of feldspar. Rectorite is the other clay mineral type that is observed mostly in the southern part of the studied area, where Bijli rhyolite dominates the lithology. However, the most predominant mineral type coating observed in this study is desert varnish, which is nothing but an assemblage of very fine clay minerals and forms a thin veneer on rock/soil surfaces, rendering a dark appearance to the latter. Thus, from this study, it could be inferred that Hyperion data can be well utilized to identify and map altered/weathered and clay minerals based on the study of the shape, size and position of spectral absorption features, which were otherwise absent in the signatures of the broadband sensors.",TRUE,noun phrase
R142,Earth Sciences,R155173,The spectral analysis and information extraction for small geological target detection using hyperion image,S620805,R155175,Techniques/Methods,R108184,Spectral Angle Mapper (SAM),"Imaging spectroscopic technique has been used for the mineral and rock geological mapping and alteration information extraction successfully with many reasonable results, but it is mainly used in arid and semi-arid land with low vegetation covering. In the case of the high vegetation covering, the outcrop of the altered rocks is small and distributes sparsely, the altered rocks is difficult to be identified directly. The target detection technique using imaging spectroscopic data should be introduced to the extraction of small geological targets under high vegetation covering area. In the paper, we take Ding-Ma gold deposit as the study area which located in Zhenan country, Shanxi province, the spectral features of the targets and the backgrounds are studied and analyzed using the field reflectance spectra, in addition to the study of the principle of the algorithms, some target detection algorithms which is appropriate to the small geological target detection are introduced. At last, the small altered rock targets under the covering of vegetation in forest are detected and discriminated in imaging spectroscopy data with the methods of spectral angle mapper (SAM), Constrained Energy Minimization (CEM) and Adaptive Cosine Estimator (ACE). The detection results are reasonable and indicate the ability of target detection algorithms in geological target detection in the forest area.",TRUE,noun phrase
R142,Earth Sciences,R160566,"The Performance of Maximum Likelihood, Spectral Angle Mapper, Neural Network and Decision Tree Classifiers in Hyperspectral Image Analysis",S640490,R160568,Techniques/Methods,R108184,Spectral Angle Mapper (SAM),"Several classification algorithms for pattern recognition had been tested in the mapping of tropical forest cover using airborne hyperspectral data. Results from the use of Maximum Likelihood (ML), Spectral Angle Mapper (SAM), Artificial Neural Network (ANN) and Decision Tree (DT) classifiers were compared and evaluated. It was found that ML performed the best followed by ANN, DT and SAM with accuracies of 86%, 84%, 51% and 49% respectively.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620298,R155125,yields,R108184,Spectral Angle Mapper (SAM),"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. 
Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R155127,"AVIRIS-NG Data for Geological Applications in Southeastern Parts of Aravalli Fold Belt, Rajasthan",S620321,R155129,yields,R108184,Spectral Angle Mapper (SAM),"Advanced techniques using high resolution hyperspectral remote sensing data has recently evolved as an emerging tool with potential to aid mineral exploration. In this study, pertinently, five mosaicked scenes of Airborne Visible InfraRed Imaging Spectrometer-Next Generation (AVIRIS-NG) hyperspectral data of southeastern parts of the Aravalli Fold belt in Jahazpur area, Rajasthan, were processed. The exposed Proterozoic rocks in this area is of immense economic and scientific interest because of richness of poly-metallic mineral resources and their unique metallogenesis. Analysis of high resolution multispectral satellite image reveals that there are many prominent lineaments which acted as potential conduits of hydrothermal fluid emanation, some of which resulted in altering the country rock. This study takes cues from studying those altered minerals to enrich our knowledge base on mineralized zones. In this imaging spectroscopic study we have identified different hydrothermally altered minerals consisting of hydroxyl, carbonate and iron-bearing species. Spectral signatures (image based) of minerals such as Kaosmec, Talc, Kaolinite, Dolomite, and Montmorillonite were derived in SWIR (Short wave infrared) region while Iron bearing minerals such as Goethite and Limonite were identified in the VNIR (Visible and Near Infrared) region of electromagnetic spectrum. Validation of the target minerals was done by subsequent ground truthing and X-ray diffractogram (XRD) analysis. The altered end members were further mapped by Spectral Angle Mapper (SAM) and Adaptive Coherence Estimator (ACE) techniques to detect target minerals. Accuracy assessment was reported to be 86.82% and 77.75% for SAM and ACE respectively. This study confirms that the AVIRIS-NG hyperspectral data provides better solution for identification of endmember minerals.",TRUE,noun phrase
R142,Earth Sciences,R140823,"Spatial distribution of altered minerals in the Gadag Schist Belt (GSB) of Karnataka, Southern India using hyperspectral remote sensing data",S563692,R140825,Analysis,R140796,Spectral feature fitting (SFF) ,"Abstract Spatial distribution of altered minerals in rocks and soils in the Gadag Schist Belt (GSB) is carried out using Hyperion data of March 2013. The entire spectral range is processed with emphasis on VNIR (0.4–1.0 μm) and SWIR regions (2.0–2.4 μm). Processing methodology includes Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes correction, minimum noise fraction transformation, spectral feature fitting (SFF) and spectral angle mapper (SAM) in conjunction with spectra collected, using an analytical spectral device spectroradiometer. A total of 155 bands were analysed to identify and map the major altered minerals by studying the absorption bands between the 0.4–1.0-μm and 2.0–2.3-μm wavelength regions. The most important and diagnostic spectral absorption features occur at 0.6–0.7 μm, 0.86 and at 0.9 μm in the VNIR region due to charge transfer of crystal field effect in the transition elements, whereas absorption near 2.1, 2.2, 2.25 and 2.33 μm in the SWIR region is related to the bending and stretching of the bonds in hydrous minerals (Al-OH, Fe-OH and Mg-OH), particularly in clay minerals. SAM and SFF techniques are implemented to identify the minerals present. A score of 0.33–1 was assigned for both SAM and SFF, where a value of 1 indicates the exact mineral type. However, endmember spectra were compared with United States Geological Survey and John Hopkins University spectral libraries for minerals and soils. Five minerals, i.e. kaolinite-5, kaolinite-2, muscovite, haematite, kaosmec and one soil, i.e. greyish brown loam have been identified. Greyish brown loam and kaosmec have been mapped as the major weathering/altered products present in soils and rocks of the GSB. This was followed by haematite and kaolinite. The SAM classifier was then applied on a Hyperion image to produce a mineral map. The dominant lithology of the area included greywacke, argillite and granite gneiss.",TRUE,noun phrase
R142,Earth Sciences,R155123,"Mapping hydrothermal alteration minerals using high-resolution AVIRIS-NG hyperspectral data in the Hutti-Maski gold deposit area, India",S620302,R155125,yields,R155146,Spectral Similarity Matrix,"ABSTRACT The present study exploits high-resolution hyperspectral imagery acquired by the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor from the Hutti-Maski gold deposit area, India, to map hydrothermal alteration minerals. The study area is a volcanic-dominated late Archean greenstone belt that hosts major gold mineralization in the Eastern Dharwar Craton of southern India. The study encompasses pre-processing, spectral and spatial image reduction using Minimum Noise Fraction (MNF) and Fast Pixel Purity Index (FPPI), followed by endmember extraction using n-dimensional visualizer and the United States Geological Survey (USGS) mineral spectral library. Image derived endmembers such as goethite, chlorite, chlorite at the mine site (chlorite mixed with mined materials), kaolinite, and muscovite were subsequently used in spectral mapping methods such as Spectral Angle Mapper (SAM), Spectral Information Divergence (SID) and its hybrid, i.e. SIDSAMtan. Spectral similarity matrix of the target and non-target-based method has been proposed to find the possible optimum threshold needed to obtain mineral map using spectral mapping methods. Relative Spectral Discrimination Power (RSDPW) and Confusion Matrix (CM) have been used to evaluate the performance of SAM, SID, and SIDSAMtan. The RSDPW and CM illustrate that the SIDSAMtan benefits from the unique characteristics of SAM and SID to achieve better discrimination capability. The Overall Accuracy (OA) and kappa coefficient (ҡ) of SAM, SID, and SIDSAMtan were computed using 900 random validation points and obtained 90% (OA) and 0.88 (ҡ), 91.4% and 0.90, and 94.4% and 0.93, respectively. Obtained mineral map demonstrates that the northern portion of the area mainly consists of muscovite whereas the southern part is marked by chlorite, goethite, muscovite and kaolinite, indicating the propylitic alteration. Most of these minerals are associated with altered metavolcanic rocks and migmatite.",TRUE,noun phrase
R142,Earth Sciences,R143763,Development and utilization of urban spectral library for remote sensing of urban environment,S575764,R143765,Application,L403325,Urban Materials,Hyperspectral technology is useful for urban studies due to its capability in examining detailed spectral characteristics of urban materials. This study aims to develop a spectral library of urban materials and demonstrate its application in remote sensing analysis of an urban environment. Field measurements were conducted by using ASD FieldSpec 3 Spectroradiometer with wavelength range from 350 to 2500 nm. The spectral reflectance curves of urban materials were interpreted and analyzed. A collection of 22 spectral data was compiled into a spectral library. The spectral library was put to practical use by utilizing the reference spectra for WorldView-2 satellite image classification which demonstrates the usability of such infrastructure to facilitate further progress of remote sensing applications in Malaysia.,TRUE,noun phrase
R142,Earth Sciences,R144199,Machine learning in remote sensing data processing,S577208,R144201,Output/Application,L404060,Urban monitoring,"Remote sensing data processing deals with real-life applications with great societal values. For instance urban monitoring, fire detection or flood prediction from remotely sensed multispectral or radar images have a great impact on economical and environmental issues. To treat efficiently the acquired data and provide accurate products, remote sensing has evolved into a multidisciplinary field, where machine learning and signal processing algorithms play an important role nowadays. This paper serves as a survey of methods and applications, and reviews the latest methodological advances in machine learning for remote sensing data analysis.",TRUE,noun phrase
R142,Earth Sciences,R155157,"Petrography, XRD Analysis and Identification of Talc Minerals near Chhabadiya Village of Jahajpur Region, Bhilwara, India through Hyperion Hyperspectral Remote Sensing Data",S620727,R155159,Supplementary sources,R108183,USGS Spectral Library,"The larger synoptic view and contiguous channels arrangement of Hyperion hyperspectral remote sensing data enhance the minor spectral identification of earth’s features such as minerals, atmospheric gasses, vegetation and so on. Hydrothermal alteration minerals mostly associated with vicinity of geological structural features such as lineaments and fractures. In this study Hyperion data is used for identification of hydrothermally altered minerals and alteration facies near Chhabadiya village of Jahajpur area, Bhilwara, Rajasthan. There are some minerals such as talc minerals identified through Hyperion imagery. The identified talc minerals correlated and evaluated through petrographic analysis, XRD analysis and spectroscopic analysis. The validation of identified minerals completed by field survey, field sample spectra and USGS spectral library talc mineral spectra. The conclusion is that Hyperion hyperspectral remote sensing data have capability to identify the minerals, mineral assemblage, alteration minerals and alteration facies.",TRUE,noun phrase
R142,Earth Sciences,R140812,Characterization and mapping of hematite ore mineral classes using hyperspectral remote sensing technique: a case study from Bailadila iron ore mining region,S563507,R140813,Analysis,R141074,X-ray fluorescence (XRF),"Abstract The study demonstrates a methodology for mapping various hematite ore classes based on their reflectance and absorption spectra, using Hyperion satellite imagery. Substantial validation is carried out, using the spectral feature fitting technique, with the field spectra measured over the Bailadila hill range in Chhattisgarh State in India. The results of the study showed a good correlation between the concentration of iron oxide with the depth of the near-infrared absorption feature (R 2 = 0.843) and the width of the near-infrared absorption feature (R 2 = 0.812) through different empirical models, with a root-mean-square error (RMSE) between < 0.317 and < 0.409. The overall accuracy of the study is 88.2% with a Kappa coefficient value of 0.81. Geochemical analysis and X-ray fluorescence (XRF) of field ore samples are performed to ensure different classes of hematite ore minerals. Results showed a high content of Fe > 60 wt% in most of the hematite ore samples, except banded hematite quartzite (BHQ) (< 47 wt%).",TRUE,noun phrase
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R157056,A DNA Barcode Library for North American Pyraustinae (Lepidoptera: Pyraloidea: Crambidae),S629655,R157057,lower number estimated species (Method),R108976,current taxonomy,"Although members of the crambid subfamily Pyraustinae are frequently important crop pests, their identification is often difficult because many species lack conspicuous diagnostic morphological characters. DNA barcoding employs sequence diversity in a short standardized gene region to facilitate specimen identifications and species discovery. This study provides a DNA barcode reference library for North American pyraustines based upon the analysis of 1589 sequences recovered from 137 nominal species, 87% of the fauna. Data from 125 species were barcode compliant (>500bp, <1% n), and 99 of these taxa formed a distinct cluster that was assigned to a single BIN. The other 26 species were assigned to 56 BINs, reflecting frequent cases of deep intraspecific sequence divergence and a few instances of barcode sharing, creating a total of 155 BINs. Two systems for OTU designation, ABGD and BIN, were examined to check the correspondence between current taxonomy and sequence clusters. The BIN system performed better than ABGD in delimiting closely related species, while OTU counts with ABGD were influenced by the value employed for relative gap width. Different species with low or no interspecific divergence may represent cases of unrecognized synonymy, whereas those with high intraspecific divergence require further taxonomic scrutiny as they may involve cryptic diversity. The barcode library developed in this study will also help to advance understanding of relationships among species of Pyraustinae.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54210,Contrasting plant physiological adaptation to climate in the native and introduced range of Hypericum perforatum,S167371,R54211,Specific traits,L101949, Leaf-level morphological and physiological traits ,"Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf δ13C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf δ13C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf δ13C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54142,Microhabitat analysis of the invasive exotic liana Lonicera japonica Thunb.,S166568,R54143,Species name,L101282, Lonicera japonica,"Abstract We documented microhabitat occurrence and growth of Lonicera japonica to identify factors related to its invasion into a southern Illinois shale barren. The barren was surveyed for L. japonica in June 2003, and the microhabitats of established L. japonica plants were compared to random points that sampled the range of available microhabitats in the barren. Vine and leaf characters were used as measurements of plant growth. Lonicera japonica occurred preferentially in areas of high litter cover and species richness, comparatively small trees, low PAR, low soil moisture and temperature, steep slopes, and shallow soils. Plant growth varied among these microhabitats. Among plots where L. japonica occurred, growth was related to soil and light conditions, and aspects of surrounding cover. Overhead canopy cover was a common variable associated with nearly all measured growth traits. Plasticity of traits to improve invader success can only affect the likelihood of invasion once constraints to establishment and persistence have been surmounted. Therefore, understanding where L. japonica invasion occurs, and microhabitat interactions with plant growth are important for estimating invasion success.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56096,"Across islands and continents, mammals are more successful invaders than birds",S192335,R56995,hypothesis,L120237, Tens rule,"Many invasive species cause ecological or economic damage, and the fraction of introduced species that become invasive is an important determinant of the overall costs caused by invaders. According to the widely quoted tens rule, about 10% of all introduced species establish themselves and about 10% of these established species become invasive. Global taxonomic differences in the fraction of species becoming invasive have not been described. In a global analysis of mammal and bird introductions, I show that both mammals and birds have a much higher invasion success than predicted by the tens rule, and that mammals have a significantly higher success than birds. Averaged across islands and continents, 79% of mammals and 50% of birds introduced have established themselves and 63% of mammals and 34% of birds established have become invasive. My analysis also does not support the hypothesis that islands are more susceptible to invaders than continents, as I did not find a significant relationship between invasion success and the size of the island or continent to which the species were introduced. The data set used in this study has a number of limitations, e.g. information on propagule pressure was not available at this global scale, so understanding the mechanisms behind the observed patterns has to be postponed to future studies.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56996,Invasion success of vertebrates in Europe and North America,S192369,R56998,hypothesis,L120265, Tens rule,"Species become invasive if they (i) are introduced to a new range, (ii) establish themselves, and (iii) spread. To address the global problems caused by invasive species, several studies investigated steps ii and iii of this invasion process. However, only one previous study looked at step i and examined the proportion of species that have been introduced beyond their native range. We extend this research by investigating all three steps for all freshwater fish, mammals, and birds native to Europe or North America. A higher proportion of European species entered North America than vice versa. However, the introduction rate from Europe to North America peaked in the late 19th century, whereas it is still rising in the other direction. There is no clear difference in invasion success between the two directions, so neither the imperialism dogma (that Eurasian species are exceptionally successful invaders) is supported, nor is the contradictory hypothesis that North America offers more biotic resistance to invaders than Europe because of its less disturbed and richer biota. Our results do not support the tens rule either: that approximately 10% of all introduced species establish themselves and that approximately 10% of established species spread. We find a success of approximately 50% at each step. In comparison, only approximately 5% of native vertebrates were introduced in either direction. These figures show that, once a vertebrate is introduced, it has a high potential to become invasive. Thus, it is crucial to minimize the number of species introductions to effectively control invasive vertebrates.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54224,Phenotypic plasticity of an invasive acacia versus two native Mediterranean species,S167536,R54225,Species name,L102086,Acacia longifolia,"The phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings’ ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54056,"Seasonal Photoperiods Alter Developmental Time and Mass of an Invasive Mosquito, Aedes albopictus (Diptera: Culicidae), Across Its North-South Range in the United States",S165567,R54057,Species name,L100453,Aedes albopictus,"ABSTRACT The Asian tiger mosquito, Aedes albopictus (Skuse), is perhaps the most successful invasive mosquito species in contemporary history. In the United States, Ae. albopictus has spread from its introduction point in southern Texas to as far north as New Jersey (i.e., a span of ≈14° latitude). This species experiences seasonal constraints in activity because of cold temperatures in winter in the northern United States, but is active year-round in the south. We performed a laboratory experiment to examine how life-history traits of Ae. albopictus from four populations (New Jersey [39.4° N], Virginia [38.6° N], North Carolina [35.8° N], Florida [27.6° N]) responded to photoperiod conditions that mimic approaching winter in the north (short static daylength, short diminishing daylength) or relatively benign summer conditions in the south (long daylength), at low and high larval densities. Individuals from northern locations were predicted to exhibit reduced development times and to emerge smaller as adults under short daylength, but be larger and take longer to develop under long daylength. Life-history traits of southern populations were predicted to show less plasticity in response to daylength because of low probability of seasonal mortality in those areas. Males and females responded strongly to photoperiod regardless of geographic location, being generally larger but taking longer to develop under the long daylength compared with short day lengths; adults of both sexes were smaller when reared at low larval densities. Adults also differed in mass and development time among locations, although this effect was independent of density and photoperiod in females but interacted with density in males. Differences between male and female mass and development times was greater in the long photoperiod suggesting differences between the sexes in their reaction to different photoperiods. This work suggests that Ae. albopictus exhibits sex-specific phenotypic plasticity in life-history traits matching variation in important environmental variables.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54106,High temperature tolerance and thermal plasticity in emerald ash borer Agrilus planipennis,S166149,R54107,Species name,L100935,Agrilus planipennis,"1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood‐boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat‐treatment at a facility using protocols for pallet wood treatment under policy PI‐07, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth‐instar larvae. 3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 °C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 °C without any heat pre‐treatments. High temperature survival was increased by either slow warming or pre‐exposure to elevated temperatures and a recovery regime that was accompanied by up‐regulated hsp70 expression under some of these conditions. 4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54112,VARIATION IN PHENOTYPIC PLASTICITY AMONG NATIVE AND INVASIVE POPULATIONS OF ALLIARIA PETIOLATA,S166218,R54113,Species name,L100992,Alliaria petiolata,"Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54198,Predicting invasiveness in exotic species: do subtropical native and invasive exotic aquatic plants differ in their growth responses to macronutrients?,S167227,R54199,Species name,L101829,Aquatic plant species,"We investigated whether plasticity in growth responses to nutrients could predict invasive potential in aquatic plants by measuring the effects of nutrients on growth of eight non‐invasive native and six invasive exotic aquatic plant species. Nutrients were applied at two levels, approximating those found in urbanized and relatively undisturbed catchments, respectively. To identify systematic differences between invasive and non‐invasive species, we compared the growth responses (total biomass, root:shoot allocation, and photosynthetic surface area) of native species with those of related invasive species after 13 weeks growth. The results were used to seek evidence of invasive potential among four recently naturalized species. There was evidence that invasive species tend to accumulate more biomass than native species (P = 0.0788). Root:shoot allocation did not differ between native and invasive plant species, nor was allocation affected by nutrient addition. However, the photosynthetic surface area of invasive species tended to increase with nutrients, whereas it did not among native species (P = 0.0658). Of the four recently naturalized species, Hydrocleys nymphoides showed the same nutrient‐related plasticity in photosynthetic area displayed by known invasive species. Cyperus papyrus showed a strong reduction in photosynthetic area with increased nutrients. H. nymphoides and C. papyrus also accumulated more biomass than their native relatives. H. nymphoides possesses both of the traits we found to be associated with invasiveness, and should thus be regarded as likely to be invasive.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54028,Jack-of-all-trades: phenotypic plasticity facilitates the invasion of an alien slug species,S165232,R54029,Species name,L100174,Arion lusitanicus,"Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (‘robust’), (ii) increase fitness in favourable environments (‘opportunistic’), or (iii) combine both abilities (‘robust and opportunistic’). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus, and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus, and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700–2400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production. During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus, representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus. We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54202,Photosynthesis and water-use efficiency: A comparison between invasive (exotic) and non-invasive (native) species,S167273,R54203,Species name,L101867,Berberis darwinii,"Invasive species have been hypothesized to out-compete natives though either a Jack-of-all-trades strategy, where they are able to utilize resources effectively in unfavourable environments, a master-of-some, where resource utilization is greater than its competitors in favourable environments, or a combination of the two (Jack-and-master). We examined the invasive strategy of Berberis darwinii in New Zealand compared with four co-occurring native species by examining germination, seedling survival, photosynthetic characteristics and water-use efficiency of adult plants, in sun and shade environments. Berberis darwinii seeds germinated more in shady sites than the other natives, but survival was low. In contrast, while germination of B. darwinii was the same as the native species in sunny sites, seedling survival after 18 months was nearly twice that of the all native species. The maximum photosynthetic rate of B. darwinii was nearly double that of all native species in the sun, but was similar among all species in the shade. Other photosynthetic traits (quantum yield and stomatal conductance) did not generally differ between B. darwinii and the native species, regardless of light environment. Berberis darwinii had more positive values of δ13C than the four native species, suggesting that it gains more carbon per unit water transpired than the competing native species. These results suggest that the invasion success of B. darwinii may be partially explained by combination of a Jack-of-all-trades scenario of widespread germination with a master-of-some scenario through its ability to photosynthesize at higher rates in the sun and, hence, gain a rapid height and biomass advantage over native species in favourable environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55041,TRANSLOCATION AS A SPECIES CONSERVATION TOOL - STATUS AND STRATEGY,S176655,R55043,Investigated species,L109307,Birds and Mammals,"Surveys of recent (1973 to 1986) intentional releases of native birds and mammals to the wild in Australia, Canada, Hawaii, New Zealand, and the United States were conducted to document current activities, identify factors associated with success, and suggest guidelines for enhancing future work. Nearly 700 translocations were conducted each year. Native game species constituted 90 percent of translocations and were more successful (86 percent) than were translocations of threatened, endangered, or sensitive species (46 percent). Knowledge of habitat quality, location of release area within the species range, number of animals released, program length, and reproductive traits allowed correct classification of 81 percent of observed translocations as successful or not.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54232,Leaf ontogenetic dependence of light acclimation in invasive and native subtropical trees of different successional status,S167628,R54233,Species name,L102162,Bischofia javanica,"In the Bonin Islands of the western Pacific where the light environment is characterized by high fluctuations due to frequent typhoon disturbance, we hypothesized that the invasive success of Bischofia javanica Blume (invasive tree, mid-successional) may be attributable to a high acclimation capacity under fluctuating light availability. The physiological and morphological responses of B. javanica to both simulated canopy opening and closure were compared against three native species of different successional status: Trema orientalis Blume (pioneer), Schima mertensiana (Sieb. et Zucc.) Koidz (mid-successional) and Elaeocarpus photiniaefolius Hook.et Arn (late-successional). The results revealed significant species-specific differences in the timing of physiological maturity and phenotypic plasticity in leaves developed under constant high and low light levels. For example, the photosynthetic capacity of T. orientalis reached a maximum in leaves that had just fully expanded when grown under constant high light (50% of full sun) whereas that of E. photiniaefolius leaves continued to increase until 50 d after full expansion. For leaves that had just reached full expansion, T. orientalis, having high photosynthetic plasticity between high and low light, exhibited low acclimation capacity under the changing light (from high to low or low to high light). In comparison with native species, B. javanica showed a higher degree of physiological and morphological acclimation following transfer to a new light condition in leaves of all age classes (i.e. before and after reaching full expansion). The high acclimation ability of B. javanica in response to changes in light availability may be a part of its pre-adaptations for invasiveness in the fluctuating environment of the Bonin Islands.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54090,Multiple common garden experiments suggest lack of local adaptation in an invasive ornamental plant,S165960,R54091,Species name,L100778,Buddleja davidii,"Aims Adaptive evolution along geographic gradients of climatic conditions is suggested to facilitate the spread of invasive plant species, leading to clinal variation among populations in the introduced range. We investigated whether adaptation to climate is also involved in the invasive spread of an ornamental shrub, Buddleja davidii, across western and central Europe. Methods We combined a common garden experiment, replicated in three climatically different central European regions, with reciprocal transplantation to quantify genetic differentiation in growth and reproductive traits of 20 invasive B. davidii populations. Additionally, we compared compensatory regrowth among populations after clipping of stems to simulate mechanical damage.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54174,"Growth, water relations, and stomatal development of Caragana korshinskii Kom. and Zygophyllum xanthoxylum (Bunge) Maxim. seedlings in response to water deficits",S166942,R54175,Species name,L101592,Caragana korshinskii,"Abstract The selection and introduction of drought tolerant species is a common method of restoring degraded grasslands in arid environments. This study investigated the effects of water stress on growth, water relations, Na+ and K+ accumulation, and stomatal development in the native plant species Zygophyllum xanthoxylum (Bunge) Maxim., and an introduced species, Caragana korshinskii Kom., under three watering regimes. Moderate drought significantly reduced pre‐dawn water potential, leaf relative water content, total biomass, total leaf area, above‐ground biomass, total number of leaves and specific leaf area, but it increased the root/total weight ratio (0.23 versus 0.33) in C. korshinskii. Only severe drought significantly affected water status and growth in Z. xanthoxylum. In any given watering regime, a significantly higher total biomass was observed in Z. xanthoxylum (1.14 g) compared to C. korshinskii (0.19 g). Moderate drought significantly increased Na+ accumulation in all parts of Z. xanthoxylum, e.g., moderate drought increased leaf Na+ concentration from 1.14 to 2.03 g/100 g DW, however, there was no change in Na+ (0.11 versus 0.12) in the leaf of C. korshinskii when subjected to moderate drought. Stomatal density increased as water availability was reduced in both C. korshinskii and Z. xanthoxylum, but there was no difference in stomatal index of either species. Stomatal length and width, and pore width were significantly reduced by moderate water stress in Z. xanthoxylum, but severe drought was required to produce a significant effect in C. korshinskii. These results indicated that C. korshinskii is more responsive to water stress and exhibits strong phenotypic plasticity especially in above‐ground/below‐ground biomass allocation. In contrast, Z. xanthoxylum was more tolerant to water deficit, with a lower specific leaf area and a strong ability to maintain water status through osmotic adjustment and stomatal closure, thereby providing an effective strategy to cope with local extreme arid environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54200,Spreading of the invasive Carpobrotus aff. acinaciformis in Mediterranean ecosystems: The advantage of performing in different light environments,S167250,R54201,Species name,L101848,Carpobrotus aff. acinaciformis,"ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002–2003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54134,"Thermal variability alters climatic stress resistance and plastic responses in a globally invasive pest, the Mediterranean fruit fly (Ceratitis capitata)",S166471,R54135,Species name,L101201,Ceratitis capitata,"Climatic means with different degrees of variability (δ) may change in the future and could significantly impact ectotherm species fitness. Thus, there is an increased interest in understanding the effects of changes in means and variances of temperature on traits of climatic stress resistance. Here, we examined short‐term (within‐generation) variation in mean temperature (23, 25, and 27 °C) at three levels of diel thermal fluctuations (δ = 1, 3, or 5 °C) on an invasive pest insect, the Mediterranean fruit fly, Ceratitis capitata (Wiedemann) (Diptera: Tephritidae). Using the adult flies, we address the hypothesis that temperature variability may affect the climatic stress resistance over and above changes in mean temperature at constant variability levels. We scored the traits of high‐ and low‐thermal tolerance, high‐ and low‐temperature acute hardening ability, water balance, and egg production under benign conditions after exposure to each of the nine experimental scenarios. Most importantly, results showed that temperature variance may have significant effects in addition to the changes in mean temperature for most traits scored. Although typical acclimation responses were detected for most of the traits under low variance conditions, high variance scenarios dramatically altered the outcomes, with poorer climatic stress resistance detected in some, but not all, traits. These results suggest that large temperature fluctuations might limit plastic responses which in turn could reduce the insect fitness. Increased mean temperatures in conjunction with increased temperature variability may therefore have stronger negative effects on this agricultural pest than elevated temperatures alone. The results of this study therefore have significant implications for understanding insect responses to climate change and suggest that analyses or simulations of only mean temperature variation may be inappropriate for predicting population‐level responses under future climate change scenarios despite their widespread use.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54140,Phenotypic plasticity of thermal tolerance contributes to the invasion potential of Mediterranean fruit flies (Ceratitis capitata) ,S166545,R54141,Species name,L101263,Ceratitis capitata,"1. The invasion success of Ceratitis capitata probably stems from physiological, morphological, and behavioural adaptations that enable them to survive in different habitats. However, it is generally poorly understood if variation in acute thermal tolerance and its phenotypic plasticity might be important in facilitating survival of C. capitata upon introduction to novel environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54082,"Geographically distinct Ceratophyllum demersum populations differ in growth, photosynthetic responses and phenotypic plasticity to nitrogen availability",S165865,R54083,Species name,L100699,Ceratophyllum demersum L.,"Two geographically distinct populations of the submerged aquatic macrophyte Ceratophyllum demersum L. were compared after acclimation to five different nitrogen concentrations (0.005, 0.02, 0.05, 0.1 and 0.2 mM N) in a common garden setup. The two populations were an apparent invasive population from New Zealand (NZ) and a noninvasive population from Denmark (DK). The populations were compared with a focus on both morphological and physiological traits. The NZ population had higher relative growth rates (RGRs) and photosynthesis rates (Pmax) (range: RGR, 0.06–0.08 per day; Pmax, 200–395 µmol O2 g–1 dry mass (DM) h–1) compared with the Danish population (range: RGR, 0.02–0.05 per day; Pmax, 88–169 µmol O2 g–1 DM h–1). The larger, faster-growing NZ population also showed higher plasticity than the DK population in response to nitrogen in traits important for growth. Hence, the observed differences in growth behaviour between the two populations are a result of genetic differences and differences in their level of plasticity. Here, we show that two populations of the same species from similar climates but different geographical areas can differ in several ecophysiological traits after growth in a common garden setup.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54162,Trade-off between morphological convergence and opportunistic diet behavior in fish hybrid zone ,S166803,R54163,Species name,L101477,Chondrostoma nasus nasus,"Abstract Background The invasive Chondrostoma nasus nasus has colonized part of the distribution area of the protected endemic species Chondrostoma toxostoma toxostoma . This hybrid zone is a complex system where multiple effects such as inter-species competition, bi-directional introgression, strong environmental pressure and so on are combined. Why do sympatric Chondrostoma fish present a unidirectional change in body shape? Is this the result of inter-species interactions and/or a response to environmental effects or the result of trade-offs? Studies focusing on the understanding of a trade-off between multiple parameters are still rare. Although this has previously been done for Cichlid species flock and for Darwin finches, where mouth or beak morphology were coupled to diet and genetic identification, no similar studies have been done for a fish hybrid zone in a river. We tested the correlation between morphology (body and mouth morphology), diet (stable carbon and nitrogen isotopes) and genomic combinations in different allopatric and sympatric populations for a global data set of 1330 specimens. To separate the species interaction effect from the environmental effect in sympatry, we distinguished two data sets: the first one was obtained from a highly regulated part of the river and the second was obtained from specimens coming from the less regulated part. Results The distribution of the hybrid combinations was different in the two part of the sympatric zone, whereas all the specimens presented similar overall changes in body shape and in mouth morphology. Sympatric specimens were also characterized by a larger diet behavior variance than reference populations, characteristic of an opportunistic diet. No correlation was established between the body shape (or mouth deformation) and the stable isotope signature. Conclusion The Durance River is an untamed Mediterranean river despite the presence of numerous dams that split the river from upstream to downstream. The sympatric effect on morphology and the large diet behavior range can be explained by a tendency toward an opportunistic behavior of the sympatric specimens. Indeed, the similar response of the two species and their hybrids implied an adaptation that could be defined as an alternative trade-off that underline the importance of epigenetics mechanisms for potential success in a novel environment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54020,"Establishment of an Invasive Plant Species (Conium maculatum) in Contaminated Roadside Soil in Cook County, Illinois",S165140,R54021,Species name,L100098,Conium maculatum,"Abstract Interactions between environmental variables in anthropogenically disturbed environments and physiological traits of invasive species may help explain reasons for invasive species' establishment in new areas. Here we analyze how soil contamination along roadsides may influence the establishment of Conium maculatum (poison hemlock) in Cook County, IL, USA. We combine analyses that: (1) characterize the soil and measure concentrations of heavy metals and polycyclic aromatic hydrocarbons (PAHs) where Conium is growing; (2) assess the genetic diversity and structure of individuals among nine known populations; and (3) test for tolerance to heavy metals and evidence for local soil growth advantage with greenhouse establishment experiments. We found elevated levels of metals and PAHs in the soil where Conium was growing. Specifically, arsenic (As), cadmium (Cd), and lead (Pb) were found at elevated levels relative to U.S. EPA ecological contamination thresholds. In a greenhouse study we found that Conium is more tolerant of soils containing heavy metals (As, Cd, Pb) than two native species. For the genetic analysis a total of 217 individuals (approximately 20–30 per population) were scored with 5 ISSR primers, yielding 114 variable loci. We found high levels of genetic diversity in all populations but little genetic structure or differentiation among populations. Although Conium shows a general tolerance to contamination, we found few significant associations between genetic diversity metrics and a suite of measured environmental and spatial parameters. Soil contamination is not driving the peculiar spatial distribution of Conium in Cook County, but these findings indicate that Conium is likely establishing in the Chicago region partially due to its ability to tolerate high levels of metal contamination.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54656,Roads as conduits for exotic plant invasions in a semiarid landscape,S172683,R54658,Measure of invasion success,L106253,Cover of exotic species,"Abstract: Roads are believed to be a major contributing factor to the ongoing spread of exotic plants. We examined the effect of road improvement and environmental variables on exotic and native plant diversity in roadside verges and adjacent semiarid grassland, shrubland, and woodland communities of southern Utah ( U.S.A. ). We measured the cover of exotic and native species in roadside verges and both the richness and cover of exotic and native species in adjacent interior communities ( 50 m beyond the edge of the road cut ) along 42 roads stratified by level of road improvement ( paved, improved surface, graded, and four‐wheel‐drive track ). In roadside verges along paved roads, the cover of Bromus tectorum was three times as great ( 27% ) as in verges along four‐wheel‐drive tracks ( 9% ). The cover of five common exotic forb species tended to be lower in verges along four‐wheel‐drive tracks than in verges along more improved roads. The richness and cover of exotic species were both more than 50% greater, and the richness of native species was 30% lower, at interior sites adjacent to paved roads than at those adjacent to four‐wheel‐drive tracks. In addition, environmental variables relating to dominant vegetation, disturbance, and topography were significantly correlated with exotic and native species richness and cover. Improved roads can act as conduits for the invasion of adjacent ecosystems by converting natural habitats to those highly vulnerable to invasion. However, variation in dominant vegetation, soil moisture, nutrient levels, soil depth, disturbance, and topography may render interior communities differentially susceptible to invasions originating from roadside verges. Plant communities that are both physically invasible ( e.g., characterized by deep or fertile soils ) and disturbed appear most vulnerable. Decision‐makers considering whether to build, improve, and maintain roads should take into account the potential spread of exotic plants.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54824,Pre-fire fuel reduction treatments influence plant communities and exotic species 9 years after a large wildfire,S174667,R54825,Measure of invasion success,L107903,Cover of exotic species ,"Questions: How did post-wildfire understorey plant community response, including exotic species response, differ between pre-fire treated areas that were less severely burned, and pre-fire untreated areas that were more severely burned? Were these differences consistent through time? Location: East-central Arizona, southwestern US. Methods: We used a multi-year data set from the 2002 Rodeo–Chediski Fire to detect post-fire trends in plant community response in burned ponderosa pine forests. Within the burn perimeter, we examined the effects of pre-fire fuels treatments on post-fire vegetation by comparing paired treated and untreated sites on the Apache-Sitgreaves National Forest. We sampled these paired sites in 2004, 2005 and 2011. Results: There were significant differences in pre-fire treated and untreated plant communities by species composition and abundance in 2004 and 2005, but these communities were beginning to converge in 2011. Total understorey plant cover was significantly higher in untreated areas for all 3 yr. Plant cover generally increased between 2004 and 2005 and markedly decreased in 2011, with the exception of shrub cover, which steadily increased through time. The sharp decrease in forb and graminoid cover in 2011 is likely related to drought conditions since the fire. Annual/biennial forb and graminoid cover decreased relative to perennial cover through time, consistent with the initial floristics hypothesis. Exotic plant response was highly variable and not limited to the immediate post-fire, annual/biennial community. Despite low overall exotic forb and graminoid cover for all years (<2.5%), several exotic species increased in frequency, and the relative proportion of exotic to native cover increased through time. Conclusions: Pre-treatment fuel reduction treatments helped maintain foundation overstorey species and associated native plant communities following this large wildfire. The overall low cover of exotic species on these sites supports other findings that the disturbance associated with high-severity fire does not always result in exotic species invasions. The increase in relative cover and frequency though time indicates that some species are proliferating, and continued monitoring is recommended. Patterns of exotic species invasions after severe burning are not easily predicted, and are likely more dependent on site-specific factors such as propagules, weather patterns and management.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54176,Inducible defences as key adaptations for the successful invasion of Daphnia lumholtzi in North America?,S166964,R54177,Species name,L101610,Daphnia lumholtzi,"The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant. This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54228,Predator-induced phenotypic plasticity in the exotic cladoceran Daphnia lumholtzi,S167583,R54229,Species name,L102125,Daphnia lumholtzi,"Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54176,Inducible defences as key adaptations for the successful invasion of Daphnia lumholtzi in North America?,S166965,R54177,Specific traits,L101611,Defence against fish predation,"The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant. This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56867,Comparisons of isotopic niche widths of some invasive and indigenous fauna in a South African river,S190714,R56868,Type of effect description,L119074,Dietary overlap,"Summary Biological invasions threaten ecosystem integrity and biodiversity, with numerous adverse implications for native flora and fauna. Established populations of two notorious freshwater invaders, the snail Tarebia granifera and the fish Pterygoplichthys disjunctivus, have been reported on three continents and are frequently predicted to be in direct competition with native species for dietary resources. Using comparisons of species' isotopic niche widths and stable isotope community metrics, we investigated whether the diets of the invasive T. granifera and P. disjunctivus overlapped with those of native species in a highly invaded river. We also attempted to resolve diet composition for both species, providing some insight into the original pathway of invasion in the Nseleni River, South Africa. Stable isotope metrics of the invasive species were similar to or consistently mid-range in comparison with their native counterparts, with the exception of markedly more uneven spread in isotopic space relative to indigenous species. Dietary overlap between the invasive P. disjunctivus and native fish was low, with the majority of shared food resources having overlaps of <0.26. The invasive T. granifera showed effectively no overlap with the native planorbid snail. However, there was a high degree of overlap between the two invasive species (~0.86). Bayesian mixing models indicated that detrital mangrove Barringtonia racemosa leaves contributed the largest proportion to P. disjunctivus diet (0.12–0.58), while the diet of T. granifera was more variable with high proportions of detrital Eichhornia crassipes (0.24–0.60) and Azolla filiculoides (0.09–0.33) as well as detrital Barringtonia racemosa leaves (0.00–0.30). Overall, although the invasive T. granifera and P. disjunctivus were not in direct competition for dietary resources with native species in the Nseleni River system, their spread in isotopic space suggests they are likely to restrict energy available to higher consumers in the food web. Establishment of these invasive populations in the Nseleni River is thus probably driven by access to resources unexploited or unavailable to native residents.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54661,Invasibility and abiotic gradients: the positive correlation between native and exotic plant diversity,S172724,R54662,Measure of disturbance,L106286,distance from areas of human disturbance,"We sampled the understory community in an old-growth, temperate forest to test alternative hypotheses explaining the establishment of exotic plants. We quantified the individual and net importance of distance from areas of human disturbance, native plant diversity, and environmental gradients in determining exotic plant establishment. Distance from disturbed areas, both within and around the reserve, was not correlated to exotic species richness. Numbers of native and exotic species were positively correlated at large (50 m 2 ) and small (10 m 2 ) plot sizes, a trend that persisted when relationships to environ- mental gradients were controlled statistically. Both native and exotic species richness in- creased with soil pH and decreased along a gradient of increasing nitrate availability. Exotic species were restricted to the upper portion of the pH gradient and had individualistic responses to the availability of soil resources. These results are inconsistent with both the diversity-resistance and resource-enrichment hypotheses for invasibility. Environmental conditions favoring native species richness also favor exotic species richness, and com- petitive interactions with the native flora do not appear to limit the entry of additional species into the understory community at this site. It appears that exotic species with niche requirements poorly represented in the regional flora of native species may establish with relatively little resistance or consequence for native species richness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54148,Developmental plasticity of shell morphology of quagga mussels from shallow and deep-water habitats of the Great Lakes ,S166634,R54149,Species name,L101336,Dreissena polymorpha,"SUMMARY The invasive zebra mussel (Dreissena polymorpha) has quickly colonized shallow-water habitats in the North American Great Lakes since the 1980s but the quagga mussel (Dreissena bugensis) is becoming dominant in both shallow and deep-water habitats. While quagga mussel shell morphology differs between shallow and deep habitats, functional causes and consequences of such difference are unknown. We examined whether quagga mussel shell morphology could be induced by three environmental variables through developmental plasticity. We predicted that shallow-water conditions (high temperature, food quantity, water motion) would yield a morphotype typical of wild quagga mussels from shallow habitats, while deep-water conditions (low temperature, food quantity, water motion) would yield a morphotype present in deep habitats. We tested this prediction by examining shell morphology and growth rate of quagga mussels collected from shallow and deep habitats and reared under common-garden treatments that manipulated the three variables. Shell morphology was quantified using the polar moment of inertia. Of the variables tested, temperature had the greatest effect on shell morphology. Higher temperature (∼18–20°C) yielded a morphotype typical of wild shallow mussels regardless of the levels of food quantity or water motion. In contrast, lower temperature (∼6–8°C) yielded a morphotype approaching that of wild deep mussels. If shell morphology has functional consequences in particular habitats, a plastic response might confer quagga mussels with a greater ability than zebra mussels to colonize a wider range of habitats within the Great Lakes.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R53282,Enemy damage of exotic plant species is similar to that of natives and increases with productivity,S201803,R57876,hypothesis,L127419,Enemy release,"In their colonized ranges, exotic plants may be released from some of the herbivores or pathogens of their home ranges but these can be replaced by novel enemies. It is of basic and practical interest to understand which characteristics of invaded communities control accumulation of the new pests. Key questions are whether enemy load on exotic species is smaller than on native competitors as suggested by the enemy release hypothesis (ERH) and whether this difference is most pronounced in resource‐rich habitats as predicted by the resource–enemy release hypothesis (R‐ERH). In 72 populations of 12 exotic invasive species, we scored all visible above‐ground damage morphotypes caused by herbivores and fungal pathogens. In addition, we quantified levels of leaf herbivory and fruit damage. We then assessed whether variation in damage diversity and levels was explained by habitat fertility, by relatedness between exotic species and the native community or rather by native species diversity. In a second part of the study, we also tested the ERH and the R‐ERH by comparing damage of plants in 28 pairs of co‐occurring native and exotic populations, representing nine congeneric pairs of native and exotic species. In the first part of the study, diversity of damage morphotypes and damage levels of exotic populations were greater in resource‐rich habitats. Co‐occurrence of closely related, native species in the community significantly increased the probability of fruit damage. Herbivory on exotics was less likely in communities with high phylogenetic diversity. In the second part of the study, exotic and native congeneric populations incurred similar damage diversity and levels, irrespective of whether they co‐occurred in nutrient‐poor or nutrient‐rich habitats. Synthesis. We identified habitat productivity as a major community factor affecting accumulation of enemy damage by exotic populations. Similar damage levels in exotic and native congeneric populations, even in species pairs from fertile habitats, suggest that the enemy release hypothesis or the R‐ERH cannot always explain the invasiveness of introduced species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57607,Herbivores and the success of exotic plants: a phylogenetically controlled experiment,S198327,R57608,hypothesis,L124479,Enemy release,"In a field experiment with 30 locally occurring old-field plant species grown in a common garden, we found that non-native plants suffer levels of attack (leaf herbivory) equal to or greater than levels suffered by congeneric native plants. This phylogenetically controlled analysis is in striking contrast to the recent findings from surveys of exotic organisms, and suggests that even if enemy release does accompany the invasion process, this may not be an important mechanism of invasion, particularly for plants with close relatives in the recipient flora.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57632,Enemy release? An experiment with congeneric plant pairs and diverse above- and belowground enemies,S198647,R57634,hypothesis,L124747,Enemy release,"Several hypotheses proposed to explain the success of introduced species focus on altered interspecific interactions. One of the most prominent, the Enemy Release Hypothesis, posits that invading species benefit compared to their native counterparts if they lose their herbivores and pathogens during the invasion process. We previously reported on a common garden experiment (from 2002) in which we compared levels of herbivory between 30 taxonomically paired native and introduced old-field plants. In this phylogenetically controlled comparison, herbivore damage tended to be higher on introduced than on native plants. This striking pattern, the opposite of current theory, prompted us to further investigate herbivory and several other interspecific interactions in a series of linked experiments with the same set of species. Here we show that, in these new experiments, introduced plants, on average, received less insect herbivory and were subject to half the negative soil microbial feedback compared to natives; attack by fungal and viral pathogens also tended to be reduced on introduced plants compared to natives. Although plant traits (foliar C:N, toughness, and water content) suggested that introduced species should be less resistant to generalist consumers, they were not consistently more heavily attacked. Finally, we used meta-analysis to combine data from this study with results from our previous work to show that escape generally was inconsistent among guilds of enemies: there were few instances in which escape from multiple guilds occurred for a taxonomic pair, and more cases in which the patterns of escape from different enemies canceled out. Our examination of multiple interspecific interactions demonstrates that escape from one guild of enemies does not necessarily imply escape from other guilds. Because the effects of each guild are likely to vary through space and time, the net effect of all enemies is also likely to be variable. The net effect of these interactions may create ''invasion opportunity windows'': times when introduced species make advances in native communities.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57637,"Herbivory, time since introduction and the invasiveness of exotic plants",S198696,R57638,hypothesis,L124788,Enemy release,"1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57654,Phytophagous insects of giant hogweed Heracleum mantegazzianum (Apiaceae) in invaded areas of Europe and in its native area of the Caucasus,S198918,R57655,hypothesis,L124976,Enemy release,"Giant hogweed, Heracleum mantegazzianum (Apiaceae), was introduced from the Caucasus into Western Europe more than 150 years ago and later became an invasive weed which created major problems for European authorities. Phytophagous insects were collected in the native range of the giant hogweed (Caucasus) and were compared to those found on plants in the invaded parts of Europe. The list of herbivores was compiled from surveys of 27 localities in nine countries during two seasons. In addition, litera- ture records for herbivores were analysed for a total of 16 Heracleum species. We recorded a total of 265 herbivorous insects on Heracleum species and we analysed them to describe the herbivore assemblages, locate vacant niches, and identify the most host- specific herbivores on H. mantegazzianum. When combining our investigations with similar studies of herbivores on other invasive weeds, all studies show a higher proportion of specialist herbivores in the native habitats compared to the invaded areas, supporting the ""enemy release hypothesis"" (ERH). When analysing the relative size of the niches (measured as plant organ biomass), we found less herbivore species per biomass on the stem and roots, and more on the leaves (Fig. 5). Most herbivores were polyphagous gener- alists, some were found to be oligophagous (feeding within the same family of host plants) and a few had only Heracleum species as host plants (monophagous). None were known to feed exclusively on H. mantegazzianum. The oligophagous herbivores were restricted to a few taxonomic groups, especially within the Hemiptera, and were particularly abundant on this weed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57656,Prevalence and evolutionary relationships of haematozoan parasites in native versus introduced populations of common myna Acridotheres tristis,S198945,R57657,hypothesis,L124999,Enemy release,"The success of introduced species is frequently explained by their escape from natural enemies in the introduced region. We tested the enemy release hypothesis with respect to two well studied blood parasite genera (Plasmodium and Haemoproteus) in native and six introduced populations of the common myna Acridotheres tristis. Not all comparisons of introduced populations to the native population were consistent with expectations of the enemy release hypothesis. Native populations show greater overall parasite prevalence than introduced populations, but the lower prevalence in introduced populations is driven by low prevalence in two populations on oceanic islands (Fiji and Hawaii). When these are excluded, prevalence does not differ significantly. We found a similar number of parasite lineages in native populations compared to all introduced populations. Although there is some evidence that common mynas may have carried parasite lineages from native to introduced locations, and also that introduced populations may have become infected with novel parasite lineages, it may be difficult to differentiate between parasites that are native and introduced, because malarial parasite lineages often do not show regional or host specificity.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57662,"Insect herbivore faunal diversity among invasive, non-invasive and native Eugenia species: Implications for the enemy release hypothesis",S199061,R57666,hypothesis,L125097,Enemy release,"Abstract The enemy release hypothesis (ERH) frequently has been invoked to explain the naturalization and spread of introduced species. One ramification of the ERH is that invasive plants sustain less herbivore pressure than do native species. Empirical studies testing the ERH have mostly involved two-way comparisons between invasive introduced plants and their native counterparts in the invaded region. Testing the ERH would be more meaningful if such studies also included introduced non-invasive species because introduced plants, regardless of their abundance or impact, may support a reduced insect herbivore fauna and experience less damage. In this study, we employed a three-way comparison, in which we compared herbivore faunas among native, introduced invasive, and introduced non-invasive plants in the genus Eugenia (Myrtaceae) which all co-occur in South Florida. We observed a total of 25 insect species in 12 families and 6 orders feeding on the six species of Eugenia. Of these insect species, the majority were native (72%), polyphagous (64%), and ectophagous (68%). We found that invasive introduced Eugenia has a similar level of herbivore richness as both the native and the non-invasive introduced Eugenia. However, the numbers and percentages of oligophagous insect species were greatest on the native Eugenia, but they were not different between the invasive and non-invasive introduced Eugenia. One oligophagous endophagous insect has likely shifted from the native to the invasive, but none to the non-invasive Eugenia. In summary, the invasive Eugenia encountered equal, if not greater, herbivore pressure than the non-invasive Eugenia, including from oligophagous and endophagous herbivores. Our data only provided limited support to the ERH. We would not have been able to draw this conclusion without inclusion of the non-invasive Eugenia species in the study.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57682,"Experimental field comparison of native and non-native maple seedlings: natural enemies, ecophysiology, growth and survival",S199304,R57684,hypothesis,L125304,Enemy release,"1 Acer platanoides (Norway maple) is an important non‐native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first‐season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second‐season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non‐native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non‐native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life‐history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non‐native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non‐native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57691,Soil feedback of exotic savanna grass relates to pathogen absence and mycorrhizal selectivity,S199419,R57692,hypothesis,L125403,Enemy release,"Enemy release of exotic plants from soil pathogens has been tested by examining plant-soil feedback effects in repetitive growth cycles. However, positive soil feedback may also be due to enhanced benefit from the local arbuscular mycorrhizal fungi (AMF). Few studies actually have tested pathogen effects, and none of them did so in arid savannas. In the Kalahari savanna in Botswana, we compared the soil feedback of the exotic grass Cenchrus biflorus with that of two dominant native grasses, Eragrostis lehmanniana and Aristida meridionalis. The exotic grass had neutral to positive soil feedback, whereas both native grasses showed neutral to negative feedback effects. Isolation and testing of root-inhabiting fungi of E. lehmanniana yielded two host-specific pathogens that did not influence the exotic C. biflorus or the other native grass, A. meridionalis. None of the grasses was affected by the fungi that were isolated from the roots of the exotic C. biflorus. We isolated and compared the AMF community of the native and exotic grasses by polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE), targeting AMF 18S rRNA. We used roots from monospecific field stands and from plants grown in pots with mixtures of soils from the monospecific field stands. Three-quarters of the root samples of the exotic grass had two nearly identical sequences, showing 99% similarity with Glomus versiforme. The two native grasses were also associated with distinct bands, but each of these bands occurred in only a fraction of the root samples. The native grasses contained a higher diversity of AMF bands than the exotic grass. Canonical correspondence analyses of the AMF band patterns revealed almost as much difference between the native and exotic grasses as between the native grasses. In conclusion, our results support the hypothesis that release from soil-borne enemies may facilitate local abundance of exotic plants, and we provide the first evidence that these processes may occur in arid savanna ecosystems. Pathogenicity tests implicated the involvement of soil pathogens in the soil feedback responses, and further studies should reveal the functional consequences of the observed high infection with a low diversity of AMF in the roots of exotic plants.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57710,"Metazoan parasites of introduced round and tubenose gobies in the Great Lakes: Support for the ""Enemy Release Hypothesis""",S199655,R57711,hypothesis,L125601,Enemy release,"ABSTRACT Recent invasion theory has hypothesized that newly established exotic species may initially be free of their native parasites, augmenting their population success. Others have hypothesized that invaders may introduce exotic parasites to native species and/or may become hosts to native parasites in their new habitats. Our study analyzed the parasites of two exotic Eurasian gobies that were detected in the Great Lakes in 1990: the round goby Apollonia melanostoma and the tubenose goby Proterorhinus semilunaris. We compared our results from the central region of their introduced ranges in Lakes Huron, St. Clair, and Erie with other studies in the Great Lakes over the past decade, as well as Eurasian native and nonindigenous habitats. Results showed that goby-specific metazoan parasites were absent in the Great Lakes, and all but one species were represented only as larvae, suggesting that adult parasites presently are poorly-adapted to the new gobies as hosts. Seven parasitic species are known to infest the tubenose goby in the Great Lakes, including our new finding of the acanthocephalan Southwellina hispida, and all are rare. We provide the first findings of four parasite species in the round goby and clarified two others, totaling 22 in the Great Lakes—with most being rare. In contrast, 72 round goby parasites occur in the Black Sea region. Trematodes are the most common parasitic group of the round goby in the Great Lakes, as in their native Black Sea range and Baltic Sea introduction. Holarctic trematode Diplostomum spathaceum larvae, which are one of two widely distributed species shared with Eurasia, were found in round goby eyes from all Great Lakes localities except Lake Huron proper. Our study and others reveal no overall increases in parasitism of the invasive gobies over the past decade after their establishment in the Great Lakes. In conclusion, the parasite “load” on the invasive gobies appears relatively low in comparison with their native habitats, lending support to the “enemy release hypothesis.”",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57712,Role of plant enemies in the forestry of indigenous vs. nonindigenous pines,S199691,R57714,hypothesis,L125631,Enemy release,"Plantations of rapidly growing trees are becoming increasingly common because the high productivity can enhance local economies, support improvements in educational systems, and generally improve the quality of life in rural communities. Landowners frequently choose to plant nonindigenous species; one rationalization has been that silvicultural productivity is enhanced when trees are separated from their native herbivores and pathogens. The expectation of enemy reduction in nonindigenous species has theoretical and empirical support from studies of the enemy release hypothesis (ERH) in the context of invasion ecology, but its relevance to forestry has not been evaluated. We evaluated ERH in the productive forests of Galicia, Spain, where there has been a profusion of pine plantations, some with the indigenous Pinus pinaster, but increasingly with the nonindigenous P. radiata. Here, one of the most important pests of pines is the indigenous bark beetle, Tomicus piniperda. In support of ERH, attacks by T. piniperda were more than twice as great in stands of P. pinaster compared to P. radiata. This differential held across a range of tree ages and beetle abundance. However, this extension of ERH to forestry failed in the broader sense because beetle attacks, although fewer on P. radiata, reduced productivity of P. radiata more than that of P. pinaster (probably because more photosynthetic tissue is lost per beetle attack in P. radiata). Productivity of the nonindigenous pine was further reduced by the pathogen, Sphaeropsis sapinea, which infected up to 28% of P. radiata but was absent in P. pinaster. This was consistent with the forestry axiom (antithetical to ERH) that trees planted ""off-site"" are more susceptible to pathogens. Fungal infections were positively correlated with beetle attacks; apparently T. piniperda facilitates S. sapinea infections by creating wounds and by carrying fungal propagules. A globally important component in the diminution of indigenous flora has been the deliberate large-scale propagation of nonnative trees for silviculture. At least for Pinus forestry in Spain, reduced losses to pests did not rationalize the planting of nonindigenous trees. There would be value in further exploration of relations between invasion ecology and the forestry of nonindigenous trees.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57720,"Herbivores, but not other insects, are scarce on alien plants",S199782,R57721,hypothesis,L125708,Enemy release,"Abstract Understanding how the landscape-scale replacement of indigenous plants with alien plants influences ecosystem structure and functioning is critical in a world characterized by increasing biotic homogenization. An important step in this process is to assess the impact on invertebrate communities. Here we analyse insect species richness and abundance in sweep collections from indigenous and alien (Australasian) woody plant species in South Africa's Western Cape. We use phylogenetically relevant comparisons and compare one indigenous with three Australasian alien trees within each of Fabaceae: Mimosoideae, Myrtaceae, and Proteaceae: Grevilleoideae. Although some of the alien species analysed had remarkably high abundances of herbivores, even when intentionally introduced biological control agents are discounted, overall, herbivorous insect assemblages from alien plants were slightly less abundant and less diverse compared with those from indigenous plants – in accordance with predictions from the enemy release hypothesis. However, there were no clear differences in other insect feeding guilds. We conclude that insect assemblages from alien plants are generally quite diverse, and significant differences between these and assemblages from indigenous plants are only evident for herbivorous insects.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57725,Test of the enemy release hypothesis: The native magpie moth prefers a native fireweed (Senecio pinnatifolius) to its introduced congener (S madagascariensis),S199845,R57726,hypothesis,L125761,Enemy release,"The enemy release hypothesis predicts that native herbivores will either prefer or cause more damage to native than introduced plant species. We tested this using preference and performance experiments in the laboratory and surveys of leaf damage caused by the magpie moth Nyctemera amica on a co-occuring native and introduced species of fireweed (Senecio) in eastern Australia. In the laboratory, ovipositing females and feeding larvae preferred the native S. pinnatifolius over the introduced S. madagascariensis. Larvae performed equally well on foliage of S. pinnatifolius and S. madagascariensis: pupal weights did not differ between insects reared on the two species, but growth rates were significantly faster on S. pinnatifolius. In the field, foliage damage was significantly greater on native S. pinnatifolius than introduced S. madagascariensis. These results support the enemy release hypothesis, and suggest that the failure of native consumers to switch to introduced species contributes to their invasive success. Both plant species experienced reduced, rather than increased, levels of herbivory when growing in mixed populations, as opposed to pure stands in the field; thus, there was no evidence that apparent competition occurred.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57727,Diversity and abundance of arthropod floral visitor and herbivore assemblages on exotic and native Senecio species,S199869,R57728,hypothesis,L125781,Enemy release,"The enemy release hypothesis predicts that native herbivores prefer native, rather than exotic plants, giving invaders a competitive advantage. In contrast, the biotic resistance hypothesis states that many invaders are prevented from establishing because of competitive interactions, including herbivory, with native fauna and flora. Success or failure of spread and establishment might also be influenced by the presence or absence of mutualists, such as pollinators. Senecio madagascariensis (fireweed), an annual weed from South Africa, inhabits a similar range in Australia to the related native S. pinnatifolius. The aim of this study was to determine, within the context of invasion biology theory, whether the two Senecio species share insect fauna, including floral visitors and herbivores. Surveys were carried out in south-east Queensland on allopatric populations of the two Senecio species, with collected insects identified to morphospecies. Floral visitor assemblages were variable between populations. However, the two Senecio species shared the two most abundant floral visitors, honeybees and hoverflies. Herbivore assemblages, comprising mainly hemipterans of the families Cicadellidae and Miridae, were variable between sites and no patterns could be detected between Senecio species at the morphospecies level. However, when insect assemblages were pooled (i.e. community level analysis), S. pinnatifolius was shown to host a greater total abundance and richness of herbivores. Senecio madagascariensis is unlikely to be constrained by lack of pollinators in its new range and may benefit from lower levels of herbivory compared to its native congener S. pinnatifolius.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57740,Acceleration of Exotic Plant Invasion in a Forested Ecosystem by a Generalist Herbivore,S200040,R57741,hypothesis,L125926,Enemy release,"Abstract: The successful invasion of exotic plants is often attributed to the absence of coevolved enemies in the introduced range (i.e., the enemy release hypothesis). Nevertheless, several components of this hypothesis, including the role of generalist herbivores, remain relatively unexplored. We used repeated censuses of exclosures and paired controls to investigate the role of a generalist herbivore, white‐tailed deer (Odocoileus virginianus), in the invasion of 3 exotic plant species (Microstegium vimineum, Alliaria petiolata, and Berberis thunbergii) in eastern hemlock (Tsuga canadensis) forests in New Jersey and Pennsylvania (U.S.A.). This work was conducted in 10 eastern hemlock (T. canadensis) forests that spanned gradients in deer density and in the severity of canopy disturbance caused by an introduced insect pest, the hemlock woolly adelgid (Adelges tsugae). We used maximum likelihood estimation and information theoretics to quantify the strength of evidence for alternative models of the influence of deer density and its interaction with the severity of canopy disturbance on exotic plant abundance. Our results were consistent with the enemy release hypothesis in that exotic plants gained a competitive advantage in the presence of generalist herbivores in the introduced range. The abundance of all 3 exotic plants increased significantly more in the control plots than in the paired exclosures. For all species, the inclusion of canopy disturbance parameters resulted in models with substantially greater support than the deer density only models. Our results suggest that white‐tailed deer herbivory can accelerate the invasion of exotic plants and that canopy disturbance can interact with herbivory to magnify the impact. In addition, our results provide compelling evidence of nonlinear relationships between deer density and the impact of herbivory on exotic species abundance. These findings highlight the important role of herbivore density in determining impacts on plant abundance and provide evidence of the operation of multiple mechanisms in exotic plant invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57755,Release from foliar and floral fungal pathogen species does not explain the geographic spread of naturalized North American plants in Europe,S200248,R57756,hypothesis,L126104,Enemy release,"1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy‐release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus–plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy‐release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non‐invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57763,Entomofauna of the introduced Chinese Tallow Tree,S200344,R57764,hypothesis,L126184,Enemy release,"Abstract Entomofauna in monospecific stands of the introduced Chinese tallow tree (Sapium sebiferum) and native mixed woodlands was sampled in 1982 along the Texas coast and compared to samples of arthropods from an earlier study of native coastal prairie and from a study of arthropods in S. sebiferum in 2004. Species diversity, richness, and abundance were highest in prairie, and were higher in mixed woodland than in S. sebiferum. Nonmetric multidimensional scaling distinguished orders and families of arthropods, and families of herbivores in S. sebiferum from mixed woodland and coastal prairie. Taxonomic similarity between S. sebiferum and mixed woodland was 51%. Fauna from S. sebiferum in 2001 was more similar to mixed woodland than to samples from S. sebiferum collected in 1982. These results indicate that the entomofauna in S. sebiferum originated from mixed prairie and that, with time, these faunas became more similar. Species richness and abundance of herbivores was lower in S. sebiferum, but proportion of total species in all trophic groups, except herbivores, was higher in S. sebiferum than mixed woodland. Low concentration of tannin in leaves of S. sebiferum did not explain low loss of leaves to herbivores. Lower abundance of herbivores on introduced species of plants fits the enemy release hypothesis, and low concentration of defense compounds in the face of low number of herbivores fits the evolution of increased competitive ability hypothesis.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57774,"Effects of large enemies on success of exotic species in marine fouling communities of Washington, USA",S200476,R57775,hypothesis,L126294,Enemy release,"The enemy release hypothesis, which posits that exotic species are less regulated by enemies than native species, has been well-supported in terrestrial systems but rarely tested in marine systems. Here, the enemy release hypothesis was tested in a marine system by excluding large enemies (>1.3 cm) in dock fouling communities in Washington, USA. After documenting the distribution and abundance of potential enemies such as chitons, gastropods and flatworms at 4 study sites, exclusion experiments were conducted to test the hypotheses that large grazing ene- mies (1) reduced recruitment rates in the exotic ascidian Botrylloides violaceus and native species, (2) reduced B. violaceus and native species abundance, and (3) altered fouling community struc- ture. Experiments demonstrated that, as predicted by the enemy release hypothesis, exclusion of large enemies did not significantly alter B. violaceus recruitment or abundance and it did signifi- cantly increase abundance or recruitment of 2 common native species. However, large enemy exclusion had no significant effects on most native species or on overall fouling community struc- ture. Furthermore, neither B. violaceus nor total exotic species abundance correlated positively with abundance of large enemies across sites. I therefore conclude that release from large ene- mies is likely not an important mechanism for the success of exotic species in Washington fouling communities.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57787,Low prevalence of haemosporidian parasites in the introduced house sparrow (Passer domesticus) in Brazil,S200644,R57788,hypothesis,L126436,Enemy release,"Species that are introduced to novel environments can lose their native pathogens and parasites during the process of introduction. The escape from the negative effects associated with these natural enemies is commonly employed as an explanation for the success and expansion of invasive species, which is termed the enemy release hypothesis (ERH). In this study, nested PCR techniques and microscopy were used to determine the prevalence and intensity (respectively) of Plasmodium spp. and Haemoproteus spp. in introduced house sparrows and native urban birds of central Brazil. Generalized linear mixed models were fitted by Laplace approximation considering a binomial error distribution and logit link function. Location and species were considered as random effects and species categorization (native or non-indigenous) as fixed effects. We found that native birds from Brazil presented significantly higher parasite prevalence in accordance with the ERH. We also compared our data with the literature, and found that house sparrows native to Europe exhibited significantly higher parasite prevalence than introduced house sparrows from Brazil, which also supports the ERH. Therefore, it is possible that house sparrows from Brazil might have experienced a parasitic release during the process of introduction, which might also be related to a demographic release (e.g. release from the negative effects of parasites on host population dynamics).",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57808,Range-expanding populations of a globally introduced weed experience negative plant-soil feedbacks,S200924,R57810,hypothesis,L126672,Enemy release,"Background Biological invasions are fundamentally biogeographic processes that occur over large spatial scales. Interactions with soil microbes can have strong impacts on plant invasions, but how these interactions vary among areas where introduced species are highly invasive vs. naturalized is still unknown. In this study, we examined biogeographic variation in plant-soil microbe interactions of a globally invasive weed, Centaurea solstitialis (yellow starthistle). We addressed the following questions (1) Is Centaurea released from natural enemy pressure from soil microbes in introduced regions? and (2) Is variation in plant-soil feedbacks associated with variation in Centaurea's invasive success? Methodology/Principal Findings We conducted greenhouse experiments using soils and seeds collected from native Eurasian populations and introduced populations spanning North and South America where Centaurea is highly invasive and noninvasive. Soil microbes had pervasive negative effects in all regions, although the magnitude of their effect varied among regions. These patterns were not unequivocally congruent with the enemy release hypothesis. Surprisingly, we also found that Centaurea generated strong negative feedbacks in regions where it is the most invasive, while it generated neutral plant-soil feedbacks where it is noninvasive. Conclusions/Significance Recent studies have found reduced below-ground enemy attack and more positive plant-soil feedbacks in range-expanding plant populations, but we found increased negative effects of soil microbes in range-expanding Centaurea populations. While such negative feedbacks may limit the long-term persistence of invasive plants, such feedbacks may also contribute to the success of invasions, either by having disproportionately negative impacts on competing species, or by yielding relatively better growth in uncolonized areas that would encourage lateral spread. Enemy release from soil-borne pathogens is not sufficient to explain the success of this weed in such different regions. The biogeographic variation in soil-microbe effects indicates that different mechanisms may operate on this species in different regions, thus establishing geographic mosaics of species interactions that contribute to variation in invasion success.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57814,Remote analysis of biological invasion and the impact of enemy release,S200999,R57815,hypothesis,L126737,Enemy release,"Escape from natural enemies is a widely held generalization for the success of exotic plants. We conducted a large-scale experiment in Hawaii (USA) to quantify impacts of ungulate removal on plant growth and performance, and to test whether elimination of an exotic generalist herbivore facilitated exotic success. Assessment of impacted and control sites before and after ungulate exclusion using airborne imaging spectroscopy and LiDAR, time series satellite observations, and ground-based field studies over nine years indicated that removal of generalist herbivores facilitated exotic success, but the abundance of native species was unchanged. Vegetation cover <1 m in height increased in ungulate-free areas from 48.7% +/- 1.5% to 74.3% +/- 1.8% over 8.4 years, corresponding to an annualized growth rate of lambda = 1.05 +/- 0.01 yr(-1) (median +/- SD). Most of the change was attributable to exotic plant species, which increased from 24.4% +/- 1.4% to 49.1% +/- 2.0%, (lambda = 1.08 +/- 0.01 yr(-1)). Native plants experienced no significant change in cover (23.0% +/- 1.3% to 24.2% +/- 1.8%, lambda = 1.01 +/- 0.01 yr(-1)). Time series of satellite phenology were indistinguishable between the treatment and a 3.0-km2 control site for four years prior to ungulate removal, but they diverged immediately following exclusion of ungulates. Comparison of monthly EVI means before and after ungulate exclusion and between the managed and control areas indicates that EVI strongly increased in the managed area after ungulate exclusion. Field studies and airborne analyses show that the dominant invader was Senecio madagascariensis, an invasive annual forb that increased from < 0.01% to 14.7% fractional cover in ungulate-free areas (lambda = 1.89 +/- 0.34 yr(-1)), but which was nearly absent from the control site. A combination of canopy LAI, water, and fractional cover were expressed in satellite EVI time series and indicate that the invaded region maintained greenness during drought conditions. These findings demonstrate that enemy release from generalist herbivores can facilitate exotic success and suggest a plausible mechanism by which invasion occurred. They also show how novel remote-sensing technology can be integrated with conservation and management to help address exotic plant invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57816,"Diversity, loss, and gain of malaria parasites in a globally invasive bird",S201021,R57817,hypothesis,L126755,Enemy release,"Invasive species can displace natives, and thus identifying the traits that make aliens successful is crucial for predicting and preventing biodiversity loss. Pathogens may play an important role in the invasive process, facilitating colonization of their hosts in new continents and islands. According to the Novel Weapon Hypothesis, colonizers may out-compete local native species by bringing with them novel pathogens to which native species are not adapted. In contrast, the Enemy Release Hypothesis suggests that flourishing colonizers are successful because they have left their pathogens behind. To assess the role of avian malaria and related haemosporidian parasites in the global spread of a common invasive bird, we examined the prevalence and genetic diversity of haemosporidian parasites (order Haemosporida, genera Plasmodium and Haemoproteus) infecting house sparrows (Passer domesticus). We sampled house sparrows (N = 1820) from 58 locations on 6 continents. All the samples were tested using PCR-based methods; blood films from the PCR-positive birds were examined microscopically to identify parasite species. The results show that haemosporidian parasites in the house sparrows' native range are replaced by species from local host-generalist parasite fauna in the alien environments of North and South America. Furthermore, sparrows in colonized regions displayed a lower diversity and prevalence of parasite infections. Because the house sparrow lost its native parasites when colonizing the American continents, the release from these natural enemies may have facilitated its invasion in the last two centuries. Our findings therefore reject the Novel Weapon Hypothesis and are concordant with the Enemy Release Hypothesis.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57820,"Island invasion by a threatened tree species: evidence for natural enemy release of mahogany (Swietenia macrophylla) on Dominica, Lesser Antilles",S201082,R57822,hypothesis,L126806,Enemy release,"Despite its appeal to explain plant invasions, the enemy release hypothesis (ERH) remains largely unexplored for tropical forest trees. Even scarcer are ERH studies conducted on the same host species at both the community and biogeographical scale, irrespective of the system or plant life form. In Cabrits National Park, Dominica, we observed patterns consistent with enemy release of two introduced, congeneric mahogany species, Swietenia macrophylla and S. mahagoni, planted almost 50 years ago. Swietenia populations at Cabrits have reproduced, with S. macrophylla juveniles established in and out of plantation areas at densities much higher than observed in its native range. Swietenia macrophylla juveniles also experienced significantly lower leaf-level herbivory (∼3.0%) than nine co-occurring species native to Dominica (8.4–21.8%), and far lower than conspecific herbivory observed in its native range (11%–43%, on average). These complimentary findings at multiple scales support ERH, and confirm that Swietenia has naturalized at Cabrits. However, Swietenia abundance was positively correlated with native plant diversity at the seedling stage, and only marginally negatively correlated with native plant abundance for stems ≥1-cm dbh. Taken together, these descriptive patterns point to relaxed enemy pressure from specialized enemies, specifically the defoliator Steniscadia poliophaea and the shoot-borer Hypsipyla grandella, as a leading explanation for the enhanced recruitment of Swietenia trees documented at Cabrits.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57853,Invading from the garden? A comparison of leaf herbivory for exotic and native plants in natural and ornamental settings,S201527,R57855,hypothesis,L127185,Enemy release,"Abstract The enemies release hypothesis proposes that exotic species can become invasive by escaping from predators and parasites in their novel environment. Agrawal et al. (Enemy release? An experiment with congeneric plant pairs and diverse above‐ and below‐ground enemies. Ecology, 86, 2979–2989) proposed that areas or times in which damage to introduced species is low provide opportunities for the invasion of native habitat. We tested whether ornamental settings may provide areas with low levels of herbivory for trees and shrubs, potentially facilitating invasion success. First, we compared levels of leaf herbivory among native and exotic species in ornamental and natural settings in Cincinnati, Ohio, United States. In the second study, we compared levels of herbivory for invasive and noninvasive exotic species between natural and ornamental settings. We found lower levels of leaf damage for exotic species than for native species; however, we found no differences in the amount of leaf damage suffered in ornamental or natural settings. Our results do not provide any evidence that ornamental settings afford additional release from herbivory for exotic plant species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57860,Herbivory by an introduced Asian weevil negatively affects population growth of an invasive Brazilian shrub in Florida,S201617,R57862,hypothesis,L127261,Enemy release,"The enemy release hypothesis (ERH) is often cited to explain why some plants successfully invade natural communities while others do not. This hypothesis maintains that plant populations are regulated by coevolved enemies in their native range but are relieved of this pressure where their enemies have not been co-introduced. Some studies have shown that invasive plants sustain lower levels of herbivore damage when compared to native species, but how damage affects fitness and population dynamics remains unclear. We used a system of co-occurring native and invasive Eugenia congeners in south Florida (USA) to experimentally test the ERH, addressing deficiencies in our understanding of the role of natural enemies in plant invasion at the population level. Insecticide was used to experimentally exclude insect herbivores from invasive Eugenia uniflora and its native co-occurring congeners in the field for two years. Herbivore damage, plant growth, survival, and population growth rates for the three species were then compared for control and insecticide-treated plants. Our results contradict the ERH, indicating that E. uniflora sustains more herbivore damage than its native congeners and that this damage negatively impacts stem height, survival, and population growth. In addition, most damage to E. uniflora, a native of Brazil, is carried out by Myllocerus undatus, a recently introduced weevil from Sri Lanka, and M. undatus attacks a significantly greater proportion of E. uniflora leaves than those of its native congeners. This interaction is particularly interesting because M. undatus and E. uniflora share no coevolutionary history, having arisen on two separate continents and come into contact on a third. Our study is the first to document negative population-level effects for an invasive plant as a result of the introduction of a novel herbivore. Such inhibitory interactions are likely to become more prevalent as suites of previously noninteracting species continue to accumulate and new communities assemble worldwide.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57892,Does enemy loss cause release? A biogeographical comparison of parasitoid effects on an introduced insect,S202039,R57894,hypothesis,L127619,Enemy release,"The loss of natural enemies is a key feature of species introductions and is assumed to facilitate the increased success of species in new locales (enemy release hypothesis; ERH). The ERH is rarely tested experimentally, however, and is often assumed from observations of enemy loss. We provide a rigorous test of the link between enemy loss and enemy release by conducting observational surveys and an in situ parasitoid exclusion experiment in multiple locations in the native and introduced ranges of a gall-forming insect, Neuroterus saltatorius, which was introduced poleward, within North America. Observational surveys revealed that the gall-former experienced increased demographic success and lower parasitoid attack in the introduced range. Also, a different composition of parasitoids attacked the gall-former in the introduced range. These observational results show that enemies were lost and provide support for the ERH. Experimental results, however, revealed that, while some enemy release occurred, it was not the sole driver of demographic success. This was because background mortality in the absence of enemies was higher in the native range than in the introduced range, suggesting that factors other than parasitoids limit the species in its native range and contribute to its success in its introduced range. Our study demonstrates the importance of measuring the effect of enemies in the context of other community interactions in both ranges to understand what factors cause the increased demographic success of introduced species. This case also highlights that species can experience very different dynamics when introduced into ecologically similar communities.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57900,"The herbivorous arthropods associated with the invasive alien plant, Arundo donax, and the native analogous plant, Phragmites australis, in the Free State Province, South Africa",S202127,R57901,hypothesis,L127693,Enemy release,"The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014).",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57904,"Escape from parasitism by the invasive alien ladybird, Harmonia axyridis",S202197,R57906,hypothesis,L127753,Enemy release,"Alien species are often reported to perform better than functionally similar species native to the invaded range, resulting in high population densities, and a tendency to become invasive. The enemy release hypothesis (ERH) explains the success of invasive alien species (IAS) as a consequence of reduced mortality from natural enemies (predators, parasites and pathogens) compared with native species. The harlequin ladybird, Harmonia axyridis, a species alien to Britain, provides a model system for testing the ERH. Pupae of H. axyridis and the native ladybird Coccinella septempunctata were monitored for parasitism between 2008 and 2011, from populations across southern England in areas first invaded by H. axyridis between 2004 and 2009. In addition, a semi‐field experiment was established to investigate the incidence of parasitism of adult H. axyridis and C. septempunctata by Dinocampus coccinellae. Harmonia axyridis pupae were parasitised at a much lower rate than conspecifics in the native range, and both pupae and adults were parasitised at a considerably lower rate than C. septempunctata populations from the same place and time (H. axyridis: 1.67%; C. septempunctata: 18.02%) or in previous studies on Asian H. axyridis (2–7%). We found no evidence that the presence of H. axyridis affected the parasitism rate of C. septempunctata by D. coccinellae. Our results are consistent with the general prediction that the prevalence of natural enemies is lower for introduced species than for native species at early stages of invasion. This may partly explain why H. axyridis is such a successful IAS.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57907,Little evidence for release from herbivores as a driver of plant invasiveness from a multi-species herbivore-removal experiment,S202268,R57911,hypothesis,L127814,Enemy release,"Enemy release is frequently posed as a main driver of invasiveness of alien species. However, an experimental multi-species test examining performance and herbivory of invasive alien, non-invasive alien and native plant species in the presence and absence of natural enemies is lacking. In a common garden experiment in Switzerland, we manipulated exposure of seven alien invasive, eight alien non-invasive and fourteen native species from six taxonomic groups to natural enemies (invertebrate herbivores), by applying a pesticide treatment under two different nutrient levels. We assessed biomass production, herbivore damage and the major herbivore taxa on plants. Across all species, plants gained significantly greater biomass under pesticide treatment. However, invasive, non-invasive and native species did not differ in their biomass response to pesticide treatment at either nutrient level. The proportion of leaves damaged on invasive species was significantly lower compared to native species, but not when compared to non-invasive species. However, the difference was lost when plant size was accounted for. There were no differences between invasive, non-invasive and native species in herbivore abundance. Our study offers little support for invertebrate herbivore release as a driver of plant invasiveness, but suggests that future enemy release studies should account for differences in plant size among species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57912,Parasites and genetic diversity in an invasive bumblebee,S202294,R57913,hypothesis,L127836,Enemy release,"Biological invasions are facilitated by the global transportation of species and climate change. Given that invasions may cause ecological and economic damage and pose a major threat to biodiversity, understanding the mechanisms behind invasion success is essential. Both the release of non-native populations from natural enemies, such as parasites, and the genetic diversity of these populations may play key roles in their invasion success. We investigated the roles of parasite communities, through enemy release and parasite acquisition, and genetic diversity in the invasion success of the non-native bumblebee, Bombus hypnorum, in the United Kingdom. The invasive B. hypnorum had higher parasite prevalence than most, or all native congeners for two high-impact parasites, probably due to higher susceptibility and parasite acquisition. Consequently parasites had a higher impact on B. hypnorum queens’ survival and colony-founding success than on native species. Bombus hypnorum also had lower functional genetic diversity at the sex-determining locus than native species. Higher parasite prevalence and lower genetic diversity have not prevented the rapid invasion of the United Kingdom by B. hypnorum. These data may inform our understanding of similar invasions by commercial bumblebees around the world. This study suggests that concerns about parasite impacts on the small founding populations common to re-introduction and translocation programs may be less important than currently believed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57918,Determining the origin of invasions and demonstrating a lack of enemy release from microsporidian pathogens in common wasps (Vespula vulgaris),S202378,R57919,hypothesis,L127908,Enemy release,"Understanding the role of enemy release in biological invasions requires an assessment of the invader's home range, the number of invasion events and enemy prevalence. The common wasp (Vespula vulgaris) is a widespread invader. We sought to determine the Eurasian origin of this wasp and examined world‐wide populations for microsporidian pathogen infections to investigate enemy release.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57924,"Macroparasite Fauna of Alien Grey Squirrels (Sciurus carolinensis): Composition, Variability and Implications for Native Species",S202457,R57925,hypothesis,L127975,Enemy release,"Introduced hosts populations may benefit of an ""enemy release"" through impoverishment of parasite communities made of both few imported species and few acquired local ones. Moreover, closely related competing native hosts can be affected by acquiring introduced taxa (spillover) and by increased transmission risk of native parasites (spillback). We determined the macroparasite fauna of invasive grey squirrels (Sciurus carolinensis) in Italy to detect any diversity loss, introduction of novel parasites or acquisition of local ones, and analysed variation in parasite burdens to identify factors that may increase transmission risk for native red squirrels (S. vulgaris). Based on 277 grey squirrels sampled from 7 populations characterised by different time scales in introduction events, we identified 7 gastro-intestinal helminths and 4 parasite arthropods. Parasite richness is lower than in grey squirrel's native range and independent from introduction time lags. The most common parasites are Nearctic nematodes Strongyloides robustus (prevalence: 56.6%) and Trichostrongylus calcaratus (6.5%), red squirrel flea Ceratophyllus sciurorum (26.0%) and Holarctic sucking louse Neohaematopinus sciuri (17.7%). All other parasites are European or cosmopolitan species with prevalence below 5%. S. robustus abundance is positively affected by host density and body mass, C. sciurorum abundance increases with host density and varies with seasons. Overall, we show that grey squirrels in Italy may benefit of an enemy release, and both spillback and spillover processes towards native red squirrels may occur.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57926,Grassland fires may favor native over introduced plants by reducing pathogen loads,S202482,R57927,hypothesis,L127996,Enemy release,"Grasslands have been lost and degraded in the United States since Euro-American settlement due to agriculture, development, introduced invasive species, and changes in fire regimes. Fire is frequently used in prairie restoration to control invasion by trees and shrubs, but may have additional consequences. For example, fire might reduce damage by herbivore and pathogen enemies by eliminating litter, which harbors eggs and spores. Less obviously, fire might influence enemy loads differently for native and introduced plant hosts. We used a controlled burn in a Willamette Valley (Oregon) prairie to examine these questions. We expected that, without fire, introduced host plants should have less damage than native host plants because the introduced species are likely to have left many of their enemies behind when they were transported to their new range (the enemy release hypothesis, or ERH). If the ERH holds, then fire, which should temporarily reduce enemies on all species, should give an advantage to the natives because they should see greater total reduction in damage by enemies. Prior to the burn, we censused herbivore and pathogen attack on eight plant species (five of nonnative origin: Bromus hordaceous, Cynosuros echinatus, Galium divaricatum, Schedonorus arundinaceus (= Festuca arundinacea), and Sherardia arvensis; and three natives: Danthonia californica, Epilobium minutum, and Lomatium nudicale). The same plots were monitored for two years post-fire. Prior to the burn, native plants had more kinds of damage and more pathogen damage than introduced plants, consistent with the ERH. Fire reduced pathogen damage relative to the controls more for the native than the introduced species, but the effects on herbivory were negligible. Pathogen attack was correlated with plant reproductive fitness, whereas herbivory was not. These results suggest that fire may be useful for promoting some native plants in prairies due to its negative effects on their pathogens.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57943,Comparison of invertebrate herbivores on native and non-native Senecio species: Implications for the enemy release hypothesis,S202748,R57947,hypothesis,L128222,Enemy release,"The enemy release hypothesis posits that non-native plant species may gain a competitive advantage over their native counterparts because they are liberated from co-evolved natural enemies from their native area. The phylogenetic relationship between a non-native plant and the native community may be important for understanding the success of some non-native plants, because host switching by insect herbivores is more likely to occur between closely related species. We tested the enemy release hypothesis by comparing leaf damage and herbivorous insect assemblages on the invasive species Senecio madagascariensis Poir. to that on nine congeneric species, of which five are native to the study area, and four are non-native but considered non-invasive. Non-native species had less leaf damage than natives overall, but we found no significant differences in the abundance, richness and Shannon diversity of herbivores between native and non-native Senecio L. species. The herbivore assemblage and percentage abundance of herbivore guilds differed among all Senecio species, but patterns were not related to whether the species was native or not. Species-level differences indicate that S. madagascariensis may have a greater proportion of generalist insect damage (represented by phytophagous leaf chewers) than the other Senecio species. Within a plant genus, escape from natural enemies may not be a sufficient explanation for why some non-native species become more invasive than others.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57950,Phytophagous Insects on Native and Non-Native Host Plants: Combining the Community Approach and the Biogeographical Approach,S202845,R57954,hypothesis,L128305,Enemy release,"During the past centuries, humans have introduced many plant species in areas where they do not naturally occur. Some of these species establish populations and in some cases become invasive, causing economic and ecological damage. Which factors determine the success of non-native plants is still incompletely understood, but the absence of natural enemies in the invaded area (Enemy Release Hypothesis; ERH) is one of the most popular explanations. One of the predictions of the ERH, a reduced herbivore load on non-native plants compared with native ones, has been repeatedly tested. However, many studies have either used a community approach (sampling from native and non-native species in the same community) or a biogeographical approach (sampling from the same plant species in areas where it is native and where it is non-native). Either method can sometimes lead to inconclusive results. To resolve this, we here add to the small number of studies that combine both approaches. We do so in a single study of insect herbivory on 47 woody plant species (trees, shrubs, and vines) in the Netherlands and Japan. We find higher herbivore diversity, higher herbivore load and more herbivory on native plants than on non-native plants, generating support for the enemy release hypothesis.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57959,No release for the wicked: enemy release is dynamic and not associated with invasiveness,S202932,R57961,hypothesis,L128378,Enemy release,"The enemy release hypothesis predicts that invasive species will receive less damage from enemies, compared to co-occurring native and noninvasive exotic species in their introduced range. However, release operating early in invasion could be lost over time and with increased range size as introduced species acquire new enemies. We used three years of data, from 61 plant species planted into common gardens, to determine whether (1) invasive, noninvasive exotic, and native species experience differential damage from insect herbivores. and mammalian browsers, and (2) enemy release is lost with increased residence time and geographic spread in the introduced range. We find no evidence suggesting enemy release is a general mechanism contributing to invasiveness in this region. Invasive species received the most insect herbivory, and damage increased with longer residence times and larger range sizes at three spatial scales. Our results show that invasive and exotic species fail to escape enemies, particularly over longer temporal and larger spatial scales.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57988,Alien and native plant establishment in grassland communities is more strongly affected by disturbance than above- and below-ground enemies,S203295,R57989,hypothesis,L128685,Enemy release,"Understanding the factors that drive commonness and rarity of plant species and whether these factors differ for alien and native species are key questions in ecology. If a species is to become common in a community, incoming propagules must first be able to establish. The latter could be determined by competition with resident plants, the impacts of herbivores and soil biota, or a combination of these factors. We aimed to tease apart the roles that these factors play in determining establishment success in grassland communities of 10 alien and 10 native plant species that are either common or rare in Germany, and from four families. In a two‐year multisite field experiment, we assessed the establishment success of seeds and seedlings separately, under all factorial combinations of low vs. high disturbance (mowing vs mowing and tilling of the upper soil layer), suppression or not of pathogens (biocide application) and, for seedlings only, reduction or not of herbivores (net‐cages). Native species showed greater establishment success than alien species across all treatments, regardless of their commonness. Moreover, establishment success of all species was positively affected by disturbance. Aliens showed lower establishment success in undisturbed sites with biocide application. Release of the undisturbed resident community from pathogens by biocide application might explain this lower establishment success of aliens. These findings were consistent for establishment from either seeds or seedlings, although less significantly so for seedlings, suggesting a more important role of pathogens in very early stages of establishment after germination. Herbivore exclusion did play a limited role in seedling establishment success. Synthesis: In conclusion, we found that less disturbed grassland communities exhibited strong biotic resistance to establishment success of species, whether alien or native. However, we also found evidence that alien species may benefit weakly from soilborne enemy release, but that this advantage over native species is lost when the latter are also released by biocide application. Thus, disturbance was the major driver for plant species establishment success and effects of pathogens on alien plant establishment may only play a minor role.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57994,Can enemy release explain the invasion success of the diploid Leucanthemum vulgare in North America?,S203378,R57995,hypothesis,L128756,Enemy release,"Abstract Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum . We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum . Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) was comparable to that on L. ircutianum (26 and 71 %) but higher than that on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation was higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57996,A Comparison of Herbivore Damage on Three Invasive Plants and Their Native Congeners: Implications for the Enemy Release Hypothesis,S203401,R57997,hypothesis,L128775,Enemy release,"ABSTRACT One explanation for the success of exotic plants in their introduced habitats is that, upon arriving to a new continent, plants escaped their native herbivores or pathogens, resulting in less damage and lower abundance of enemies than closely related native species (enemy release hypothesis). We tested whether the three exotic plant species, Rubus phoenicolasius (wineberry), Fallopia japonica (Japanese knotweed), and Persicaria perfoliata (mile-a-minute weed), suffered less herbivory or pathogen attack than native species by comparing leaf damage and invertebrate herbivore abundance and diversity on the invasive species and their native congeners. Fallopia japonica and R. phoenicolasius received less leaf damage than their native congeners, and F. japonica also contained a lower diversity and abundance of invertebrate herbivores. If the observed decrease in damage experienced by these two plant species contributes to increased fitness, then escape from enemies may provide at least a partial explanation for their invasiveness. However, P. perfoliata actually received greater leaf damage than its native congener. Rhinoncomimus latipes, a weevil previously introduced in the United States as a biological control for P. perfoliata, accounted for the greatest abundance of insects collected from P. perfoliata. Therefore, it is likely that the biocontrol R. latipes was responsible for the greater damage on P. perfoliata, suggesting this insect may be effective at controlling P. perfoliata populations if its growth and reproduction is affected by the increased herbivore damage.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54064,Plastic Traits of an Exotic Grass Contribute to Its Abundance but Are Not Always Favourable,S165660,R54065,Species name,L100530,Eragrostis curvula,"In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C∶N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). In the control treatment (grazed only), trait values for SLA, leaf C∶N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment likely because leaf nutrient contents increased and subsequently its palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses. These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54172,Understanding the consequences of seed dispersal in a heterogeneous environment ,S166919,R54173,Species name,L101573,Erodium cicutarium,"Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54707,Do biodiversity and human impact influence the introduction or establishment of alien mammals?,S173281,R54708,Measure of invasion success,L106751,Establishment success,"What determines the number of alien species in a given region? ‘Native biodiversity’ and ‘human impact’ are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54707,Do biodiversity and human impact influence the introduction or establishment of alien mammals?,S195453,R57247,Measure of resistance/susceptibility,L122638,Establishment success,"What determines the number of alien species in a given region? ‘Native biodiversity’ and ‘human impact’ are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54984,Global patterns of introduction effort and establishment success in birds,S194347,R57152,Measure of resistance/susceptibility,L121722,Establishment success,"Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56092,Global assessment of establishment success for amphibian and reptile invaders,S182875,R56093,Measure of resistance/susceptibility,L112373,Establishment success,"Abstract Context. According to the tens rule, 10% of introduced species establish themselves. Aims. We tested this component of the tens rule for amphibians and reptiles globally, in Europe and North America, where data are presumably of good quality, and on islands versus continents. We also tested whether there was a taxonomic difference in establishment success between amphibians and reptiles. Methods. We examined data comprising 206 successful and 165 failed introduction records for 161 species of amphibians to 55 locations, and 560 successful and 641 failed introduction records for 469 species of reptiles to 116 locations around the world. Key results. Globally, establishment success was not different between amphibians (67%) and reptiles (62%). Both means were well above the 10% value predicted by the tens rule. In Europe and North America, establishment success was lower, although still higher than 10%. For reptiles, establishment success was higher on islands than on continents. Our results question the tens rule and do not show taxonomic differences in establishment success. Implications. Similar to studies on other taxa (birds and mammals), we found that establishment success was generally above 40%. This suggests that we should focus management on reducing the number of herptile species introduced because both reptiles and amphibians have a high likelihood of establishing. As data collection on invasions continue, testing establishment success in light of other factors, including propagule pressure, climate matching and taxonomic classifications, may provide additional insight into which species are most likely to establish in particular areas.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56098,Establishment success across convergent Mediterranean ecosystems: an analysis of bird introductions,S182922,R56099,Measure of resistance/susceptibility,L112414,Establishment success,"Abstract: Concern over the impact of invaders on biodiversity and on the functioning of ecosystems has generated a rising tide of comparative analyses aiming to unveil the factors that shape the success of introduced species across different regions. One limitation of these studies is that they often compare geographically rather than ecologically defined regions. We propose an approach that can help address this limitation: comparison of invasions across convergent ecosystems that share similar climates. We compared avian invasions in five convergent mediterranean climate systems around the globe. Based on a database of 180 introductions representing 121 avian species, we found that the proportion of bird species successfully established was high in all mediterranean systems (more than 40% for all five regions). Species differed in their likelihood to become established, although success was not higher for those originating from mediterranean systems than for those from nonmediterranean regions. Controlling for this taxonomic effect with generalized linear mixed models, species introduced into mediterranean islands did not show higher establishment success than those introduced to the mainland. Susceptibility to avian invaders, however, differed substantially among the different mediterranean regions. The probability that a species will become established was highest in the Mediterranean Basin and lowest in mediterranean Australia and the South African Cape. Our results suggest that many of the birds recently introduced into mediterranean systems, and especially into the Mediterranean Basin, have a high potential to establish self‐sustaining populations. This finding has important implications for conservation in these biologically diverse hotspots.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56106,Establishment success of introduced amphibians increases in the presence of congeneric species,S182987,R56107,Measure of resistance/susceptibility,L112471,Establishment success,"Darwin’s naturalization hypothesis predicts that the success of alien invaders will decrease with increasing taxonomic similarity to the native community. Alternatively, shared traits between aliens and the native assemblage may preadapt aliens to their novel surroundings, thereby facilitating establishment (the preadaptation hypothesis). Here we examine successful and failed introductions of amphibian species across the globe and find that the probability of successful establishment is higher when congeneric species are present at introduction locations and increases with increasing congener species richness. After accounting for positive effects of congeners, residence time, and propagule pressure, we also find that invader establishment success is higher on islands than on mainland areas and is higher in areas with abiotic conditions similar to the native range. These findings represent the first example in which the preadaptation hypothesis is supported in organisms other than plants and suggest that preadaptation has played a critical role in enabling introduced species to succeed in novel environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54128,Functional differences in response to drought in the invasive Taraxacum officinale from native and introduced alpine habitat ranges,S166403,R54129,Specific traits,L101145,Fitness and physiological responses to drought,"Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57048,Predicting the number of ecologically harmful exotic species in an aquatic system,S193001,R57050,Habitat,L120793,Freshwater and marine,"Most introduced species apparently have little impact on native biodiversity, but the proliferation of human vectors that transport species worldwide increases the probability of a region being affected by high‐impact invaders – i.e. those that cause severe declines in native species populations. Our study determined whether the number of high‐impact invaders can be predicted from the total number of invaders in an area, after controlling for species–area effects. These two variables are positively correlated in a set of 16 invaded freshwater and marine systems from around the world. The relationship is a simple linear function; there is no evidence of synergistic or antagonistic effects of invaders across systems. A similar relationship is found for introduced freshwater fishes across 149 regions. In both data sets, high‐impact invaders comprise approximately 10% of the total number of invaders. Although the mechanism driving this correlation is likely a sampling effect, it is not simply the proportional sampling of a constant number of repeat‐offenders; in most cases, an invader is not reported to have strong impacts on native species in the majority of regions it invades. These findings link vector activity and the negative impacts of introduced species on biodiversity, and thus justify management efforts to reduce invasion rates even where numerous invasions have already occurred.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52073,Using ecological restoration to constrain biological invasion,S158703,R52074,Measure of species similarity,L95595,Functional groups,"Summary 1 Biological invasion can permanently alter ecosystem structure and function. Invasive species are difficult to eradicate, so methods for constraining invasions would be ecologically valuable. We examined the potential of ecological restoration to constrain invasion of an old field by Agropyron cristatum, an introduced C3 grass. 2 A field experiment was conducted in the northern Great Plains of North America. One-hundred and forty restored plots were planted in 1994–96 with a mixture of C3 and C4 native grass seed, while 100 unrestored plots were not. Vegetation on the plots was measured periodically between 1994 and 2002. 3 Agropyron cristatum invaded the old field between 1994 and 2002, occurring in 5% of plots in 1994 and 66% of plots in 2002, and increasing in mean cover from 0·2% in 1994 to 17·1% in 2002. However, A. cristatum invaded one-third fewer restored than unrestored plots between 1997 and 2002, suggesting that restoration constrained invasion. Further, A. cristatum cover in restored plots decreased with increasing planted grass cover. Stepwise regression indicated that A. cristatum cover was more strongly correlated with planted grass cover than with distance from the A. cristatum source, species richness, percentage bare ground or percentage litter. 4 The strength of the negative relationship between A. cristatum and planted native grasses varied among functional groups: the correlation was stronger with species with phenology and physiology similar to A. cristatum (i.e. C3 grasses) than with dissimilar species (C4 grasses). 5 Richness and cover of naturally establishing native species decreased with increasing A. cristatum cover. In contrast, restoration had little effect on the establishment and colonization of naturally establishing native species. Thus, A. cristatum hindered colonization by native species while planted native grasses did not. 6 Synthesis and applications. To our knowledge, this study provides the first indication that restoration can act as a filter, constraining invasive species while allowing colonization by native species. These results suggest that resistance to invasion depends on the identity of species in the community and that restoration seed mixes might be tailored to constrain selected invaders. Restoring areas before invasive species become established can reduce the magnitude of biological invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52109,Establishment and Management of Native Functional Groups in Restoration,S159056,R52111,Measure of species similarity,L95894,Functional groups,"The limiting similarity hypothesis predicts that communities should be more resistant to invasion by non‐natives when they include natives with a diversity of traits from more than one functional group. In restoration, planting natives with a diversity of traits may result in competition between natives of different functional groups and may influence the efficacy of different seeding and maintenance methods, potentially impacting native establishment. We compare initial establishment and first‐year performance of natives and the effectiveness of maintenance techniques in uniform versus mixed functional group plantings. We seeded ruderal herbaceous natives, longer‐lived shrubby natives, or a mixture of the two functional groups using drill‐ and hand‐seeding methods. Non‐natives were left undisturbed, removed by hand‐weeding and mowing, or treated with herbicide to test maintenance methods in a factorial design. Native functional groups had highest establishment, growth, and reproduction when planted alone, and hand‐seeding resulted in more natives as well as more of the most common invasive, Brassica nigra. Wick herbicide removed more non‐natives and resulted in greater reproduction of natives, while hand‐weeding and mowing increased native density. Our results point to the importance of considering competition among native functional groups as well as between natives and invasives in restoration. Interactions among functional groups, seeding methods, and maintenance techniques indicate restoration will be easier to implement when natives with different traits are planted separately.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52120,Plant functional group diversity as a mechanism for invasion resistance,S159156,R52121,Measure of species similarity,L95979,Functional groups,"A commonly cited mechanism for invasion resistance is more complete resource use by diverse plant assemblages with maximum niche complementarity. We investigated the invasion resistance of several plant functional groups against the nonindigenous forb Spotted knapweed (Centaurea maculosa). The study consisted of a factorial combination of seven functional group removals (groups singularly or in combination) and two C. maculosa treatments (addition vs. no addition) applied in a randomized complete block design replicated four times at each of two sites. We quantified aboveground plant material nutrient concentration and uptake (concentration × biomass) by indigenous functional groups: grasses, shallow‐rooted forbs, deep‐rooted forbs, spikemoss, and the nonindigenous invader C. maculosa. In 2001, C. maculosa density depended upon which functional groups were removed. The highest C. maculosa densities occurred where all vegetation or all forbs were removed. Centaurea maculosa densities were the lowest in plots where nothing, shallow‐rooted forbs, deep‐rooted forbs, grasses, or spikemoss were removed. Functional group biomass was also collected and analyzed for nitrogen, phosphorus, potassium, and sulphur. Based on covariate analyses, postremoval indigenous plot biomass did not relate to invasion by C. maculosa. Analysis of variance indicated that C. maculosa tissue nutrient percentage and net nutrient uptake were most similar to indigenous forb functional groups. Our study suggests that establishing and maintaining a diversity of plant functional groups within the plant community enhances resistance to invasion. Indigenous plants of functionally similar groups as an invader may be particularly important in invasion resistance.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52124,Resistance of Native Plant Functional Groups to Invasion by Medusahead (Taeniatherum caput-medusae),S159193,R52125,Measure of species similarity,L96010,Functional groups,"AbstractUnderstanding the relative importance of various functional groups in minimizing invasion by medusahead is central to increasing the resistance of native plant communities. The objective of this study was to determine the relative importance of key functional groups within an intact Wyoming big sagebrush–bluebunch wheatgrass community type on minimizing medusahead invasion. Treatments consisted of removal of seven functional groups at each of two sites, one with shrubs and one without shrubs. Removal treatments included (1) everything, (2) shrubs, (3) perennial grasses, (4) taprooted forbs, (5) rhizomatous forbs, (6) annual forbs, and (7) mosses. A control where nothing was removed was also established. Plots were arranged in a randomized complete block with 4 replications (blocks) at each site. Functional groups were removed beginning in the spring of 2004 and maintained monthly throughout each growing season through 2009. Medusahead was seeded at a rate of 2,000 seeds m−2 (186 seeds ft−2) in fall 2005. Removing perennial grasses nearly doubled medusahead density and biomass compared with any other removal treatment. The second highest density and biomass of medusahead occurred from removing rhizomatous forbs (phlox). We found perennial grasses played a relatively more significant role than other species in minimizing invasion by medusahead. We suggest that the most effective basis for establishing medusahead-resistant plant communities is to establish 2 or 3 highly productive grasses that are complementary in niche and that overlap that of the invading species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52129,A test of the effects of functional group richness and composition on grassland invasibility,S159238,R52130,Measure of species similarity,L96048,Functional groups,"Although many theoretical and observational studies suggest that diverse systems are more resistant to invasion by novel species than are less diverse systems, experimental data are uncommon. In this experiment, I manipulated the functional group richness and composition of a grassland community to test two related hypotheses: (1) Diversity and invasion resistance are positively related through diversity's effects on the resources necessary for invading plants' growth. (2) Plant communities resist invasion by species in functional groups already present in the community. To test these hypotheses, I removed plant functional groups (forbs, C3 graminoids, and C4 graminoids) from existing grassland vegetation to create communities that contained all possible combinations of one, two, or three functional groups. After three years of growth, I added seeds of 16 different native prairie species (legumes, nonleguminous forbs, C3 graminoids, and C4 graminoids) to a1 3 1 m portion of each 4 3 8 m plot. Overall invasion success was negatively related to resident functional group richness, but there was only weak evidence that resident species repelled functionally similar invaders. A weak effect of functional group richness on some resources did not explain the significant diversity-invasibility relationship. Other factors, particularly the different responses of resident functional groups to the initial disturbance of the experimental manipulation, seem to have been more important to community in- vasibility.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52131,Experimental invasion by legumes reveals non-random assembly rules in grassland communities,S159260,R52132,Measure of species similarity,L96067,Functional groups,"1 Although experimental studies usually reveal that resistance to invasion increases with species diversity, observational studies sometimes show the opposite trend. The higher resistance of diverse plots to invasion may be partly due to the increased probability of a plot containing a species with similar resource requirements to the invader. 2 We conducted a study of the invasibility of monocultures belonging to three different functional groups by seven sown species of legume. By only using experimentally established monocultures, rather than manipulating the abundance of particular functional groups, we removed both species diversity and differences in underlying abiotic conditions as potentially confounding variables. 3 We found that legume monocultures were more resistant than monocultures of grasses or non‐leguminous forbs to invasion by sown legumes but not to invasion by other unsown species. The functional group effect remained after controlling for differences in total biomass and the average height of the above‐ground biomass. 4 The relative success of legume species and types also varied with monoculture characteristics. The proportional biomass of climbing legumes increased strongly with biomass height in non‐leguminous forb monocultures, while it declined with biomass height in grass monocultures. Trifolium pratense was the most successful invader in grass monocultures, while Vicia cracca was the most successful in non‐leguminous forb monocultures. 5 Our results suggest that non‐random assembly rules operate in grassland communities both between and within functional groups. Legume invaders found it much more difficult to invade legume plots, while grass and non‐leguminous forb plots favoured non‐climbing and climbing legumes, respectively. If plots mimic monospecific patches, the effect of these assembly rules in diverse communities might depend upon the patch structure of diverse communities. This dependency on patch structure may contribute to differences in results of research from experimental vs. natural communities.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52135,Testing Fox's assembly rule: does plant invasion depend on recipient community structure?,S159305,R52137,Measure of species similarity,L96105,Functional groups,"Fox's assembly rule, that relative dearth of certain functional groups in a community will facilitate invasion of that particular functional group, serves as the basis for investigation into the functional group effects of invasion resistance. We explored resistance to plant invaders by eliminating or decreasing the number of understory plant species in particular functional groups from plots at a riparian site in southwestern Virginia, USA. Our functional groups comprise combinations of aboveground biomass and rooting structure type. Manipulated plots were planted with 10 randomly chosen species from widespread native and introduced plants commonly found throughout the floodplains of Big Stony Creek. We assessed success of an invasion by plant survivorship and growth. We analyzed survivorship of functional groups with loglinear models for the analysis of categorical data in a 4-way table. There was a significant interaction between functional groups removed in a plot and survivorship in the functional groups added to that plot. However, survivorship of species in functional groups introduced into plots with their respective functional group removed did not differ from survivorship when any other functional group was removed. Additionally, growth of each of the most abundant species did not differ significantly among plots with different functional groups manipulated. Specifically, species did not fare better in those plots that had representatives of their own functional group removed. Fox's assembly rule does not hold for these functional groups in this plant community; however, composition of the recipient community is a significant factor in community assembly.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52138,The role of diversity and functional traits of species in community invasibility,S159327,R52139,Measure of species similarity,L96124,Functional groups,"The invasion of exotic species into assemblages of native plants is a pervasive and widespread phenomenon. Many theoretical and observational studies suggest that diverse communities are more resistant to invasion by exotic species than less diverse ones. However, experimental results do not always support such a relationship. Therefore, the hypothesis of diversity-community invasibility is still a focus of controversy in the field of invasion ecology. In this study, we established and manipulated communities with different species diversity and different species functional groups (16 species belong to C3, C4, forbs and legumes, respectively) to test Elton's hypothesis and other relevant hypotheses by studying the process of invasion. Alligator weed (Alternanthera philoxeroides) was chosen as the invader. We found that the correlation between the decrement of extractable soil nitrogen and biomass of alligator weed was not significant, and that species diversity, independent of functional groups diversity, did not show a significant correlation with invasibility. However, the communities with higher functional groups diversity significantly reduced the biomass of alligator weed by decreasing its resource opportunity. Functional traits of species also influenced the success of the invasion. Alternanthera sessilis, in the same morphological and functional group as alligator weed, was significantly resistant to alligator weed invasion. Because community invasibility is influenced by many factors and interactions among them, the pattern and mechanisms of community invasibility are likely to be far subtler than we found in this study. More careful manipulated experiments coupled with theoretical modeling studies are essential steps to a more profound understanding of community invasibility.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52077,Plant functional group identity and diversity determine biotic resistance to invasion by an exotic grass,S158741,R52078,Measure of species similarity,L95627,Functional traits,"Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion‐resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effect would be redundant within functional group (ii) mixtures of species would be more invasion resistant than monocultures. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity–interaction models. In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast‐growing annuals, suggesting priority effect. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity–diversity effect among functional groups. In diversity–interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche pre‐emption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52090,Variation in resource acquisition and utilization traits between native and invasive perennial forbs,S158867,R52091,Measure of species similarity,L95734,Functional traits,"Understanding the functional traits that allow invasives to outperform natives is a necessary first step in improving our ability to predict and manage the spread of invaders. In nutrient-limited systems, plant competitive ability is expected to be closely tied to the ability of a plant to exploit nutrient-rich microsites and use these captured nutrients efficiently. The broad objective of this work was to compare the ability of native and invasive perennial forbs to acquire and use nutrients from nutrient-rich microsites. We evaluated morphological and physiological responses among four native and four invasive species exposed to heterogeneous (patch) or homogeneous (control) nutrient distribution. Invasives, on average, allocated more biomass to roots and allocated proportionately more root length to nutrient-rich microsites than did natives. Invasives also had higher leaf N, photosynthetic rates, and photosynthetic nitrogen use efficiency than natives, regardless of treatment. While these results suggest multiple traits may contribute to the success of invasive forbs in low-nutrient environments, we also observed large variation in these traits among native forbs. These observations support the idea that functional trait variation in the plant community may be a better predictor of invasion resistance than the functional group composition of the plant community.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52102,Functional differences between alien and native species: do biotic interactions determine the functional structure of highly invaded grasslands?,S158980,R52103,Measure of species similarity,L95829,Functional traits,"Summary 1. Although observed functional differences between alien and native plant species support the idea that invasions are favoured by niche differentiation (ND), when considering invasions along large ecological gradients, habitat filtering (HF) has been proposed to constrain alien species such that they exhibit similar trait values to natives. 2. To reconcile these contrasting observations, we used a multiscale approach using plant functional traits to evaluate how biotic interactions with native species and grazing might determine the functional structure of highly invaded grasslands along an elevation gradient in New Zealand. 3. At a regional scale, functional differences between alien and native plant species translated into nonrandom community assembly and high ND. Alien and native species showed contrasting responses to elevation and the degree of ND between them decreased as elevation increased, suggesting a role for HF. At the plant-neighbourhood scale, species with contrasting traits were generally spatially segregated, highlighting the impact of biotic interactions in structuring local plant communities. A confirmatory multilevel path analysis showed that the effect of elevation and grazing was moderated by the presence of native species, which in turn influenced the local abundance of alien species. 4. Our study showed that functional differences between aliens and natives are fundamental to understand the interplay between multiple mechanisms driving alien species success and their coexistence with natives. In particular, the success of alien species is driven by the presence of native species which can have a negative (biotic resistance) or a positive (facilitation) effect depending on the functional identity of alien species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54194,Major morphological changes in a Lake Victoria cichlid fish within two decades,S167179,R54195,Species name,L101789,Haplochromis (Yssichromis) pyrrhocephalus,"During the upsurge of the introduced predatory Nile perch in Lake Victoria in the 1980s, the zooplanktivorous Haplochromis (Yssichromis) pyrrhocephalus nearly vanished. The species recovered coincident with the intense fishing of Nile perch in the 1990s, when water clarity and dissolved oxygen levels had decreased dramatically due to increased eutrophication. In response to the hypoxic conditions, total gill surface in resurgent H. pyrrhocephalus increased by 64%. Remarkably, head length, eye length, and head volume decreased in size, whereas cheek depth increased. Reductions in eye size and depth of the rostral part of the musculus sternohyoideus, and reallocation of space between the opercular and suspensorial compartments of the head may have permitted accommodation of larger gills in a smaller head. By contrast, the musculus levator posterior, located dorsal to the gills, increased in depth. This probably reflects an adaptive response to the larger and tougher prey types in the diet of resurgent H. pyrrhocephalus. These striking morphological changes over a time span of only two decades could be the combined result of phenotypic plasticity and genetic change and may have fostered recovery of this species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54190,Phenotypic variability in Holcus lanatus L. in southern Chile: a strategy that enhances plant survival and pasture stability,S167131,R54191,Species name,L101749,Holcus lanatus,"Holcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54210,Contrasting plant physiological adaptation to climate in the native and introduced range of Hypericum perforatum,S167369,R54211,Species name,L101947,Hypericum perforatum,"How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf δ13C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf δ13C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf δ13C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56078,Global patterns in threats to vertebrates by biological invasions,S182754,R56079,Measure of resistance/susceptibility,L112266,IAS-threatened species,"Biological invasions as drivers of biodiversity loss have recently been challenged. Fundamentally, we must know where species that are threatened by invasive alien species (IAS) live, and the degree to which they are threatened. We report the first study linking 1372 vertebrates threatened by more than 200 IAS from the completely revised Global Invasive Species Database. New maps of the vulnerability of threatened vertebrates to IAS permit assessments of whether IAS have a major influence on biodiversity, and if so, which taxonomic groups are threatened and where they are threatened. We found that centres of IAS-threatened vertebrates are concentrated in the Americas, India, Indonesia, Australia and New Zealand. The areas in which IAS-threatened species are located do not fully match the current hotspots of invasions, or the current hotspots of threatened species. The relative importance of biological invasions as drivers of biodiversity loss clearly varies across regions and taxa, and changes over time, with mammals from India, Indonesia, Australia and Europe are increasingly being threatened by IAS. The chytrid fungus primarily threatens amphibians, whereas invasive mammals primarily threaten other vertebrates. The differences in IAS threats between regions and taxa can help efficiently target IAS, which is essential for achieving the Strategic Plan 2020 of the Convention on Biological Diversity.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54014,"Native jewelweed, but not other native species, displays post-invasion trait divergence",S165073,R54015,Species name,L100043,Impatiens parviflora,"Invasive exotic plants reduce the diversity of native communities by displacing native species. According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora). We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56545,Ecological traits of the amphipod invader Dikerogammarus villosus on a mesohabitat scale,S187088,R56546,hypothesis,L116092,Invasional meltdown,"Since 1995, Dikerogammarus villosus Sowinski, a Ponto-Caspian amphipod species, has been invading most of Western Europe's hydrosystems. D. villosus geographic extension and quickly increasing population density has enabled it to become a major component of macrobenthic assemblages in recipient ecosystems. The ecological characteristics of D. villosus on a mesohabitat scale were investigated at a station in the Moselle River. This amphipod is able to colonize a wide range of substratum types, thus posing a threat to all freshwater ecosystems. Rivers whose dominant substratum is cobbles and which have tree roots along the banks could harbour particularly high densities of D. villosus. A relationship exists between substratum particle size and the length of the individuals, and spatial segregation according to length was shown. This allows the species to limit intra-specific competition between generations while facilitating reproduction. A strong association exists between D. villosus and other Ponto-Caspian species, such as Dreissena polymorpha and Corophium curvispinum, in keeping with Invasional Meltdown Theory. Four taxa (Coenagrionidae, Calopteryx splendens, Corophium curvispinum and Gammarus pulex) exhibited spatial niches that overlap significantly that of D. villosus. According to the predatory behaviour of the newcomer, their populations may be severely impacted.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56547,Invasional 'meltdown' on an oceanic island,S187112,R56548,hypothesis,L116112,Invasional meltdown,"Islands can serve as model systems for understanding how biological invasions affect community structure and ecosystem function. Here we show invasion by the alien crazy ant Anoplolepis gracilipes causes a rapid, catastrophic shift in the rain forest ecosystem of a tropical oceanic island, affecting at least three trophic levels. In invaded areas, crazy ants extirpate the red land crab, the dominant endemic consumer on the forest floor. In doing so, crazy ants indirectly release seedling recruitment, enhance species richness of seedlings, and slow litter breakdown. In the forest canopy, new associations between this invasive ant and honeydew-secreting scale insects accelerate and diversify impacts. Sustained high densities of foraging ants on canopy trees result in high population densities of host-generalist scale insects and growth of sooty moulds, leading to canopy dieback and even deaths of canopy trees. The indirect fallout from the displacement of a native keystone species by an ant invader, itself abetted by introduced/cryptogenic mutualists, produces synergism in impacts to precipitate invasional meltdown in this system.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56563,Positive interactions between nonindigenous species facilitate transport by human vectors,S187294,R56564,hypothesis,L116262,Invasional meltdown,"Numerous studies have shown how interactions between nonindigenous species (NIS) can accelerate the rate at which they establish and spread in invaded habitats, leading to an ""invasional meltdown."" We investigated facilitation at an earlier stage in the invasion process: during entrainment of propagules in a transport pathway. The introduced bryozoan Watersipora subtorquata is tolerant of several antifouling biocides and a common component of hull-fouling assemblages, a major transport pathway for aquatic NIS. We predicted that colonies of W. subtorquata act as nontoxic refugia for other, less tolerant species to settle on. We compared rates of recruitment of W. subtorquata and other fouling organisms to surfaces coated with three antifouling paints and a nontoxic primer in coastal marinas in Queensland, Australia. Diversity and abundance of fouling taxa were compared between bryozoan colonies and adjacent toxic or nontoxic paint surfaces. After 16 weeks immersion, W. subtorquata covered up to 64% of the tile surfaces coated in antifouling paint. Twenty-two taxa occurred exclusively on W. subtorquata and were not found on toxic surfaces. Other fouling taxa present on toxic surfaces were up to 248 times more abundant on W. subtorquata. Because biocides leach from the paint surface, we expected a positive relationship between the size of W. subtorquata colonies and the abundance and diversity of epibionts. To test this, we compared recruitment of fouling organisms to mimic W. subtorquata colonies of three different sizes that had the same total surface area. Secondary recruitment to mimic colonies was greater when the surrounding paint surface contained biocides. Contrary to our predictions, epibionts were most abundant on small mimic colonies with a large total perimeter. This pattern was observed in encrusting and erect bryozoans, tubiculous amphipods, and serpulid and sabellid polychaetes, but only in the presence of toxic paint. Our results show that W. subtorquata acts as a foundation species for fouling assemblages on ship hulls and facilitates the transport of other species at greater abundance and frequency than would otherwise be possible. Invasion success may be increased by positive interactions between NIS that enhance the delivery of propagules by human transport vectors.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56569,Recent biological invasion may hasten invasional meltdown by accelerating historical introductions,S187365,R56570,hypothesis,L116321,Invasional meltdown,"Biological invasions are rapidly producing planet-wide changes in biodiversity and ecosystem function. In coastal waters of the U.S., >500 invaders have become established, and new introductions continue at an increasing rate. Although most species have little impact on native communities, some initially benign introductions may occasionally turn into damaging invasions, although such introductions are rarely documented. Here, I demonstrate that a recently introduced crab has resulted in the rapid spread and increase of an introduced bivalve that had been rare in the system for nearly 50 yr. This increase has occurred through the positive indirect effects of predation by the introduced crab on native bivalves. I used field and laboratory experiments to show that the mechanism is size-specific predation interacting with the different reproductive life histories of the native (protandrous hermaphrodite) and the introduced (dioecious) bivalves. These results suggest that positive interactions among the hundreds of introduced species that are accumulating in coastal systems could result in the rapid transformation of previously benign introductions into aggressively expanding invasions. Even if future management efforts reduce the number of new introductions, given the large number of species already present, there is a high potential for positive interactions to produce many future management problems. Given that invasional meltdown is now being documented in natural systems, I suggest that coastal systems may be closer to this threshold than currently believed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56577,Functional diversity of mammalian predators and extinction in island birds,S187456,R56578,hypothesis,L116396,Invasional meltdown,"The probability of a bird species going extinct on oceanic islands in the period since European colonization is predicted by the number of introduced predatory mammal species, but the exact mechanism driving this relationship is unknown. One possibility is that larger exotic predator communities include a wider array of predator functional types. These predator communities may target native bird species with a wider range of behavioral or life history characteristics. We explored the hypothesis that the functional diversity of the exotic predators drives bird species extinctions. We also tested how different combinations of functionally important traits of the predators explain variation in extinction probability. Our results suggest a unique impact of each introduced mammal species on native bird populations, as opposed to a situation where predators exhibit functional redundancy. Further, the impact of each additional predator may be facilitated by those already present, suggesting the possibility of “invasional meltdown.”",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56589,A null model of temporal trends in biological invasion records,S187594,R56590,hypothesis,L116510,Invasional meltdown,"Biological invasions are a growing aspect of global biodiversity change. In many regions, introduced species richness increases supralinearly over time. This does not, however, necessarily indicate increasing introduction rates or invasion success. We develop a simple null model to identify the expected trend in invasion records over time. For constant introduction rates and success, the expected trend is exponentially increasing. Model extensions with varying introduction rate and success can also generate exponential distributions. We then analyse temporal trends in aquatic, marine and terrestrial invasion records. Most data sets support an exponential distribution (15/16) and the null invasion model (12/16). Thus, our model shows that no change in introduction rate or success need be invoked to explain the majority of observed trends. Further, an exponential trend does not necessarily indicate increasing invasion success or 'invasional meltdown', and a saturating trend does not necessarily indicate decreasing success or biotic resistance.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56614,Inhibition between invasives: a newly introduced predator moderates the impacts of a previously established invasive predator,S187884,R56615,hypothesis,L116750,Invasional meltdown,"1. With continued globalization, species are being transported and introduced into novel habitats at an accelerating rate. Interactions between invasive species may provide important mechanisms that moderate their impacts on native species. 2. The European green crab Carcinus maenas is an aggressive predator that was introduced to the east coast of North America in the mid-1800s and is capable of rapid consumption of bivalve prey. A newer invasive predator, the Asian shore crab Hemigrapsus sanguineus, was first discovered on the Atlantic coast in the 1980s, and now inhabits many of the same regions as C. maenas within the Gulf of Maine. Using a series of field and laboratory investigations, we examined the consequences of interactions between these predators. 3. Density patterns of these two species at different spatial scales are consistent with negative interactions. As a result of these interactions, C. maenas alters its diet to consume fewer mussels, its preferred prey, in the presence of H. sanguineus. Decreased mussel consumption in turn leads to lower growth rates for C. maenas, with potential detrimental effects on C. maenas populations. 4. Rather than an invasional meltdown, this study demonstrates that, within the Gulf of Maine, this new invasive predator can moderate the impacts of the older invasive predator.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56626,"Enemy release or invasional meltdown? Deer preference for exotic and native trees on Isla Victoria, Argentina",S188023,R56627,hypothesis,L116865,Invasional meltdown,"How interactions between exotic species affect invasion impact is a fundamental issue on both theoretical and applied grounds. Exotics can facilitate establishment and invasion of other exotics (invasional meltdown) or they can restrict them by re-establishing natural population control (as predicted by the enemy release hypothesis). We studied forest invasion on an Argentinean island where 43 species of Pinaceae, including 60% of the world's recorded invasive Pinaceae, were introduced c. 1920 but where few species are colonizing pristine areas. In this area two species of Palearctic deer, natural enemies of most Pinaceae, were introduced 80 years ago. Expecting deer to help to control the exotics, we conducted a cafeteria experiment to assess deer preferences among the two dominant native species (a conifer, Austrocedrus chilensis, and a broadleaf, Nothofagus dombeyi) and two widely introduced exotic tree species (Pseudotsuga menziesii and Pinus ponderosa). Deer browsed much more intensively on native species than on exotic conifers, in terms of number of individuals attacked and degree of browsing. Deer preference for natives could potentially facilitate invasion by exotic pines. However, we hypothesize that the low rates of invasion currently observed can result at least partly from high densities of exotic deer, which, despite their preference for natives, can prevent establishment of both native and exotic trees. Other factors, not mutually exclusive, could produce the observed pattern. Our results underscore the difficulty of predicting how one introduced species will affect impact of another one.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56630,Exploitative competition between invasive herbivores benefits a native host plant,S188069,R56631,hypothesis,L116903,Invasional meltdown,"Although biological invasions are of considerable concern to ecologists, relatively little attention has been paid to the potential for and consequences of indirect interactions between invasive species. Such interactions are generally thought to enhance invasives' spread and impact (i.e., the ""invasional meltdown"" hypothesis); however, exotic species might also act indirectly to slow the spread or blunt the impact of other invasives. On the east coast of the United States, the invasive hemlock woolly adelgid (Adelges tsugae, HWA) and elongate hemlock scale (Fiorinia externa, EHS) both feed on eastern hemlock (Tsuga canadensis). Of the two insects, HWA is considered far more damaging and disproportionately responsible for hemlock mortality. We describe research assessing the interaction between HWA and EHS, and the consequences of this interaction for eastern hemlock. We conducted an experiment in which uninfested hemlock branches were experimentally infested with herbivores in a 2 x 2 factorial design (either, both, or neither herbivore species). Over the 2.5-year course of the experiment, each herbivore's density was approximately 30% lower in mixed- vs. single-species treatments. Intriguingly, however, interspecific competition weakened rather than enhanced plant damage: growth was lower in the HWA-only treatment than in the HWA + EHS, EHS-only, or control treatments. Our results suggest that, for HWA-infested hemlocks, the benefit of co-occurring EHS infestations (reduced HWA density) may outweigh the cost (increased resource depletion).",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56638,Positive interactions among plant species for pollinator service: assessing the 'magnet species' concept with invasive species,S188160,R56639,hypothesis,L116978,Invasional meltdown,"Plants with poorly attractive flowers or with little floral rewards may have inadequate pollinator service, which in turn reduces seed output. However, pollinator service of less attractive species could be enhanced when they are associated with species with highly attractive flowers (so-called ‘magnet-species’). Although several studies have reported the magnet species effect, few of them have evaluated whether this positive interaction results in an enhancement of the seed output for the beneficiary species. Here, we compared pollinator visitation rates and seed output of the invasive annual species Carduus pycnocephalus when grown associated with shrubs of the invasive Lupinus arboreus and when grown alone, and hypothesized that L. arboreus acts as a magnet species for C. pycnocephalus. Results showed that C. pycnocephalus individuals associated with L. arboreus had higher pollinator visitation rates and higher seed output than individuals growing alone. The higher visitation rates of C. pycnocephalus associated to L. arboreus were maintained after accounting for flower density, which consistently supports our hypothesis on the magnet species effect of L. arboreus. Given that both species are invasives, the facilitated pollination and reproduction of C. pycnocephalus by L. arboreus could promote its naturalization in the community, suggesting a synergistic invasional process contributing to an ‘invasional meltdown’. The magnet effect of Lupinus on Carduus found in this study seems to be one of the first examples of indirect facilitative interactions via increased pollination among invasive species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56706,Non-native ecosystem engineer alters estuarine communities,S188928,R56707,hypothesis,L117610,Invasional meltdown,"Many ecosystems are created by the presence of ecosystem engineers that play an important role in determining species' abundance and species composition. Additionally, a mosaic environment of engineered and non-engineered habitats has been shown to increase biodiversity. Non-native ecosystem engineers can be introduced into environments that do not contain or have lost species that form biogenic habitat, resulting in dramatic impacts upon native communities. Yet, little is known about how non-native ecosystem engineers interact with natives and other non-natives already present in the environment, specifically whether non-native ecosystem engineers facilitate other non-natives, and whether they increase habitat heterogeneity and alter the diversity, abundance, and distribution of benthic species. Through sampling and experimental removal of reefs, we examine the effects of a non-native reef-building tubeworm, Ficopomatus enigmaticus, on community composition in the central Californian estuary, Elkhorn Slough. Tubeworm reefs host significantly greater abundances of many non-native polychaetes and amphipods, particularly the amphipods Monocorophium insidiosum and Melita nitida, compared to nearby mudflats. Infaunal assemblages under F. enigmaticus reefs and around reef's edges show very low abundance and taxonomic diversity. Once reefs are removed, the newly exposed mudflat is colonized by opportunistic non-native species, such as M. insidiosum and the polychaete Streblospio benedicti, making removal of reefs a questionable strategy for control. These results show that provision of habitat by a non-native ecosystem engineer may be a mechanism for invasional meltdown in Elkhorn Slough, and that reefs increase spatial heterogeneity in the abundance and composition of benthic communities.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56736,Long-term impacts of invasive grasses and subsequent fire in seasonally dry Hawaiian woodlands,S189260,R56737,hypothesis,L117882,Invasional meltdown,"Invasive nonnative grasses have altered the composition of seasonally dry shrublands and woodlands throughout the world. In many areas they coexist with native woody species until fire occurs, after which they become dominant. Yet it is not clear how long their impacts persist in the absence of further fire. We evaluated the long-term impacts of grass invasions and subsequent fire in seasonally dry submontane habitats on Hawai'i, USA. We recensused transects in invaded unburned woodland and woodland that had burned in exotic grass-fueled fires in 1970 and 1987 and had last been censused in 1991. In the unburned woodlands, we found that the dominant understory grass invader, Schizachyrium condensatum, had declined by 40%, while native understory species were abundant and largely unchanged from measurements 17 years ago. In burned woodland, exotic grass cover also declined, but overall values remained high and recruitment of native species was poor. Sites that had converted to exotic grassland after a 1970 fire remained dominated by exotic grasses with no increase in native cover despite 37 years without fire. Grass-dominated sites that had burned twice also showed limited recovery despite 20 years of fire suppression. We found limited evidence for ""invasional meltdown"": Exotic richness remained low across burned sites, and the dominant species in 1991, Melinis minutiflora, is still dominant today. Twice-burned sites are, however, being invaded by the nitrogen-fixing tree Morella faya, an introduced species with the potential to greatly alter the successional trajectory on young volcanic soils. In summary, despite decades of fire suppression, native species show little recovery in burned Hawaiian woodlands. Thus, burned sites appear to be beyond a threshold for ""natural recovery"" (e.g., passive restoration).",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56738,Invasional meltdown. Invader-invader mutualism facilitates a secondary invasion,S189282,R56739,hypothesis,L117900,Invasional meltdown,"In multiply invaded ecosystems, introduced species should interact with each other as well as with native species. Invader-invader interactions may affect the success of further invaders by altering attributes of recipient communities and propagule pressure. The invasional meltdown hypothesis (IMH) posits that positive interactions among invaders initiate positive population-level feedback that intensifies impacts and promotes secondary invasions. IMH remains controversial: few studies show feedback between invaders that amplifies their effects, and none yet demonstrate facilitation of entry and spread of secondary invaders. Our results show that supercolonies of an alien ant, promoted by mutualism with introduced honeydew-secreting scale insects, permitted invasion by an exotic land snail on Christmas Island, Indian Ocean. Modeling of land snail spread over 750 sites across 135 km2 over seven years showed that the probability of land snail invasion was facilitated 253-fold in ant supercolonies but impeded in intact forest where predaceous native land crabs remained abundant. Land snail occurrence at neighboring sites, a measure of propagule pressure, also promoted land snail spread. Site comparisons and experiments revealed that ant supercolonies, by killing land crabs but not land snails, disrupted biotic resistance and provided enemy-free space. Predation pressure on land snails was lower (28.6%), survival 115 times longer, and abundance 20-fold greater in supercolonies than in intact forest. Whole-ecosystem suppression of supercolonies reversed the probability of land snail invasion by allowing recolonization of land crabs; land snails were much less likely (0.79%) to invade sites where supercolonies were suppressed than where they remained intact. Our results provide strong empirical evidence for IMH by demonstrating that mutualism between invaders reconfigures key interactions in the recipient community. This facilitates entry of secondary invaders and elevates propagule pressure, propagating their spread at the whole-ecosystem level. We show that identification and management of key facilitative interactions in invaded ecosystems can be used to reverse impacts and restore resistance to further invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56754,Invasional interference due to similar inter- and intraspecific competition between invaders may affect management,S189460,R56755,hypothesis,L118046,Invasional meltdown,"As the number of biological invasions increases, the potential for invader-invader interactions also rises. The effect of multiple invaders can be superadditive (invasional meltdown), additive, or subadditive (invasional interference); which of these situations occurs has critical implications for prioritization of management efforts. Carduus nutans and C. acanthoides, two congeneric invasive weeds, have a striking, segregated distribution in central Pennsylvania, U.S.A. Possible hypotheses for this pattern include invasion history and chance, direct competition, or negative interactions mediated by other species, such as shared pollinators. To explore the role of resource competition in generating this pattern, we conducted three related experiments using a response-surface design throughout the life cycles of two cohorts. Although these species have similar niche requirements, we found no differential response to competition between conspecifics vs. congeners. The response to combined density was relatively weak for both species. While direct competitive interactions do not explain the segregated distributional patterns of these two species, we predict that invasions of either species singly, or both species together, would have similar impacts. When prioritizing which areas to target to prevent the spread of one of the species, it is better to focus on areas as yet unaffected by its congener; where the congener is already present, invasional interference makes it unlikely that the net effect will change.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56760,Facilitation and competition among invasive plants: A field experiment with alligatorweed and water hyacinth,S189527,R56761,hypothesis,L118101,Invasional meltdown,"Ecosystems that are heavily invaded by an exotic species often contain abundant populations of other invasive species. This may reflect shared responses to a common factor, but may also reflect positive interactions among these exotic species. Armand Bayou (Pasadena, TX) is one such ecosystem where multiple species of invasive aquatic plants are common. We used this system to investigate whether presence of one exotic species made subsequent invasions by other exotic species more likely, less likely, or if it had no effect. We performed an experiment in which we selectively removed exotic rooted and/or floating aquatic plant species and tracked subsequent colonization and growth of native and invasive species. This allowed us to quantify how presence or absence of one plant functional group influenced the likelihood of successful invasion by members of the other functional group. We found that presence of alligatorweed (rooted plant) decreased establishment of new water hyacinth (free-floating plant) patches but increased growth of hyacinth in established patches, with an overall net positive effect on success of water hyacinth. Water hyacinth presence had no effect on establishment of alligatorweed but decreased growth of existing alligatorweed patches, with an overall net negative effect on success of alligatorweed. Moreover, observational data showed positive correlations between hyacinth and alligatorweed with hyacinth, on average, more abundant. The negative effect of hyacinth on alligatorweed growth implies competition, not strong mutual facilitation (invasional meltdown), is occurring in this system. Removal of hyacinth may increase alligatorweed invasion through release from competition. However, removal of alligatorweed may have more complex effects on hyacinth patch dynamics because there were strong opposing effects on establishment versus growth. The mix of positive and negative interactions between floating and rooted aquatic plants may influence local population dynamics of each group and thus overall invasion pressure in this watershed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56781,Reciprocally beneficial interactions between introduced plants and ants are induced by the presence of a third introduced species,S189763,R56782,hypothesis,L118295,Invasional meltdown,"Interspecific interactions play an important role in the success of introduced species. For example, the ‘enemy release’ hypothesis posits that introduced species become invasive because they escape top–down regulation by natural enemies while the ‘invasional meltdown’ hypothesis posits that invasions may be facilitated by synergistic interactions between introduced species. Here, we explore how facilitation and enemy release interact to moderate the potential effect of a large category of positive interactions – protection mutualisms. We use the interactions between an introduced plant (Japanese knotweed Fallopia japonica), an introduced herbivore (Japanese beetle Popillia japonica), an introduced ant (European red ant Myrmica rubra), and native ants and herbivores in riparian zones of the northeastern United States as a model system. Japanese knotweed produces sugary extrafloral nectar that is attractive to ants, and we show that both sugar reward production and ant attendance increase when plants experience a level of leaf damage that is typical in the plants’ native range. Using manipulative experiments at six sites, we demonstrate low levels of ant patrolling, little effect of ants on herbivory rates, and low herbivore pressure during midsummer. Herbivory rates and the capacity of ants to protect plants (as evidenced by effects of ant exclusion) increased significantly when plants were exposed to introduced Japanese beetles that attack plants in the late summer. Beetles were also associated with greater on-plant foraging by ants, and among-plant differences in ant-foraging were correlated with the magnitude of damage inflicted on plants by the beetles. Last, we found that sites occupied by introduced M. rubra ants almost invariably included Japanese knotweed. Thus, underlying variation in the spatiotemporal distribution of the introduced herbivore influences the provision of benefits to the introduced plant and to the introduced ant. More specifically, the presence of the introduced herbivore converts an otherwise weak interaction between two introduced species into a reciprocally beneficial mutualism. Because the prospects for facilitation are linked to the prospects for enemy release in protection mutualisms, species",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56839,Novel interactions between non-native mammals and fungi facilitate establishment of invasive pines,S190407,R56840,hypothesis,L118823,Invasional meltdown,"The role of novel ecological interactions between mammals, fungi and plants in invaded ecosystems remains unresolved, but may play a key role in the widespread successful invasion of pines and their ectomycorrhizal fungal associates, even where mammal faunas originate from different continents to trees and fungi as in New Zealand. We examine the role of novel mammal associations in dispersal of ectomycorrhizal fungal inoculum of North American pines (Pinus contorta, Pseudotsuga menziesii), and native beech trees (Lophozonia menziesii) using faecal analyses, video monitoring and a bioassay experiment. Both European red deer (Cervus elaphus) and Australian brushtail possum (Trichosurus vulpecula) pellets contained spores and DNA from a range of native and non‐native ectomycorrhizal fungi. Faecal pellets from both animals resulted in ectomycorrhizal infection of pine seedlings with fungal genera Rhizopogon and Suillus, but not with native fungi or the invasive fungus Amanita muscaria, despite video and DNA evidence of consumption of these fungi. Native L. menziesii seedlings never developed any ectomycorrhizal infection from faecal pellet inoculation. Synthesis. Our results show that introduced mammals from Australia and Europe facilitate the co‐invasion of invasive North American trees and Northern Hemisphere fungi in New Zealand, while we find no evidence that introduced mammals benefit native trees or fungi. This novel tripartite ‘invasional meltdown’, comprising taxa from three kingdoms and three continents, highlights unforeseen consequences of global biotic homogenization.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56871,"Experimental test of the invasional meltdown hypothesis: an exotic herbivore facilitates an exotic plant, but the plant does not reciprocally facilitate the herbivore",S190763,R56872,hypothesis,L119115,Invasional meltdown,"Summary Ecosystems with multiple exotic species may be affected by facilitative invader interactions, which could lead to additional invasions (invasional meltdown hypothesis). Experiments show that one-way facilitation favours exotic species and observational studies suggest that reciprocal facilitation among exotic species may lead to an invasional meltdown. We conducted a mesocosm experiment to determine whether reciprocal facilitation occurs in wetland communities. We established communities with native wetland plants and aquatic snails. Communities were assigned to treatments: control (only natives), exotic snail (Pomacea maculata) invasion, exotic plant (Alternanthera philoxeroides) invasion, sequential invasion (snails then plants or plants then snails) or simultaneous invasion (snails and plants). Pomacea maculata preferentially consumed native plants, so A. philoxeroides comprised a larger percentage of plant mass and native plant mass was lowest in sequential (snail then plant) invasion treatments. Even though P. maculata may indirectly facilitate A. philoxeroides, A. philoxeroides did not reciprocally facilitate P. maculata. Rather, ecosystems invaded by multiple exotic species may be affected by one-way facilitation or reflect exotic species’ common responses to abiotic factors or common paths of introduction.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56899,Biodiversity effects and rates of spread of nonnative eucalypt woodlands in central California,S191074,R56900,hypothesis,L119370,Invasional meltdown,"Woodlands comprised of planted, nonnative trees are increasing in extent globally, while native woodlands continue to decline due to human activities. The ecological impacts of planted woodlands may include changes to the communities of understory plants and animals found among these nonnative trees relative to native woodlands, as well as invasion of adjacent habitat areas through spread beyond the originally planted areas. Eucalypts (Eucalyptus spp.) are among the most widely planted trees worldwide, and are very common in California, USA. The goals of our investigation were to compare the biological communities of nonnative eucalypt woodlands to native oak woodlands in coastal central California, and to examine whether planted eucalypt groves have increased in size over the past decades. We assessed site and habitat attributes and characterized biological communities using understory plant, ground-dwelling arthropod, amphibian, and bird communities as indicators. Degree of difference between native and nonnative woodlands depended on the indicator used. Eucalypts had significantly greater canopy height and cover, and significantly lower cover by perennial plants and species richness of arthropods than oaks. Community composition of arthropods also differed significantly between eucalypts and oaks. Eucalypts had marginally significantly deeper litter depth, lower abundance of native plants with ranges limited to western North America, and lower abundance of amphibians. In contrast to these differences, eucalypt and oak groves had very similar bird community composition, species richness, and abundance. We found no evidence of ""invasional meltdown,"" documenting similar abundance and richness of nonnatives in eucalypt vs. oak woodlands. Our time-series analysis revealed that planted eucalypt groves increased 271% in size, on average, over six decades, invading adjacent areas. Our results inform science-based management of California woodlands, revealing that while bird communities would probably not be affected by restoration of eucalypt to oak woodlands, such a restoration project would not only stop the spread of eucalypts into adjacent habitats but would also enhance cover by western North American native plants and perennials, enhance amphibian abundance, and increase arthropod richness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56903,"Quantifying ""apparent"" impact and distinguishing impact from invasiveness in multispecies plant invasions",S191119,R56904,hypothesis,L119407,Invasional meltdown,"The quantification of invader impacts remains a major hurdle to understanding and managing invasions. Here, we demonstrate a method for quantifying the community-level impact of multiple plant invaders by applying Parker et al.'s (1999) equation (impact = range x local abundance x per capita effect or per unit effect) using data from 620 survey plots from 31 grasslands across west-central Montana, USA. In testing for interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions for the 25 exotics tested. While much concern exists regarding impact thresholds, we also found little evidence for nonlinear relationships between invader abundance and impacts. These results suggest that management actions that reduce invader abundance should reduce invader impacts monotonically in this system. Eleven of 25 invaders had significant per unit impacts (negative local-scale relationships between invader and native cover). In decomposing the components of impact, we found that local invader abundance had a significant influence on the likelihood of impact, but range (number of plots occupied) did not. This analysis helped to differentiate measures of invasiveness (local abundance and range) from impact to distinguish high-impact invaders from invaders that exhibit negligible impacts, even when widespread. Distinguishing between high- and low-impact invaders should help refine trait-based prediction of problem species. Despite the unique information derived from evaluation of per unit effects of invaders, invasiveness scores based on range and local abundance produced similar rankings to impact scores that incorporated estimates of per unit effects. Hence, information on range and local abundance alone was sufficient to identify problematic plant invaders at the regional scale. In comparing empirical data on invader impacts to the state noxious weed list, we found that the noxious weed list captured 45% of the high impact invaders but missed 55% and assigned the lowest risk category to the highest-impact invader. While such subjective weed lists help to guide invasive species management, empirical data are needed to develop more comprehensive rankings of ecological impacts. Using weed lists to classify invaders for testing invasion theory is not well supported.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56919,Strong invaders are strong defenders - implications for the resistance of invaded communities,S191295,R56920,hypothesis,L119551,Invasional meltdown,"Many ecosystems receive a steady stream of non-native species. How biotic resistance develops over time in these ecosystems will depend on how established invaders contribute to subsequent resistance. If invasion success and defence capacity (i.e. contribution to resistance) are correlated, then community resistance should increase as species accumulate. If successful invaders also cause most impact (through replacing native species with low defence capacity) then the effect will be even stronger. If successful invaders instead have weak defence capacity or even facilitative attributes, then resistance should decrease with time, as proposed by the invasional meltdown hypothesis. We analysed 1157 introductions of freshwater fish in Swedish lakes and found that species' invasion success was positively correlated with their defence capacity and impact, suggesting that these communities will develop stronger resistance over time. These insights can be used to identify scenarios where invading species are expected to cause large impact.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56923,Early life stages of exotic gobiids as new hosts for unionid glochidia,S191340,R56924,hypothesis,L119588,Invasional meltdown,"Summary Introduction of an exotic species has the potential to alter interactions between fish and bivalves; yet our knowledge in this field is limited, not least by lack of studies involving fish early life stages (ELS). Here, for the first time, we examine glochidial infection of fish ELS by native and exotic bivalves in a system recently colonised by two exotic gobiid species (round goby Neogobius melanostomus, tubenose goby Proterorhinus semilunaris) and the exotic Chinese pond mussel Anodonta woodiana. The ELS of native fish were only rarely infected by native glochidia. By contrast, exotic fish displayed significantly higher native glochidia prevalence and mean intensity of infection than native fish (17 versus 2% and 3.3 versus 1.4 respectively), inferring potential for a parasite spillback/dilution effect. Exotic fish also displayed a higher parasitic load for exotic glochidia, inferring potential for invasional meltdown. Compared to native fish, presence of gobiids increased the total number of glochidia transported downstream on drifting fish by approximately 900%. We show that gobiid ELS are a novel, numerous and ‘attractive’ resource for unionid glochidia. As such, unionids could negatively affect gobiid recruitment through infection-related mortality of gobiid ELS and/or reinforce downstream unionid populations through transport on drifting gobiid ELS. These implications go beyond what is suggested in studies of older life stages, thereby stressing the importance of an holistic ontogenetic approach in ecological studies.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54032,Morphological variation between non-native lake- and stream-dwelling pumpkinseed Lepomis gibbosus in the Iberian Peninsula,S165279,R54033,Species name,L100213,Lepomis gibbosus,"The objective of this study was to test if morphological differences in pumpkinseed Lepomis gibbosus found in their native range (eastern North America) that are linked to feeding regime, competition with other species, hydrodynamic forces and habitat were also found among stream- and lake- or reservoir-dwelling fish in Iberian systems. The species has been introduced into these systems, expanding its range, and is presumably well adapted to freshwater Iberian Peninsula ecosystems. The results show a consistent pattern for size of lateral fins, with L. gibbosus that inhabit streams in the Iberian Peninsula having longer lateral fins than those inhabiting reservoirs or lakes. Differences in fin placement, body depth and caudal peduncle dimensions do not differentiate populations of L. gibbosus from lentic and lotic water bodies and, therefore, are not consistent with functional expectations. Lepomis gibbosus from lotic and lentic habitats also do not show a consistent pattern of internal morphological differentiation, probably due to the lack of lotic-lentic differences in prey type. Overall, the univariate and multivariate analyses show that most of the external and internal morphological characters that vary among populations do not differentiate lotic from lentic Iberian populations. The lack of expected differences may be a consequence of the high seasonal flow variation in Mediterranean streams, and the resultant low- or no-flow conditions during periods of summer drought.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52079,Patterns of trait convergence and divergence among native and exotic species in herbaceous plant communities are not modified by nitrogen enrichment,S158765,R52080,hypothesis,L95648,limiting similarity,"1. Community assembly theories predict that the success of invading species into a new community should be predictable by functional traits. Environmental filters could constrain the number of successful ecological strategies in a habitat, resulting in similar suites of traits between native and successfully invading species (convergence). Conversely, concepts of limiting similarity and competitive exclusion predict native species will prevent invasion by functionally similar exotic species, resulting in trait divergence between the two species pools. Nutrient availability may further alter the strength of convergent or divergent forces in community assembly, by relaxing environmental constraints and/or influencing competitive interactions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52094,Limiting similarity between invaders and dominant species in herbaceous plant communities?,S158903,R52095,hypothesis,L95764,limiting similarity,"1 Limiting similarity theory predicts that successful invaders should differ functionally from species already present in the community. This theory has been tested by manipulating the functional richness of communities, but not other aspects of functional diversity such as the identity of dominant species. Because dominant species are known to have strong effects on ecosystem functioning, I hypothesized that successful invaders should be functionally dissimilar from community dominants. 2 To test this hypothesis, I added seeds of 17 different species to two different experiments: one in a natural oldfield community that had patches dominated by different plant species, and one in grassland mesocosms that varied in the identity of the dominant species but not in species richness or evenness. I used indicator species analyses to test whether invaders had higher establishment success in plots with functionally different dominant species. 3 A large percentage of invader species (47–71%) in both experiments showed no difference in affinity across the different dominant treatments, although one‐third of species did show some evidence for limiting similarity. Exotic invaders had much higher invasion success than native invaders, and seemed to be inhibited by dominant species that were functionally similar. However, even these invasion patterns were not consistent across the two experiments. 4 The results from this study show that there is some evidence that dominant species suppress invasion by functionally similar species, beyond the effect of simple presence or absence of species in communities, although it is not the sole factor affecting invasion success. 
Patterns of invasion success were inconsistent across species and experiments, indicating that other studies using only a single species of invader to make conclusions about community invasibility should be interpreted with caution.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52104,Assembly rules operating along a primary riverbed-grassland successional sequence,S159000,R52105,hypothesis,L95846,limiting similarity,"1 Assembly rules are broadly defined as any filter imposed on a regional species pool that acts to determine the local community structure and composition. Environmental filtering is thought to result in the formation of groups of species with similar traits that tend to co‐occur more often than expected by chance alone, known as Beta guilds. At a smaller scale, within a single Beta guild, species may be partitioned into Alpha guilds – groups of species that have similar resource use and hence should tend not to co‐occur at small scales due the principle of limiting similarity. 2 This research investigates the effects of successional age and the presence of an invasive exotic species on Alpha and Beta guild structuring within plant communities along two successional river terrace sequences in the Waimakariri braided river system in New Zealand. 3 Fifteen sites were sampled, six with and nine without the Russel lupin (Lupinus polyphyllus), an invasive exotic species. At each site, species presence/absence was recorded in 100 circular quadrats (5 cm in diameter) at 30‐cm intervals along a 30‐m transect. Guild proportionality (Alpha guild structuring) was tested for using two a priori guild classifications each containing three guilds, and cluster analysis was used to test for environmental structuring between sites. 4 Significant assembly rules based on Alpha guild structuring were found, particularly for the monocot and dicot guild. Guild proportionality increased with increasing ecological age, which indicated an increase in the relative importance of competitive structuring at later stages of succession. This provides empirical support for Weiher and Keddy's theoretical model of community assembly. 
5 Lupins were associated with altered Alpha and Beta guild structuring at early mid successional sites. Lupin‐containing sites had higher silt content than sites without lupins, and this could have altered the strength and scale of competitive structuring within the communities present. 6 This research adds to the increasing evidence for the existence of assembly rules based on limiting similarity within plant communities, and demonstrates the need to incorporate gradients of environmental and competitive adversity when investigating the rules that govern community assembly.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52109,Establishment and Management of Native Functional Groups in Restoration,S159054,R52111,hypothesis,L95892,limiting similarity,"The limiting similarity hypothesis predicts that communities should be more resistant to invasion by non‐natives when they include natives with a diversity of traits from more than one functional group. In restoration, planting natives with a diversity of traits may result in competition between natives of different functional groups and may influence the efficacy of different seeding and maintenance methods, potentially impacting native establishment. We compare initial establishment and first‐year performance of natives and the effectiveness of maintenance techniques in uniform versus mixed functional group plantings. We seeded ruderal herbaceous natives, longer‐lived shrubby natives, or a mixture of the two functional groups using drill‐ and hand‐seeding methods. Non‐natives were left undisturbed, removed by hand‐weeding and mowing, or treated with herbicide to test maintenance methods in a factorial design. Native functional groups had highest establishment, growth, and reproduction when planted alone, and hand‐seeding resulted in more natives as well as more of the most common invasive, Brassica nigra. Wick herbicide removed more non‐natives and resulted in greater reproduction of natives, while hand‐weeding and mowing increased native density. Our results point to the importance of considering competition among native functional groups as well as between natives and invasives in restoration. Interactions among functional groups, seeding methods, and maintenance techniques indicate restoration will be easier to implement when natives with different traits are planted separately.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54241,Greater morphological plasticity of exotic honeysuckle species may make them better invaders than native species,S167735,R54242,Species name,L102251,Lonicera japonica,"sempervirens L., a non-invasive native. We hypothesized that greater morphological plasticity may contribute to the ability of L. japonica to occupy more habitat types, and contribute to its invasiveness. We compared the morphology of plants provided with climbing supports with plants that had no climbing supports, and thus quantified their morphological plasticity in response to an important variable in their habitats. The two species responded differently to the treatments, with L. japonica showing greater responses in more characters. For example, Lonicera japonica responded to climbing supports with a 15.3% decrease in internode length, a doubling of internode number and a 43% increase in shoot biomass. In contrast, climbing supports did not influence internode length or shoot biomass for L. sempervirens, and only resulted in a 25% increase in internode number. This plasticity may allow L. japonica to actively place plant modules in favorable microhabitats and ultimately affect plant fitness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54212,Phenotypic plasticity of native vs. invasive purple loosestrife: A two-state multivariate approach,S167391,R54213,Species name,L101965,Lythrum salicaria,"The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54236,Induced defenses in response to an invading crab predator: An explanation of historical and geographic phenotypic change,S167673,R54237,Species name,L102199,Marine snails,"The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey. Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54092,Invasive Microstegium populations consistently outperform native range populations across diverse environments,S165983,R54093,Species name,L100797,Microstegium vimineum,"Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin x environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin x environment interactions, between native (China) and introduced (U.S.) populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. 
Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54052,Light Response of Native and Introduced Miscanthus sinensis Seedlings,S165520,R54053,Species name,L100414,Miscanthus sinensis,"The Asian grass Miscanthus sinensis (Poaceae) is being considered for use as a bioenergy crop in the U.S. Corn Belt. Originally introduced to the United States for ornamental plantings, it escaped, forming invasive populations. The concern is that naturalized M. sinensis populations have evolved shade tolerance. We tested the hypothesis that seedlings from within the invasive U.S. range of M. sinensis would display traits associated with shade tolerance, namely increased area for light capture and phenotypic plasticity, compared with seedlings from the native Japanese populations. In a common garden experiment, seedlings of 80 half-sib maternal lines were grown from the native range (Japan) and 60 half-sib maternal lines from the invasive range (U.S.) under four light levels. Seedling leaf area, leaf size, growth, and biomass allocation were measured on the resulting seedlings after 12 wk. Seedlings from both regions responded strongly to the light gradient. High light conditions resulted in seedlings with greater leaf area, larger leaves, and a shift to greater belowground biomass investment, compared with shaded seedlings. Japanese seedlings produced more biomass and total leaf area than U.S. seedlings across all light levels. Generally, U.S. and Japanese seedlings allocated a similar amount of biomass to foliage and equal leaf area per leaf mass. Subtle differences in light response by region were observed for total leaf area, mass, growth, and leaf size. U.S. seedlings had slightly higher plasticity for total mass and leaf area but lower plasticity for measures of biomass allocation and leaf traits compared with Japanese seedlings. Our results do not provide general support for the hypothesis of increased M. sinensis shade tolerance within its introduced U.S. 
range compared with native Japanese populations. Nomenclature: Eulaliagrass; Miscanthus sinensis Anderss. Management Implications: Eulaliagrass (Miscanthus sinensis), an Asian species under consideration for biomass production in the Midwest, has escaped ornamental plantings in the United States to form naturalized populations. Evidence suggests that U.S. populations are able to tolerate relatively shady conditions, but it is unclear whether U.S. populations have greater shade tolerance than the relatively shade-intolerant populations within the species' native range in Asia. Increased shade tolerance could result in a broader range of invaded light environments within the introduced range of M. sinensis. However, results from our common garden experiment do not support the hypothesis of increased shade tolerance in introduced U.S. populations compared with seedlings from native Asian populations. Our results do demonstrate that for both U.S. and Japanese populations under low light conditions, M. sinensis seeds germinate and seedlings gain mass and leaf area; therefore, land managers should carefully monitor or eradicate M. sinensis within these habitats.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54126,Differential patterns of plasticity to water availability along native and naturalized latitudinal gradients,S166379,R54127,Specific traits,L101125,Morphological and fitness-related traits,"Questions: Does plasticity to water availability differ between native and naturalized and laboratory plant accessions? Is there a relationship between morphological plasticity and a fitness measure? Can we account for latitudinal patterns of plasticity with rainfall data from the seed source location? Organism: We examined an array of 23 native, 14 naturalized, and 5 laboratory accessions of Arabidopsis thaliana. Methods: We employed a split-plot experimental design in the greenhouse with two water treatments. We measured morphological and fitness-related traits at various developmental stages. We utilized a published dataset representing 30-year average precipitation trends for each accession origin. Results: We detected evidence of differential patterns of plasticity between native, naturalized, and laboratory populations for several morphological traits. Native, laboratory, and naturalized populations also differed in which traits were positively associated with fitness, and did not follow the Jack-of-all-trades or Master-of-some scenarios. Significant negative relationships were detected for plasticity in morphological traits with latitude. We found modest evidence that rainfall may play a role in this latitudinal trend.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54120,"Variation in morphological characters of two invasive leafminers, Liriomyza huidobrensis and L. sativae, across a tropical elevation gradient",S166312,R54121,Specific traits,L101070,Morphological differences,"Abstract Changes in morphological traits along elevation and latitudinal gradients in ectotherms are often interpreted in terms of the temperature-size rule, which states that the body size of organisms increases under low temperatures, and is therefore expected to increase with elevation and latitude. However other factors like host plant might contribute to spatial patterns in size as well, particularly for polyphagous insects. Here elevation patterns for trait size and shape in two leafminer species are examined, Liriomyza huidobrensis (Blanchard) (Diptera: Agromyzidae) and L. sativae Blanchard, along a tropical elevation gradient in Java, Indonesia. Adult leafminers were trapped from different locations in the mountainous area of Dieng in the province of Central Java. To separate environmental versus genetic effects, L. huidobrensis originating from 1378 m and 2129 m ASL were reared in the laboratory for five generations. Size variation along the elevation gradient was only found in L. huidobrensis and this followed expectations based on the temperature-size rule. There were also complex changes in wing shape along the gradient. Morphological differences were influenced by genetic and environmental effects. Findings are discussed within the context of adaptation to different elevations in the two species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54078,"Intra-population variability of life-history traits and growth during range expansion of the invasive round goby, Neogobius melanostomus",S165814,R54079,Species name,L100656,Neogobius melanostomus,"Fish can undergo changes in their life-history traits that correspond with local demographic conditions. Under range expansion, a population of non-native fish might then be expected to exhibit a suite of life-history traits that differ between the edge and the centre of the population’s geographic range. To test this hypothesis, life-history traits of an expanding population of round goby, Neogobius melanostomus (Pallas), in early and newly established sites in the Trent River (Ontario, Canada) were compared in 2007 and 2008. Round goby in the area of first introduction exhibited a significant decrease in age at maturity, increased length at age 1 and they increased in GSI from 2007 to 2008. While individuals at the edges of the range exhibited traits that promote population growth under low intraspecific density, yearly variability in life-history traits suggests that additional processes such as declining density and fluctuating food availability are influencing the reproductive strategy and growth of round goby during an invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54070,Phenotypic variation of an alien species in a new environment: the body size and diet of American mink over time and at local and continental scales,S165726,R54071,Species name,L100584,Neovison vison,"Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. 
This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. © 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681–693.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52073,Using ecological restoration to constrain biological invasion,S159548,R52074,Continent,R49276,North America,"Summary 1 Biological invasion can permanently alter ecosystem structure and function. Invasive species are difficult to eradicate, so methods for constraining invasions would be ecologically valuable. We examined the potential of ecological restoration to constrain invasion of an old field by Agropyron cristatum, an introduced C3 grass. 2 A field experiment was conducted in the northern Great Plains of North America. One-hundred and forty restored plots were planted in 1994–96 with a mixture of C3 and C4 native grass seed, while 100 unrestored plots were not. Vegetation on the plots was measured periodically between 1994 and 2002. 3 Agropyron cristatum invaded the old field between 1994 and 2002, occurring in 5% of plots in 1994 and 66% of plots in 2002, and increasing in mean cover from 0·2% in 1994 to 17·1% in 2002. However, A. cristatum invaded one-third fewer restored than unrestored plots between 1997 and 2002, suggesting that restoration constrained invasion. Further, A. cristatum cover in restored plots decreased with increasing planted grass cover. Stepwise regression indicated that A. cristatum cover was more strongly correlated with planted grass cover than with distance from the A. cristatum source, species richness, percentage bare ground or percentage litter. 4 The strength of the negative relationship between A. cristatum and planted native grasses varied among functional groups: the correlation was stronger with species with phenology and physiology similar to A. cristatum (i.e. C3 grasses) than with dissimilar species (C4 grasses). 5 Richness and cover of naturally establishing native species decreased with increasing A. cristatum cover. In contrast, restoration had little effect on the establishment and colonization of naturally establishing native species. Thus, A. 
cristatum hindered colonization by native species while planted native grasses did not. 6 Synthesis and applications. To our knowledge, this study provides the first indication that restoration can act as a filter, constraining invasive species while allowing colonization by native species. These results suggest that resistance to invasion depends on the identity of species in the community and that restoration seed mixes might be tailored to constrain selected invaders. Restoring areas before invasive species become established can reduce the magnitude of biological invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52077,Plant functional group identity and diversity determine biotic resistance to invasion by an exotic grass,S197599,R57143,Continent,R49276,North America,"Biotic resistance, the ability of species in a community to limit invasion, is central to our understanding of how communities at risk of invasion assemble after disturbances, but it has yet to translate into guiding principles for the restoration of invasion‐resistant plant communities. We combined experimental, functional, and modelling approaches to investigate processes of community assembly contributing to biotic resistance to an introduced lineage of Phragmites australis, a model invasive species in North America. We hypothesized that (i) functional group identity would be a good predictor of biotic resistance to P. australis, while species identity effect would be redundant within functional group (ii) mixtures of species would be more invasion resistant than monocultures. We classified 36 resident wetland plants into four functional groups based on eight functional traits. We conducted two competition experiments based on the additive competition design with P. australis and monocultures or mixtures of wetland plants. As an indicator of biotic resistance, we calculated a relative competition index (RCIavg) based on the average performance of P. australis in competition treatment compared with control. To explain diversity effect further, we partitioned it into selection effect and complementarity effect and tested several diversity–interaction models. In monoculture treatments, RCIavg of wetland plants was significantly different among functional groups, but not within each functional group. We found the highest RCIavg for fast‐growing annuals, suggesting priority effect. RCIavg of wetland plants was significantly greater in mixture than in monoculture mainly due to complementarity–diversity effect among functional groups. 
In diversity–interaction models, species interaction patterns in mixtures were described best by interactions between functional groups when fitted to RCIavg or biomass, implying niche partitioning. Synthesis. Functional group identity and diversity of resident plant communities are good indicators of biotic resistance to invasion by introduced Phragmites australis, suggesting niche pre‐emption (priority effect) and niche partitioning (diversity effect) as underlying mechanisms. Guiding principles to understand and/or manage biological invasion could emerge from advances in community theory and the use of a functional framework. Targeting widely distributed invasive plants in different contexts and scaling up to field situations will facilitate generalization.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52092,Species composition and diversity affect grassland susceptibility and response to invasion,S158883,R52093,Continent,L95747,North America,"In a microcosm experiment, I tested how species composition, species rich- ness, and community age affect the susceptibility of grassland communities to invasion by a noxious weed (Centaurea solstitialis L.). I also examined how these factors influenced Centaurea's impact on the rest of the plant community. When grown in monoculture, eight species found in California's grasslands differed widely in their ability to suppress Centaurea growth. The most effective competitor in monoculture was Hemizonia congesta ssp. luzulifolia, which, like Centaurea, is a summer- active annual forb. On average, Centaurea growth decreased as the species richness of communities increased. However, no polyculture suppressed Centaurea growth more than the monoculture of Hemizonia. Centaurea generally made up a smaller proportion of com- munity biomass in newly created (""new"") microcosms than in older (""established"") mi- crocosms, largely because Centaurea's competitors were more productive in the new treat- ment. Measures of complementarity suggest that Centaurea partitioned resources with an- nual grasses in the new microcosms. This resource partitioning may help to explain Cen- taurea's great success in western North American grasslands. Centaurea strongly suppressed growth of some species but hardly affected others. An- nual grasses were the least affected species in the new monocultures, and perennial grasses were among the least affected species in the established monocultures. In the new micro- cosms, Centaurea's suppression of competing species marginally abated with increasing species richness. 
This trend was a consequence of the declining success of Centaurea in species-rich communities, rather than a change in the vulnerability of these communities to suppression by a given amount of the invader. The impact of the invader was not related to species richness in the-established microcosms. The results of this study suggest that, at the neighborhood level, diversity can limit invasibility and may reduce the impact of an invader.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R52118,OVERLAP OF FOOD AND MICROHABITAT PREFERENCES AMONG SOME NATIVE AND NONNATIVE SLUGS IN MID-ATLANTIC FORESTS OF EASTERN NORTH AMERICA,S159419,R52119,Continent,R49276,North America,"Introduced competitors do not share an evolutionary history that would promote coexistence mechanisms, i.e. niche partitioning. Thus, nonnative species can harm a trophically similar native species by competing with them more intensely than other native species. However, nonnative species may only be able initially to invade habitats in which resource overlap with native species is small. The nonnative slug Arion subfuscus exists in close sympatry with the native philomycid slugs Philomycus carolinianus and Megapallifera mutabilis in central Maryland forests. Resource use by most terrestrial gastropods is poorly known, but seems to suggest high dietary and macrohabitat overlap, potentially placing native gastropod species at high risk of competitive pressure from invading species. However, A. subfuscus was introduced to North America 150 years ago, supporting the possibility that A. subfuscus initially entered an empty niche. We tested the hypothesis that P. carolinianus and M. mutabilis would exhibit greater overlap in food and microhabitat use with A. subfuscus than they would with each other. We established food preferences by examining the faecal material of wild-caught slugs, distinguishing food types and quantifying them by volume on a microgrid. We determined microhabitat preferences by surveying the substrates of slugs in the field. The overlap in substrate and food resources was greater between A. subfuscus and P. carolinianus than between the two native species. However, substrate choice was correlated with local substrate availability for P. carolinianus, suggesting flexibility in habitat use, and the slight overlap in food use between A. subfuscus and P. carolinianus may be low enough to minimize competition.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54034,Norway maple displays greater seasonal growth and phenotypic plasticity to light than native sugar maple,S165305,R54035,Continent,L100235,North America,"Norway maple (Acer platanoides L), which is among the most invasive tree species in forests of eastern North America, is associated with reduced regeneration of the related native species, sugar maple (Acer saccharum Marsh) and other native flora. To identify traits conferring an advantage to Norway maple, we grew both species through an entire growing season under simulated light regimes mimicking a closed forest understorey vs. a canopy disturbance (gap). Dynamic shade-houses providing a succession of high-intensity direct-light events between longer periods of low, diffuse light were used to simulate the light regimes. We assessed seedling height growth three times in the season, as well as stem diameter, maximum photosynthetic capacity, biomass allocation above- and below-ground, seasonal phenology and phenotypic plasticity. Given the north European provenance of Norway maple, we also investigated the possibility that its growth in North America might be increased by delayed fall senescence. We found that Norway maple had significantly greater photosynthetic capacity in both light regimes and grew larger in stem diameter than sugar maple. The differences in below- and above-ground biomass, stem diameter, height and maximum photosynthesis were especially important in the simulated gap where Norway maple continued extension growth during the late fall. In the gap regime sugar maple had a significantly higher root : shoot ratio that could confer an advantage in the deepest shade of closed understorey and under water stress or browsing pressure. Norway maple is especially invasive following canopy disturbance where the opposite (low root : shoot ratio) could confer a competitive advantage. Considering the effects of global change in extending the potential growing season, we anticipate that the invasiveness of Norway maple will increase in the future.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54046,Phenotypic Plasticity and Population Differentiation in an Ongoing Species Invasion,S165448,R54047,Continent,L100354,North America,"The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54060,"Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations",S165610,R54061,Continent,L100488,North America,"Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research 52, 252–259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54106,High temperature tolerance and thermal plasticity in emerald ash borer Agrilus planipennis,S166147,R54107,Continent,L100933,North America,"1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood‐boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat‐treatment at a facility using protocols for pallet wood treatment under policy PI‐07, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth‐instar larvae. 3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 °C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 °C without any heat pre‐treatments. High temperature survival was increased by either slow warming or pre‐exposure to elevated temperatures and a recovery regime that was accompanied by up‐regulated hsp70 expression under some of these conditions. 4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54112,VARIATION IN PHENOTYPIC PLASTICITY AMONG NATIVE AND INVASIVE POPULATIONS OF ALLIARIA PETIOLATA,S166216,R54113,Continent,L100990,North America,"Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54148,Developmental plasticity of shell morphology of quagga mussels from shallow and deep-water habitats of the Great Lakes ,S166632,R54149,Continent,L101334,North America,"SUMMARY The invasive zebra mussel (Dreissena polymorpha) has quickly colonized shallow-water habitats in the North American Great Lakes since the 1980s but the quagga mussel (Dreissena bugensis) is becoming dominant in both shallow and deep-water habitats. While quagga mussel shell morphology differs between shallow and deep habitats, functional causes and consequences of such difference are unknown. We examined whether quagga mussel shell morphology could be induced by three environmental variables through developmental plasticity. We predicted that shallow-water conditions (high temperature, food quantity, water motion) would yield a morphotype typical of wild quagga mussels from shallow habitats, while deep-water conditions (low temperature, food quantity, water motion) would yield a morphotype present in deep habitats. We tested this prediction by examining shell morphology and growth rate of quagga mussels collected from shallow and deep habitats and reared under common-garden treatments that manipulated the three variables. Shell morphology was quantified using the polar moment of inertia. Of the variables tested, temperature had the greatest effect on shell morphology. Higher temperature (∼18–20°C) yielded a morphotype typical of wild shallow mussels regardless of the levels of food quantity or water motion. In contrast, lower temperature (∼6–8°C) yielded a morphotype approaching that of wild deep mussels. If shell morphology has functional consequences in particular habitats, a plastic response might confer quagga mussels with a greater ability than zebra mussels to colonize a wider range of habitats within the Great Lakes.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54176,Inducible defences as key adaptations for the successful invasion of Daphnia lumholtzi in North America?,S166962,R54177,Continent,L101608,North America,"The mechanisms underlying successful biological invasions often remain unclear. In the case of the tropical water flea Daphnia lumholtzi, which invaded North America, it has been suggested that this species possesses a high thermal tolerance, which in the course of global climate change promotes its establishment and rapid spread. However, D. lumholtzi has an additional remarkable feature: it is the only water flea that forms rigid head spines in response to chemicals released in the presence of fishes. These morphologically (phenotypically) plastic traits serve as an inducible defence against these predators. Here, we show in controlled mesocosm experiments that the native North American species Daphnia pulicaria is competitively superior to D. lumholtzi in the absence of predators. However, in the presence of fish predation the invasive species formed its defences and became dominant. This observation of a predator-mediated switch in dominance suggests that the inducible defence against fish predation may represent a key adaptation for the invasion success of D. lumholtzi.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54212,Phenotypic plasticity of native vs. invasive purple loosestrife: A two-state multivariate approach,S167388,R54213,Continent,L101962,North America,"The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54228,Predator-induced phenotypic plasticity in the exotic cladoceran Daphnia lumholtzi,S167581,R54229,Continent,L102123,North America,"Summary 1. The exotic cladoceran Daphnia lumholtzi has recently invaded freshwater systems throughout the United States. Daphnia lumholtzi possesses extravagant head spines that are longer than those found on any other North American Daphnia. These spines are effective at reducing predation from many of the predators that are native to newly invaded habitats; however, they are plastic both in nature and in laboratory cultures. The purpose of this experiment was to better understand what environmental cues induce and maintain these effective predator-deterrent spines. We conducted life-table experiments on individual D. lumholtzi grown in water conditioned with an invertebrate insect predator, Chaoborus punctipennis, and water conditioned with a vertebrate fish predator, Lepomis macrochirus. 2. Daphnia lumholtzi exhibited morphological plasticity in response to kairomones released by both predators. However, direct exposure to predator kairomones during postembryonic development did not induce long spines in D. lumholtzi. In contrast, neonates produced from individuals exposed to Lepomis kairomones had significantly longer head and tail spines than neonates produced from control and Chaoborus individuals. These results suggest that there may be a maternal, or pre-embryonic, effect of kairomone exposure on spine development in D. lumholtzi. 3. Independent of these morphological shifts, D. lumholtzi also exhibited plasticity in life history characteristics in response to predator kairomones. For example, D. lumholtzi exhibited delayed reproduction in response to Chaoborus kairomones, and significantly more individuals produced resting eggs, or ephippia, in the presence of Lepomis kairomones.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54595,"Contingency of grassland restoration on year, site, and competition from introduced grasses",S171943,R54596,Continent,L105637,North America,"Semiarid ecosystems such as grasslands are characterized by high temporal variability in abiotic factors, which has led to suggestions that management actions may be more effective in some years than others. Here we examine this hypothesis in the context of grassland restoration, which faces two major obstacles: the contingency of native grass establishment on unpredictable precipitation, and competition from introduced species. We established replicated restoration experiments over three years at two sites in the northern Great Plains in order to examine the extent to which the success of several restoration strategies varied between sites and among years. We worked in 50-yr-old stands of crested wheatgrass (Agropyron cristatum), an introduced perennial grass that has been planted on >10 × 106 ha in western North America. Establishment of native grasses was highly contingent on local conditions, varying fourfold among years and threefold between sites. Survivorship also varied greatly and increased signi...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54770,Disturbance-mediated competition and the spread of Phragmites australis in a coastal marsh,S174021,R54771,Continent,L107365,North America,"In recent decades the grass Phragmites australis has been aggressively invading coastal, tidal marshes of North America, and in many areas it is now considered a nuisance species. While P. australis has historically been restricted to the relatively benign upper border of brackish and salt marshes, it has been expanding seaward into more physiologically stressful regions. Here we test a leading hypothesis that the spread of P. australis is due to anthropogenic modification of coastal marshes. We did a field experiment along natural borders between stands of P. australis and the other dominant grasses and rushes (i.e., matrix vegetation) in a brackish marsh in Rhode Island, USA. We applied a pulse disturbance in one year by removing or not removing neighboring matrix vegetation and adding three levels of nutrients (specifically nitrogen) in a factorial design, and then we monitored the aboveground performance of P. australis and the matrix vegetation. Both disturbances increased the density, height, and biomass of shoots of P. australis, and the effects of fertilization were more pronounced where matrix vegetation was removed. Clearing competing matrix vegetation also increased the distance that shoots expanded and their reproductive output, both indicators of the potential for P. australis to spread within and among local marshes. In contrast, the biomass of the matrix vegetation decreased with increasing severity of disturbance. Disturbance increased the total aboveground production of plants in the marsh as matrix vegetation was displaced by P. australis. A greenhouse experiment showed that, with increasing nutrient levels, P. australis allocates proportionally more of its biomass to aboveground structures used for spread than to belowground structures used for nutrient acquisition. Therefore, disturbances that enrich nutrients or remove competitors promote the spread of P. australis by reducing belowground competition for nutrients between P. australis and the matrix vegetation, thus allowing P. australis, the largest plant in the marsh, to expand and displace the matrix vegetation. Reducing nutrient load and maintaining buffers of matrix vegetation along the terrestrial-marsh ecotone will, therefore, be important methods of control for this nuisance species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54826,Shoreline development drives invasion of Phragmites australis and the loss of plant diversity on New England salt marshes,S174691,R54827,Continent,L107923,North America,"Abstract: The reed Phragmites australis Cav. is aggressively invading salt marshes along the Atlantic Coast of North America. We examined the interactive role of habitat alteration (i.e., shoreline development) in driving this invasion and its consequences for plant richness in New England salt marshes. We surveyed 22 salt marshes in Narragansett Bay, Rhode Island, and quantified shoreline development, Phragmites cover, soil salinity, and nitrogen availability. Shoreline development, operationally defined as removal of the woody vegetation bordering marshes, explained >90% of intermarsh variation in Phragmites cover. Shoreline development was also significantly correlated with reduced soil salinities and increased nitrogen availability, suggesting that removing woody vegetation bordering marshes increases nitrogen availability and decreases soil salinities, thus facilitating Phragmites invasion. Soil salinity (64%) and nitrogen availability (56%) alone explained a large proportion of variation in Phragmites cover, but together they explained 80% of the variation in Phragmites invasion success. Both univariate and aggregate (multidimensional scaling) analyses of plant community composition revealed that Phragmites dominance in developed salt marshes resulted in an almost three‐fold decrease in plant species richness. Our findings illustrate the importance of maintaining integrity of habitat borders in conserving natural communities and provide an example of the critical role that local conservation can play in preserving these systems. In addition, our findings provide ecologists and natural resource managers with a mechanistic understanding of how human habitat alteration in one vegetation community can interact with species introductions in adjacent communities (i.e., flow‐on or adjacency effects) to hasten ecosystem degradation.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54849,"Alien Flora in Grasslands Adjacent to Road and Trail Corridors in Glacier National Park, Montana (U.S.A.)",S174965,R54850,Continent,L108151,North America,": Alien plant species have rapidly invaded and successfully displaced native species in many grasslands of western North America. Thus, the status of alien species in the nature reserve grasslands of this region warrants special attention. This study describes alien flora in nine fescue grassland study sites adjacent to three types of transportation corridors—primary roads, secondary roads, and backcountry trails—in Glacier National Park, Montana (U.S.A.). Parallel transects, placed at varying distances from the adjacent road or trail, were used to determine alien species richness and frequency at individual study sites. Fifteen alien species were recorded, two Eurasian grasses, Phleum pratense and Poa pratensis, being particularly common in most of the study sites. In sites adjacent to primary and secondary roads, alien species richness declined out to the most distant transect, suggesting that alien species are successfully invading grasslands from the roadside area. In study sites adjacent to backcountry trails, absence of a comparable decline and unexpectedly high levels of alien species richness 100 m from the trailside suggest that alien species have been introduced in off-trail areas. The results of this study imply that in spite of low levels of livestock grazing and other anthropogenic disturbances, fescue grasslands in nature reserves of this region are vulnerable to invasion by alien flora. Given the prominent role that roadsides play in the establishment and dispersal of alien flora, road construction should be viewed from a biological, rather than an engineering, perspective. Nature reserve managers should establish effective roadside vegetation management programs that include monitoring, quickly treating keystone alien species upon their initial occurrence in nature reserves, and creating buffer zones on roadsides leading to nature reserves. Resumen: Especies de plantas introducidas han invadido rapidamente y desplazado exitosamente especies nativas en praderas del Oeste de America del Norte. Por lo tanto el estado de las especies introducidas en las reservas de pastizales naturales de esta region exige especial atencion. Este estudio describe la flora introducida en nueve pastizales naturales de festuca, las areas de estudios son adyacentes a tres tipos de corredores de transporte—caminos primarios, caminos secundarios y senderos remotos—en el Parque Nacional "Glacier," Montana (EE.UU). Para determinar riqueza y frecuencia de especies introducidas, se trazaron transectas paralelas, localizadas a distancias variables del camino o sendero adyacente en las areas de estudio. Se registraron quince especies introducidas. Dos pastos eurasiaticos, Phleum pratensis y Poa pratensis, resultaron particularmente abundantes en la mayoria de las areas de estudio. En lugares adyacentes a caminos primarios y secundarios, la riqueza de especies introducidas disminuyo en la direccion de las transectas mas distantes, sugiriendo que las especies introducidas estan invadiendo exitosamente las praderas desde areas aledanas a caminos. En las areas de estudio adyacentes a senderos remotos no se encontro una disminucion comparable; inesperados altos niveles de riqueza de especies introducidas a 100 m de los senderos, sugieren que las especies foraneas han sido introducidas desde otras areas fuera de los senderos. Los resultados de este estudio implican que a pesar de los bajos niveles de pastoreo y otras perturbaciones antropogenicas, los pastizales de festuca en las reservas naturales de esta region son vulnerables a la invasion de la flora introducida. Dado el rol preponderante que juegan los caminos en el establecimiento y dispersion de la flora introducida, la construccion de rutas debe ser vista desde un punto de vista biologico, mas que desde una perspectiva meramente ingenieril. Los administradores de reservas naturales deberian establecer programas efectivos de manejo de vegetacion en los bordes de los caminos. Estos programas deberian incluir monitoreo, tratamiento rapido de especies introducidas y claves tan pronto como se detecten en las reservas naturales, y creacion de zonas de transicion en los caminos que conducen a las reservas naturales.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55000,"Movement, colonization, and establishment success of a planthopper of prairie potholes, Delphacodes scolochloa (Hemiptera: Delphacidae)",S176185,R55001,Continent,L108921,North America,"Abstract 1. Movement, and particularly the colonisation of new habitat patches, remains one of the least known aspects of the life history and ecology of the vast majority of species. Here, a series of experiments was conducted to rectify this problem with Delphacodes scolochloa Cronin & Wilson, a wing‐dimorphic planthopper of the North American Great Plains.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55061,Determinants of vertebrate invasion success in Europe and North America,S197506,R57248,Continent,R49276,North America,"Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55076,The relative importance of latitude matching and propagule pressure in the colonization success of an invasive forb,S177036,R55077,Continent,L109620,North America,"Factors that influence the early stages of invasion can be critical to invasion success, yet are seldom studied. In particular, broad pre-adaptation to recipient climate may importantly influence early colonization success, yet few studies have explicitly examined this. I performed an experiment to determine how similarity between seed source and transplant site latitude, as a general indicator of pre-adaptation to climate, interacts with propagule pressure (100, 200 and 400 seeds/pot) to influence early colonization success of the widespread North American weed, St. John's wort Hypericum perforatum. Seeds originating from seven native European source populations were sown in pots buried in the ground in a field in western Montana. Seed source populations were either similar or divergent in latitude to the recipient transplant site. Across seed density treatments, the match between seed source and recipient latitude did not affect the proportion of pots colonized or the number of individual colonists per pot. In contrast, propagule pressure had a significant and positive effect on colonization. These results suggest that propagules from many climatically divergent source populations can be viable invaders.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55088,"Effects of soil fungi, disturbance and propagule pressure on exotic plant recruitment and establishment at home and abroad",S177181,R55089,Continent,L109741,North America,"Biogeographic experiments that test how multiple interacting factors influence exotic plant abundance in their home and recipient communities are remarkably rare. We examined the effects of soil fungi, disturbance and propagule pressure on seed germination, seedling recruitment and adult plant establishment of the invasive Centaurea stoebe in its native European and non‐native North American ranges. Centaurea stoebe can establish virtual monocultures in parts of its non‐native range, but occurs at far lower abundances where it is native. We conducted parallel experiments at four European and four Montana (USA) grassland sites with all factorial combinations of ± suppression of soil fungi, ±disturbance and low versus high knapweed propagule pressure [100 or 300 knapweed seeds per 0.3 m × 0.3 m plot (1000 or 3000 per m2)]. We also measured germination in buried bags containing locally collected knapweed seeds that were either treated or not with fungicide. Disturbance and propagule pressure increased knapweed recruitment and establishment, but did so similarly in both ranges. Treating plots with fungicides had no effect on recruitment or establishment in either range. However, we found: (i) greater seedling recruitment and plant establishment in undisturbed plots in Montana compared to undisturbed plots in Europe and (ii) substantially greater germination of seeds in bags buried in Montana compared to Europe. Also, across all treatments, total plant establishment was greater in Montana than in Europe. Synthesis. Our results highlight the importance of simultaneously examining processes that could influence invasion in both ranges. They indicate that under ‘background’ undisturbed conditions, knapweed recruits and establishes at greater abundance in Montana than in Europe. However, our results do not support the importance of soil fungi or local disturbances as mechanisms for knapweed's differential success in North America versus Europe.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55129,Propagule pressure and resource availability determine plant community invasibility in a temperate forest understorey,S177654,R55131,Continent,L110123,North America,"Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m2 sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56533,Interactions among aliens: apparent replacement of one exotic species by another,S186944,R56534,Continent,L115972,North America,"Although many studies have documented the impact of invasive species on indigenous flora and fauna, few have rigorously examined interactions among invaders and the potential for one exotic species to replace another. European green crabs (Carcinus maenas), once common in rocky intertidal habitats of southern New England, have recently declined in abundance coincident with the invasion of the Asian shore crab (Hemigrapsus sanguineus). Over a four-year period in the late 1990s we documented a significant (40-90%) decline in green crab abundance and a sharp (10-fold) increase in H. sanguineus at three sites in southern New England. Small, newly recruited green crabs had a significant risk of predation when paired with larger H. sanguineus in the laboratory, and recruitment of 0-yr C. maenas was reduced by H. sanguineus as well as by larger conspecifics in field-deployed cages (via predation and cannibalism, respectively). In contrast, recruitment of 0-yr H. sanguineus was not affected by larger individuals of either crab species during the same experiments. The differential susceptibility of C. maenas and H. sanguineus recruits to predation and cannibalism likely contributed to the observed decrease in C. maenas abundance and the almost exponential increase in H. sanguineus abundance during the period of study. While the Asian shore crab is primarily restricted to rocky intertidal habitats, C. maenas is found intertidally, subtidally, and in a range of substrate types in New England. Thus, the apparent replacement of C. maenas by H. sanguineus in rocky intertidal habitats of southern New England may not ameliorate the economic and ecological impacts attributed to green crab populations in other habitats of this region. For example, field experiments indicate that predation pressure on a native bivalve species (Mytilus edulis) has not necessarily decreased with the declines of C. maenas. While H. sanguineus has weaker per capita effects than C. maenas, its densities greatly exceed those of C. maenas at present and its population-level effects are likely comparable to the past effects of C. maenas. The Carcinus-Hemigrapsus interactions documented here are relevant in other parts of the world where green crabs and grapsid crabs interact, particularly on the west coast of North America where C. maenas has recently invaded and co-occurs with two native Hemigrapsus species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56614,Inhibition between invasives: a newly introduced predator moderates the impacts of a previously established invasive predator,S187881,R56615,Continent,L116747,North America,"1. With continued globalization, species are being transported and introduced into novel habitats at an accelerating rate. Interactions between invasive species may provide important mechanisms that moderate their impacts on native species. 2. The European green crab Carcinus maenas is an aggressive predator that was introduced to the east coast of North America in the mid-1800 s and is capable of rapid consumption of bivalve prey. A newer invasive predator, the Asian shore crab Hemigrapsus sanguineus, was first discovered on the Atlantic coast in the 1980s, and now inhabits many of the same regions as C. maenas within the Gulf of Maine. Using a series of field and laboratory investigations, we examined the consequences of interactions between these predators. 3. Density patterns of these two species at different spatial scales are consistent with negative interactions. As a result of these interactions, C. maenas alters its diet to consume fewer mussels, its preferred prey, in the presence of H. sanguineus. Decreased mussel consumption in turn leads to lower growth rates for C. maenas, with potential detrimental effects on C. maenas populations. 4. Rather than an invasional meltdown, this study demonstrates that, within the Gulf of Maine, this new invasive predator can moderate the impacts of the older invasive predator.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56843,Does whirling disease mediate hybridization between a native and nonnative trout?,S190449,R56844,Continent,L118857,North America,"AbstractThe spread of nonnative species over the last century has profoundly altered freshwater ecosystems, resulting in novel species assemblages. Interactions between nonnative species may alter their impacts on native species, yet few studies have addressed multispecies interactions. The spread of whirling disease, caused by the nonnative parasite Myxobolus cerebralis, has generated declines in wild trout populations across western North America. Westslope Cutthroat Trout Oncorhynchus clarkii lewisi in the northern Rocky Mountains are threatened by hybridization with introduced Rainbow Trout O. mykiss. Rainbow Trout are more susceptible to whirling disease than Cutthroat Trout and may be more vulnerable due to differences in spawning location. We hypothesized that the presence of whirling disease in a stream would (1) reduce levels of introgressive hybridization at the site scale and (2) limit the size of the hybrid zone at the whole-stream scale. We measured levels of introgression and the spatial ext...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56899,Biodiversity effects and rates of spread of nonnative eucalypt woodlands in central California,S191071,R56900,Continent,L119367,North America,"Woodlands comprised of planted, nonnative trees are increasing in extent globally, while native woodlands continue to decline due to human activities. The ecological impacts of planted woodlands may include changes to the communities of understory plants and animals found among these nonnative trees relative to native woodlands, as well as invasion of adjacent habitat areas through spread beyond the originally planted areas. Eucalypts (Eucalyptus spp.) are among the most widely planted trees worldwide, and are very common in California, USA. The goals of our investigation were to compare the biological communities of nonnative eucalypt woodlands to native oak woodlands in coastal central California, and to examine whether planted eucalypt groves have increased in size over the past decades. We assessed site and habitat attributes and characterized biological communities using understory plant, ground-dwelling arthropod, amphibian, and bird communities as indicators. Degree of difference between native and nonnative woodlands depended on the indicator used. Eucalypts had significantly greater canopy height and cover, and significantly lower cover by perennial plants and species richness of arthropods than oaks. Community composition of arthropods also differed significantly between eucalypts and oaks. Eucalypts had marginally significantly deeper litter depth, lower abundance of native plants with ranges limited to western North America, and lower abundance of amphibians. In contrast to these differences, eucalypt and oak groves had very similar bird community composition, species richness, and abundance. We found no evidence of ""invasional meltdown,"" documenting similar abundance and richness of nonnatives in eucalypt vs. oak woodlands. Our time-series analysis revealed that planted eucalypt groves increased 271% in size, on average, over six decades, invading adjacent areas. Our results inform science-based management of California woodlands, revealing that while bird communities would probably not be affected by restoration of eucalypt to oak woodlands, such a restoration project would not only stop the spread of eucalypts into adjacent habitats but would also enhance cover by western North American native plants and perennials, enhance amphibian abundance, and increase arthropod richness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56913,Scaling the consequences of interactions between invaders from the individual to the population level,S191226,R56914,Continent,L119494,North America,"Abstract The impact of human‐induced stressors, such as invasive species, is often measured at the organismal level, but is much less commonly scaled up to the population level. Interactions with invasive species represent an increasingly common source of stressor in many habitats. However, due to the increasing abundance of invasive species around the globe, invasive species now commonly cause stresses not only for native species in invaded areas, but also for other invasive species. I examine the European green crab Carcinus maenas, an invasive species along the northeast coast of North America, which is known to be negatively impacted in this invaded region by interactions with the invasive Asian shore crab Hemigrapsus sanguineus. Asian shore crabs are known to negatively impact green crabs via two mechanisms: by directly preying on green crab juveniles and by indirectly reducing green crab fecundity via interference (and potentially exploitative) competition that alters green crab diets. I used life‐table analyses to scale these two mechanistic stressors up to the population level in order to examine their relative impacts on green crab populations. I demonstrate that lost fecundity has larger impacts on per capita population growth rates, but that both predation and lost fecundity are capable of reducing population growth sufficiently to produce the declines in green crab populations that have been observed in areas where these two species overlap. By scaling up the impacts of one invader on a second invader, I have demonstrated that multiple documented interactions between these species are capable of having population‐level impacts and that both may be contributing to the decline of European green crabs in their invaded range on the east coast of North America.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R56996,Invasion success of vertebrates in Europe and North America,S197509,R57249,Continent,R49276,North America,"Species become invasive if they (i) are introduced to a new range, (ii) establish themselves, and (iii) spread. To address the global problems caused by invasive species, several studies investigated steps ii and iii of this invasion process. However, only one previous study looked at step i and examined the proportion of species that have been introduced beyond their native range. We extend this research by investigating all three steps for all freshwater fish, mammals, and birds native to Europe or North America. A higher proportion of European species entered North America than vice versa. However, the introduction rate from Europe to North America peaked in the late 19th century, whereas it is still rising in the other direction. There is no clear difference in invasion success between the two directions, so neither the imperialism dogma (that Eurasian species are exceptionally successful invaders) is supported, nor is the contradictory hypothesis that North America offers more biotic resistance to invaders than Europe because of its less disturbed and richer biota. Our results do not support the tens rule either: that approximately 10% of all introduced species establish themselves and that approximately 10% of established species spread. We find a success of approximately 50% at each step. In comparison, only approximately 5% of native vertebrates were introduced in either direction. These figures show that, once a vertebrate is introduced, it has a high potential to become invasive. Thus, it is crucial to minimize the number of species introductions to effectively control invasive vertebrates.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57000,Ecological predictions and risk assessment for alien fishes in North America,S193627,R57002,Continent,R49276,North America,"Methods of risk assessment for alien species, especially for nonagricultural systems, are largely qualitative. Using a generalizable risk assessment approach and statistical models of fish introductions into the Great Lakes, North America, we developed a quantitative approach to target prevention efforts on species most likely to cause damage. Models correctly categorized established, quickly spreading, and nuisance fishes with 87 to 94% accuracy. We then identified fishes that pose a high risk to the Great Lakes if introduced from unintentional (ballast water) or intentional pathways (sport, pet, bait, and aquaculture industries).",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57020,Differentiating successful and failed molluscan invaders in estuarine ecosystems,S193630,R57023,Continent,R49276,North America,"ABSTRACT: Despite mounting evidence of invasive species’ impacts on the environment and society, our ability to predict invasion establishment, spread, and impact is inadequate. Efforts to explain and predict invasion outcomes have been limited primarily to terrestrial and freshwater ecosystems. Invasions are also common in coastal marine ecosystems, yet to date predictive marine invasion models are absent. Here we present a model based on biological attributes associated with invasion success (establishment) of marine molluscs that compares successful and failed invasions from a group of 93 species introduced to San Francisco Bay (SFB) in association with commercial oyster transfers from eastern North America (ca. 1869 to 1940). A multiple logistic regression model correctly classified 83% of successful and 80% of failed invaders according to their source region abundance at the time of oyster transfers, tolerance of low salinity, and developmental mode. We tested the generality of the SFB invasion model by applying it to 3 coastal locations (2 in North America and 1 in Europe) that received oyster transfers from the same source and during the same time as SFB. The model correctly predicted 100, 75, and 86% of successful invaders in these locations, indicating that abundance, environmental tolerance (ability to withstand low salinity), and developmental mode not only explain patterns of invasion success in SFB, but more importantly, predict invasion success in geographically disparate marine ecosystems. Finally, we demonstrate that the proportion of marine molluscs that succeeded in the latter stages of invasion (i.e. that establish self-sustaining populations, spread and become pests) is much greater than has been previously predicted or shown for other animals and plants. KEY WORDS: Invasion · Bivalve · Gastropod · Mollusc · Marine · Oyster · Vector · Risk assessment",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57111,Environmental and biotic correlates to lionfish invasion success in Bahamian coral reefs,S197624,R57112,Continent,R49276,North America,"Lionfish (Pterois volitans), venomous predators from the Indo-Pacific, are recent invaders of the Caribbean Basin and southeastern coast of North America. Quantification of invasive lionfish abundances, along with potentially important physical and biological environmental characteristics, permitted inferences about the invasion process of reefs on the island of San Salvador in the Bahamas. Environmental wave-exposure had a large influence on lionfish abundance, which was more than 20 and 120 times greater for density and biomass respectively at sheltered sites as compared with wave-exposed environments. Our measurements of topographic complexity of the reefs revealed that lionfish abundance was not driven by habitat rugosity. Lionfish abundance was not negatively affected by the abundance of large native predators (or large native groupers) and was also unrelated to the abundance of medium prey fishes (total length of 5–10 cm). These relationships suggest that (1) higher-energy environments may impose intrinsic resistance against lionfish invasion, (2) habitat complexity may not facilitate the lionfish invasion process, (3) predation or competition by native fishes may not provide biotic resistance against lionfish invasion, and (4) abundant prey fish might not facilitate lionfish invasion success. The relatively low biomass of large grouper on this island could explain our failure to detect suppression of lionfish abundance and we encourage continuing the preservation and restoration of potential lionfish predators in the Caribbean. In addition, energetic environments might exert direct or indirect resistance to the lionfish proliferation, providing native fish populations with essential refuges.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57171,Invasion of exotic plant species in tallgrass prairie fragments,S197562,R57172,Continent,R49276,North America,"Abstract: The tallgrass prairie is one of the most severely affected ecosystems in North America. As a result of extensive conversion to agriculture during the last century, as little as 1% of the original tallgrass prairie remains. The remaining fragments of tallgrass prairie communities have conservation significance, but questions remain about their viability and importance to conservation. We investigated the effects of fragment size, native plant species diversity, and location on invasion by exotic plant species at 25 tallgrass prairie sites in central North America at various geographic scales. We used exotic species richness and relative cover as measures of invasion. Exotic species richness and cover were not related to area for all sites considered together. There were no significant relationships between native species richness and exotic species richness at the cluster and regional scale or for all sites considered together. At the local scale, exotic species richness was positively related to native species richness at four sites and negatively related at one. The 10 most frequently occurring and abundant exotic plant species in the prairie fragments were cool‐season, or C3, species, in contrast to the native plant community, which was dominated by warm‐season, or C4, species. This suggests that timing is important to the success of exotic species in the tallgrass prairie. Our study indicates that some small fragments of tallgrass prairie are relatively intact and should not be overlooked as long‐term refuges for prairie species, sources of genetic variability, and material for restoration.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57214,Plant community diversity and composition provide little resistance to Juniperus encroachment,S197533,R57215,Continent,R49276,North America,"Widespread encroachment of the fire-intolerant species Juniperus virginiana L. into North American grasslands and savannahs where fire has largely been removed has prompted the need to identify mechanisms driving J. virginiana encroachment. We tested whether encroachment success of J. virginiana is related to plant species diversity and composition across three plant communities. We predicted J. virginiana encroachment success would (i) decrease with increasing diversity, and (ii) J. virginiana encroachment success would be unrelated to species composition. We simulated encroachment by planting J. virginiana seedlings in tallgrass prairie, old-field grassland, and upland oak forest. We used J. virginiana survival and growth as an index of encroachment success and evaluated success as a function of plant community traits (i.e., species richness, species diversity, and species composition). Our results indicated that J. virginiana encroachment success increased with increasing plant richness and diversity. Moreover, growth and survival of J. virginiana seedlings was associated with plant species composition only in the old-field grassland and upland oak forest. These results suggest that greater plant species richness and diversity provide little resistance to J. virginiana encroachment, and the results suggest resource availability and other biotic or abiotic factors are determinants of J. virginiana encroachment success.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57223,Distributions of exotic plants in eastern Asia and North America,S197541,R57224,Continent,R49276,North America,"Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57273,"Phalaris arundinacea seedling establishment: effects of canopy complexity in fen, mesocosm, and restoration experiments",S197500,R57274,Continent,R49276,North America,"Phalaris arundinacea L. (reed canary grass) is a major invader of wetlands in temperate North America; it creates monotypic stands and displaces native vegetation. In this study, the effect of plant canopies on the establishment of P. arundinacea from seed in a fen, fen-like mesocosms, and a fen restoration site was assessed. In Wingra Fen, canopies that were more resistant to P. arundinacea establishment had more species (eight or nine versus four to six species) and higher cover of Aster firmus. In mesocosms planted with Glyceria striata plus 1, 6, or 15 native species, all canopies closed rapidly and prevented P. arundinacea establishment from seed, regardless of the density of the matrix species or the number of added species. Only after gaps were created in the canopy was P. arundinacea able to establish seedlings; then, the 15-species treatment reduced establishment to 48% of that for single-species canopies. A similar experiment in the restoration site produced less cover of native plants, and P. a...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57321,Biotic acceptance in introduced amphibians and reptiles in Europe and North America,S197451,R57322,Continent,R49276,North America,"Aim The biotic resistance hypothesis argues that complex plant and animal communities are more resistant to invasion than simpler communities. Conversely, the biotic acceptance hypothesis states that non-native and native species richness are positively related. Most tests of these hypotheses at continental scales, typically conducted on plants, have found support for biotic acceptance. We tested these hypotheses on both amphibians and reptiles across Europe and North America. Location Continental countries in Europe and states/provinces in North America. Methods We used multiple linear regression models to determine which factors predicted successful establishment of amphibians and reptiles in Europe and North America, and additional models to determine which factors predicted native species richness. Results Successful establishment of amphibians and reptiles in Europe and reptiles in North America was positively related to native species richness. We found higher numbers of successful amphibian species in Europe than in North America. Potential evapotranspiration (PET) was positively related to non-native species richness for amphibians and reptiles in Europe and reptiles in North America. PET was also the primary factor determining native species richness for both amphibians and reptiles in Europe and North America. Main conclusions We found support for the biotic acceptance hypothesis for amphibians and reptiles in Europe and reptiles in North America, suggesting that the presence of native amphibian and reptile species generally indicates good habitat for non-native species. Our data suggest that the greater number of established amphibians per native amphibians in Europe than in North America might be explained by more introductions in Europe or climate-matching of the invaders. Areas with high native species richness should be the focus of control and management efforts, especially considering that non-native species located in areas with a high number of natives can have a large impact on biological diversity.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57359,A null model of exotic plant diversity tested with exotic and native species-area relationships,S197434,R57360,Continent,R49276,North America,"At large spatial scales, exotic and native plant diversity exhibit a strong positive relationship. This may occur because exotic and native species respond similarly to processes that influence diversity over large geographical areas. To test this hypothesis, we compared exotic and native species-area relationships within six North American ecoregions. We predicted and found that within ecoregions the ratio of exotic to native species richness remains constant with increasing area. Furthermore, we predicted that areas with more native species than predicted by the species-area relationship would have proportionally more exotics as well. We did find that these exotic and native deviations were highly correlated, but areas that were good (or bad) for native plants were even better (or worse) for exotics. Similar processes appear to influence exotic and native plant diversity but the degree of this influence may differ with site quality.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57614,Herbivorous arthropod community of an alien weed Solanum carolinense L.,S203732,R57615,Continent,R49276,North America,"Herbivorous arthropod fauna of the horse nettle Solanum carolinense L., an alien solanaceous herb of North American origin, was characterized by surveying arthropod communities in the fields and comparing them with the original community compiled from published data to infer the impact of herbivores on the weed in the introduced region. Field surveys were carried out in the central part of mainland Japan for five years including an intensive regular survey in 1992. Thirty-nine arthropod species were found feeding on the weed. The leaf, stem, flower and fruit of the weed were infested by the herbivores. The comparison of characteristics of the arthropod community with those of the community in the USA indicated that more sapsuckers and less chewers were on the weed in Japan than in the USA. The community in Japan was composed of high proportions of polyphages and exophages compared to that in the USA. Eighty-seven percent of the species are known to be pests of agricultural crops. Low species diversity of the community was also suggested. The depauperated herbivore community, in terms of feeding habit and niche on S. carolinense, suggested that the weed partly escaped from herbivory in its reproductive parts. The regular population census, however, indicated that a dominant coccinellid beetle, Epilachna vigintioctopunctata, caused a noticeable damage on the leaves of the weed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57635,Invasive exotic plants suffer less herbivory than non-invasive exotic plants,S203723,R57636,Continent,R49276,North America,"We surveyed naturally occurring leaf herbivory in nine invasive and nine non-invasive exotic plant species sampled in natural areas in Ontario, New York and Massachusetts, and found that invasive plants experienced, on average, 96% less leaf damage than non-invasive species. Invasive plants were also more taxonomically isolated than non-invasive plants, belonging to families with 75% fewer native North American genera. However, the relationship between taxonomic isolation at the family level and herbivory was weak. We suggest that invasive plants may possess novel phytochemicals with anti-herbivore properties in addition to allelopathic and anti-microbial characteristics. Herbivory could be employed as an easily measured predictor of the likelihood that recently introduced exotic plants may become invasive.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57637,"Herbivory, time since introduction and the invasiveness of exotic plants",S203722,R57638,Continent,R49276,North America,"1 We tested the enemy release hypothesis for invasiveness using field surveys of herbivory on 39 exotic and 30 native plant species growing in natural areas near Ottawa, Canada, and found that exotics suffered less herbivory than natives. 2 For the 39 introduced species, we also tested relationships between herbivory, invasiveness and time since introduction to North America. Highly invasive plants had significantly less herbivory than plants ranked as less invasive. Recently arrived plants also tended to be more invasive; however, there was no relationship between time since introduction and herbivory. 3 Release from herbivory may be key to the success of highly aggressive invaders. Low herbivory may also indicate that a plant possesses potent defensive chemicals that are novel to North America, which may confer resistance to pathogens or enable allelopathy in addition to deterring herbivorous insects.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57682,"Experimental field comparison of native and non-native maple seedlings: natural enemies, ecophysiology, growth and survival",S203644,R57684,Continent,R49276,North America,"1 Acer platanoides (Norway maple) is an important non‐native invasive canopy tree in North American deciduous forests, where native species diversity and abundance are greatly reduced under its canopy. We conducted a field experiment in North American forests to compare planted seedlings of A. platanoides and Acer saccharum (sugar maple), a widespread, common native that, like A. platanoides, is shade tolerant. Over two growing seasons in three forests we compared multiple components of seedling success: damage from natural enemies, ecophysiology, growth and survival. We reasoned that equal or superior performance by A. platanoides relative to A. saccharum indicates seedling characteristics that support invasiveness, while inferior performance indicates potential barriers to invasion. 2 Acer platanoides seedlings produced more leaves and allocated more biomass to roots, A. saccharum had greater water use efficiency, and the two species exhibited similar photosynthesis and first‐season mortality rates. Acer platanoides had greater winter survival and earlier spring leaf emergence, but second‐season mortality rates were similar. 3 The success of A. platanoides seedlings was not due to escape from natural enemies, contrary to the enemy release hypothesis. Foliar insect herbivory and disease symptoms were similarly high for both native and non‐native, and seedling biomass did not differ. Rather, A. platanoides compared well with A. saccharum because of its equivalent ability to photosynthesize in the low light herb layer, its higher leaf production and greater allocation to roots, and its lower winter mortality coupled with earlier spring emergence. Its only potential barrier to seedling establishment, relative to A. saccharum, was lower water use efficiency, which possibly could hinder its invasion into drier forests. 4 The spread of non‐native canopy trees poses an especially serious problem for native forest communities, because canopy trees strongly influence species in all forest layers. Success at reaching the canopy depends on a tree's ecology in previous life‐history stages, particularly as a vulnerable seedling, but little is known about seedling characteristics that promote non‐native tree invasion. Experimental field comparison with ecologically successful native trees provides insight into why non‐native trees succeed as seedlings, which is a necessary stage on their journey into the forest canopy.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57698,Using parasites to inform ecological history: Comparisons among three congeneric marine snails,S203614,R57699,Continent,R49276,North America,"Species introduced to novel regions often leave behind many parasite species. Signatures of parasite release could thus be used to resolve cryptogenic (uncertain) origins such as that of Littorina littorea, a European marine snail whose history in North America has been debated for over 100 years. Through extensive field and literature surveys, we examined species richness of parasitic trematodes infecting this snail and two co-occurring congeners, L. saxatilis and L. obtusata, both considered native throughout the North Atlantic. Of the three snails, only L. littorea possessed significantly fewer trematode species in North America, and all North American trematodes infecting the three Littorina spp. were a nested subset of Europe. Surprisingly, several of L. littorea's missing trematodes in North America infected the other Littorina congeners. Most likely, long separation of these trematodes from their former host resulted in divergence of the parasites' recognition of L. littorea. Overall, these patterns of parasitism suggest a recent invasion from Europe to North America for L. littorea and an older, natural expansion from Europe to North America for L. saxatilis and L. obtusata.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57748,"Cryptic seedling herbivory by nocturnal introduced generalists impacts survival, performance of native and exotic plants",S203578,R57750,Continent,R49276,North America,"Although much of the theory on the success of invasive species has been geared at escape from specialist enemies, the impact of introduced generalist invertebrate herbivores on both native and introduced plant species has been underappreciated. The role of nocturnal invertebrate herbivores in structuring plant communities has been examined extensively in Europe, but less so in North America. Many nocturnal generalists (slugs, snails, and earwigs) have been introduced to North America, and 96% of herbivores found during a night census at our California Central Valley site were introduced generalists. We explored the role of these herbivores in the distribution, survivorship, and growth of 12 native and introduced plant species from six families. We predicted that introduced species sharing an evolutionary history with these generalists might be less vulnerable than native plant species. We quantified plant and herbivore abundances within our heterogeneous site and also established herbivore removal experiments in 160 plots spanning the gamut of microhabitats. As 18 collaborators, we checked 2000 seedling sites every day for three weeks to assess nocturnal seedling predation. Laboratory feeding trials allowed us to quantify the palatability of plant species to the two dominant nocturnal herbivores at the site (slugs and earwigs) and allowed us to account for herbivore microhabitat preferences when analyzing attack rates on seedlings. The relationship between local slug abundance and percent cover of five common plant taxa at the field site was significantly negatively associated with the mean palatability of these taxa to slugs in laboratory trials. Moreover, seedling mortality of 12 species in open-field plots was positively correlated with mean palatability of these taxa to both slugs and earwigs in laboratory trials. Counter to expectations, seedlings of native species were neither more vulnerable nor more palatable to nocturnal generalists than those of introduced species. Growth comparison of plants within and outside herbivore exclosures also revealed no differences between native and introduced plant species, despite large impacts of herbivores on growth. Cryptic nocturnal predation on seedlings was common and had large effects on plant establishment at our site. Without intensive monitoring, such predation could easily be misconstrued as poor seedling emergence.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57755,Release from foliar and floral fungal pathogen species does not explain the geographic spread of naturalized North American plants in Europe,S203577,R57756,Continent,R49276,North America,"1 During the last centuries many alien species have established and spread in new regions, where some of them cause large ecological and economic problems. As one of the main explanations of the spread of alien species, the enemy‐release hypothesis is widely accepted and frequently serves as justification for biological control. 2 We used a global fungus–plant host distribution data set for 140 North American plant species naturalized in Europe to test whether alien plants are generally released from foliar and floral pathogens, whether they are mainly released from pathogens that are rare in the native range, and whether geographic spread of the North American plant species in Europe is associated with release from fungal pathogens. 3 We show that the 140 North American plant species naturalized in Europe were released from 58% of their foliar and floral fungal pathogen species. However, when we also consider fungal pathogens of the native North American host range that in Europe so far have only been reported on other plant species, the estimated release is reduced to 10.3%. Moreover, in Europe North American plants have mainly escaped their rare pathogens, of which the impact is restricted to few populations. Most importantly and directly opposing the enemy‐release hypothesis, geographic spread of the alien plants in Europe was negatively associated with their release from fungal pathogens. 4 Synthesis. North American plants may have escaped particular fungal species that control them in their native range, but based on total loads of fungal species, release from foliar and floral fungal pathogens does not explain the geographic spread of North American plant species in Europe. To test whether enemy release is the major driver of plant invasiveness, we urgently require more studies comparing release of invasive and non‐invasive alien species from enemies of different guilds, and studies that assess the actual impact of the enemies.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57803,Testing hypotheses for exotic plant success: parallel experiments in the native and introduced ranges,S203536,R57805,Continent,R49276,North America,"A central question in ecology concerns how some exotic plants that occur at low densities in their native range are able to attain much higher densities where they are introduced. This question has remained unresolved in part due to a lack of experiments that assess factors that affect the population growth or abundance of plants in both ranges. We tested two hypotheses for exotic plant success: escape from specialist insect herbivores and a greater response to disturbance in the introduced range. Within three introduced populations in Montana, USA, and three native populations in Germany, we experimentally manipulated insect herbivore pressure and created small-scale disturbances to determine how these factors affect the performance of houndstongue (Cynoglossum officinale), a widespread exotic in western North America. Herbivores reduced plant size and fecundity in the native range but had little effect on plant performance in the introduced range. Small-scale experimental disturbances enhanced seedling recruitment in both ranges, but subsequent seedling survival was more positively affected by disturbance in the introduced range. We combined these experimental results with demographic data from each population to parameterize integral projection population models to assess how enemy escape and disturbance might differentially influence C. officinale in each range. Model results suggest that escape from specialist insects would lead to only slight increases in the growth rate (lambda) of introduced populations. In contrast, the larger response to disturbance in the introduced vs. native range had much greater positive effects on lambda. These results together suggest that, at least in the regions where the experiments were performed, the differences in response to small disturbances by C. officinale contribute more to higher abundance in the introduced range compared to at home. Despite the challenges of conducting experiments on a wide biogeographic scale and the logistical constraints of adequately sampling populations within a range, this approach is a critical step forward to understanding the success of exotic plants.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57872,Arthropod Communities on Native and Nonnative Early Successional Plants,S203478,R57873,Continent,R49276,North America,"ABSTRACT Early successional ruderal plants in North America include numerous native and nonnative species, and both are abundant in disturbed areas. The increasing presence of nonnative plants may negatively impact a critical component of food web function if these species support fewer or a less diverse arthropod fauna than the native plant species that they displace. We compared arthropod communities on six species of common early successional native plants and six species of nonnative plants, planted in replicated native and nonnative plots in a farm field. Samples were taken twice each year for 2 yr. In most arthropod samples, total biomass and abundance were substantially higher on the native plants than on the nonnative plants. Native plants produced as much as five times more total arthropod biomass and up to seven times more species per 100 g of dry leaf biomass than nonnative plants. Both herbivores and natural enemies (predators and parasitoids) predominated on native plants when analyzed separately. In addition, species richness was about three times greater on native than on nonnative plants, with 83 species of insects collected exclusively from native plants, and only eight species present only on nonnatives. These results support a growing body of evidence suggesting that nonnative plants support fewer arthropods than native plants, and therefore contribute to reduced food resources for higher trophic levels.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57892,Does enemy loss cause release? A biogeographical comparison of parasitoid effects on an introduced insect,S203474,R57894,Continent,R49276,North America,"The loss of natural enemies is a key feature of species introductions and is assumed to facilitate the increased success of species in new locales (enemy release hypothesis; ERH). The ERH is rarely tested experimentally, however, and is often assumed from observations of enemy loss. We provide a rigorous test of the link between enemy loss and enemy release by conducting observational surveys and an in situ parasitoid exclusion experiment in multiple locations in the native and introduced ranges of a gall-forming insect, Neuroterus saltatorius, which was introduced poleward, within North America. Observational surveys revealed that the gall-former experienced increased demographic success and lower parasitoid attack in the introduced range. Also, a different composition of parasitoids attacked the gall-former in the introduced range. These observational results show that enemies were lost and provide support for the ERH. Experimental results, however, revealed that, while some enemy release occurred, it was not the sole driver of demographic success. This was because background mortality in the absence of enemies was higher in the native range than in the introduced range, suggesting that factors other than parasitoids limit the species in its native range and contribute to its success in its introduced range. Our study demonstrates the importance of measuring the effect of enemies in the context of other community interactions in both ranges to understand what factors cause the increased demographic success of introduced species. This case also highlights that species can experience very different dynamics when introduced into ecologically similar communities.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57920,Invasive plants escape from suppressive soil biota at regional scales,S203443,R57921,Continent,R49276,North America,"A prominent hypothesis for plant invasions is escape from the inhibitory effects of soil biota. Although the strength of these inhibitory effects, measured as soil feedbacks, has been assessed between natives and exotics in non‐native ranges, few studies have compared the strength of plant–soil feedbacks for exotic species in soils from non‐native versus native ranges. We examined whether 6 perennial European forb species that are widespread invaders in North American grasslands (Centaurea stoebe, Euphorbia esula, Hypericum perforatum, Linaria vulgaris, Potentilla recta and Leucanthemum vulgare) experienced different suppressive effects of soil biota collected from 21 sites across both ranges. Four of the six species tested exhibited substantially reduced shoot biomass in ‘live’ versus sterile soil from Europe. In contrast, North American soils produced no significant feedbacks on any of the invasive species tested indicating a broad scale escape from the inhibitory effects of soil biota. Negative feedbacks generated by European soil varied idiosyncratically among sites and species. Since this variation did not correspond with the presence of the target species at field sites, it suggests that negative feedbacks can be generated from soil biota that are widely distributed in native ranges in the absence of density‐dependent effects. Synthesis. Our results show that for some invasives, native soils have strong suppressive potential, whereas this is not the case in soils from across the introduced range. Differences in regional‐scale evolutionary history among plants and soil biota could ultimately help explain why some exotics are able to occur at higher abundance in the introduced versus native range.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57971,"Insect assemblages associated with the exotic riparian shrub Russian olive (Elaeagnaceae), and co-occurring native shrubs in British Columbia, Canada",S203415,R57972,Continent,R49276,North America,"Russian olive (Elaeagnus angustifolia Linnaeus; Elaeagnaceae) is an exotic shrub/tree that has become invasive in many riparian ecosystems throughout semi-arid, western North America, including southern British Columbia, Canada. Despite its prevalence and the potentially dramatic impacts it can have on riparian and aquatic ecosystems, little is known about the insect communities associated with Russian olive within its invaded range. At six sites throughout the Okanagan valley of southern British Columbia, Canada, we compared the diversity of insects associated with Russian olive plants to that of insects associated with two commonly co-occurring native plant species: Woods’ rose (Rosa woodsii Lindley; Rosaceae) and Saskatoon (Amelanchier alnifolia (Nuttall) Nuttall ex Roemer; Rosaceae). Total abundance did not differ significantly among plant types. Family richness and Shannon diversity differed significantly between Woods’ rose and Saskatoon, but not between either of these plant types and Russian olive. An abundance of Thripidae (Thysanoptera) on Russian olive and Tingidae (Hemiptera) on Saskatoon contributed to significant compositional differences among plant types. The families Chloropidae (Diptera), Heleomyzidae (Diptera), and Gryllidae (Orthoptera) were uniquely associated with Russian olive, albeit in low abundances. Our study provides valuable and novel information about the diversity of insects associated with an emerging plant invader of western Canada.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57994,Can enemy release explain the invasion success of the diploid Leucanthemum vulgare in North America?,S203412,R57995,Continent,R49276,North America,"Enemy release is a commonly accepted mechanism to explain plant invasions. Both the diploid Leucanthemum vulgare and the morphologically very similar tetraploid Leucanthemum ircutianum have been introduced into North America. To verify which species is more prevalent in North America we sampled 98 Leucanthemum populations and determined their ploidy level. Although polyploidy has repeatedly been proposed to be associated with increased invasiveness in plants, only two of the populations surveyed in North America were the tetraploid L. ircutianum. We tested the enemy release hypothesis by first comparing 20 populations of L. vulgare and 27 populations of L. ircutianum in their native range in Europe, and then comparing the European L. vulgare populations with 31 L. vulgare populations sampled in North America. Characteristics of the site and associated vegetation, plant performance and invertebrate herbivory were recorded. In Europe, plant height and density of the two species were similar but L. vulgare produced more flower heads than L. ircutianum. Leucanthemum vulgare in North America was 17 % taller, produced twice as many flower heads and grew much denser compared to L. vulgare in Europe. Attack rates by root- and leaf-feeding herbivores on L. vulgare in Europe (34 and 75 %) was comparable to that on L. ircutianum (26 and 71 %) but higher than that on L. vulgare in North America (10 and 3 %). However, herbivore load and leaf damage were low in Europe. Cover and height of the co-occurring vegetation was higher in L. vulgare populations in the native than in the introduced range, suggesting that a shift in plant competition may more easily explain the invasion success of L. vulgare than escape from herbivory.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54638,Exotic invasive species in urban wetlands: environmental correlates and implications for wetland management,S172451,R54639,Measure of invasion success,L106059,Number of exotic species,"Summary 1. Wetlands in urban regions are subjected to a wide variety of anthropogenic disturbances, many of which may promote invasions of exotic plant species. In order to devise management strategies, the influence of different aspects of the urban and natural environments on invasion and community structure must be understood. 2. The roles of soil variables, anthropogenic effects adjacent to and within the wetlands, and vegetation structure on exotic species occurrence within 21 forested wetlands in north-eastern New Jersey, USA, were compared. The hypotheses were tested that different vegetation strata and different invasive species respond similarly to environmental factors, and that invasion increases with increasing direct human impact, hydrologic disturbance, adjacent residential land use and decreasing wetland area. Canonical correspondence analyses, correlation and logistic regression analyses were used to examine invasion by individual species and overall site invasion, as measured by the absolute and relative number of exotic species in the site flora. 3. Within each stratum, different sets of environmental factors separated exotic and native species. Nutrients, soil clay content and pH, adjacent land use and canopy composition were the most frequently identified factors affecting species, but individual species showed highly individualistic responses to the sets of environmental variables, often responding in opposite ways to the same factor. 4. Overall invasion increased with decreasing area but only when sites > 100 ha were included. Unexpectedly, invasion decreased with increasing proportions of industrial/commercial adjacent land use. 5. The hypotheses were only partially supported; invasion does not increase in a simple way with increasing human presence and disturbance. 6. Synthesis and applications. The results suggest that a suite of environmental conditions can be identified that are associated with invasion into urban wetlands, which can be widely used for assessment and management. However, a comprehensive ecosystem approach is needed that places the remediation of physical alterations from urbanization within a landscape context. Specifically, sediment inputs and hydrologic changes need to be related to adjoining urban land use and to the overlapping requirements of individual native and exotic species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57219,Ecological filtering of exotic plants in an Australian sub-alpine environment,S195130,R57220,Measure of resistance/susceptibility,L122369,Number of exotic species,"We investigated some of the factors influencing exotic invasion of native sub-alpine plant communities at a site in southeast Australia. Structure, floristic composition and invasibility of the plant communities and attributes of the invasive species were studied. To determine the plant characteristics correlated with invasiveness, we distinguished between roadside invaders, native community invaders and non-invasive exotic species, and compared these groups across a range of traits including functional group, taxonomic affinity, life history, mating system and morphology. Poa grasslands and Eucalyptus-Poa woodlands contained the largest number of exotic species, although all communities studied appeared resilient to invasion by most species. Most community invaders were broad-leaved herbs while roadside invaders contained both herbs and a range of grass species. Over the entire study area the richness and cover of native and exotic herbaceous species were positively related, but exotic herbs were more negatively related to cover of specific functional groups (e.g. trees) than native herbs. Compared with the overall pool of exotic species, those capable of invading native plant communities were disproportionately polycarpic, Asteracean and cross-pollinating. Our data support the hypothesis that strong ecological filtering of exotic species generates an exotic assemblage containing few dominant species and which functionally converges on the native assemblage. These findings contrast with those observed in the majority of invaded natural systems. We conclude that the invasion of closed sub-alpine communities must be viewed in terms of the unique attributes of the invading species, the structure and composition of the invaded communities and the strong extrinsic physical and climatic factors typical of the sub-alpine environment. Nomenclature: Australian Plant Name Index (APNI); http://www.anbg.gov.au/cgi-bin/apni Abbreviations: KNP = Kosciuszko National Park; MRPP = Multi response permutation procedure; VE = Variance explained.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57367,Exotic plant species invade hot spots of native plant diversity,S196824,R57368,Measure of resistance/susceptibility,L123767,Number of exotic species,"Some theories and experimental studies suggest that areas of low plant species richness may be invaded more easily than areas of high plant species richness. We gathered nested-scale vegetation data on plant species richness, foliar cover, and frequency from 200 1-m² subplots (20 1000-m² modified-Whittaker plots) in the Colorado Rockies (USA), and 160 1-m² subplots (16 1000-m² plots) in the Central Grasslands in Colorado, Wyoming, South Dakota, and Minnesota (USA) to test the generality of this paradigm. At the 1-m² scale, the paradigm was supported in four prairie types in the Central Grasslands, where exotic species richness declined with increasing plant species richness and cover. At the 1-m² scale, five forest and meadow vegetation types in the Colorado Rockies contradicted the paradigm; exotic species richness increased with native-plant species richness and foliar cover. At the 1000-m² plot scale (among vegetation types), 83% of the variance in exotic species richness in the Central Grasslands was explained by the total percentage of nitrogen in the soil and the cover of native plant species. In the Colorado Rockies, 69% of the variance in exotic species richness in 1000-m² plots was explained by the number of native plant species and the total percentage of soil carbon. At landscape and biome scales, exotic species primarily invaded areas of high species richness in the four Central Grasslands sites and in the five Colorado Rockies vegetation types. For the nine vegetation types in both biomes, exotic species cover was positively correlated with mean foliar cover, mean soil percentage N, and the total number of exotic species. These patterns of invasibility depend on spatial scale, biome and vegetation type, spatial autocorrelation effects, availability of resources, and species-specific responses to grazing and other disturbances. We conclude that: (1) sites high in herbaceous foliar cover and soil fertility, and hot spots of plant diversity (and biodiversity), are invasible in many landscapes; and (2) this pattern may be more closely related to the degree resources are available in native plant communities, independent of species richness. Exotic plant invasions in rare habitats and distinctive plant communities pose a significant challenge to land managers and conservation biologists.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54707,Do biodiversity and human impact influence the introduction or establishment of alien mammals?,S195452,R57247,Measure of native biodiversity,L122637,Number of native species,"What determines the number of alien species in a given region? ‘Native biodiversity’ and ‘human impact’ are typical answers to this question. Indeed, studies comparing different regions have frequently found positive relationships between number of alien species and measures of both native biodiversity (e.g. the number of native species) and human impact (e.g. human population). These relationships are typically explained by biotic acceptance or resistance, i.e. by influence of native biodiversity and human impact on the second step of the invasion process, establishment. The first step of the invasion process, introduction, has often been ignored. Here we investigate whether relationships between number of alien mammals and native biodiversity or human impact in 43 European countries are mainly shaped by differences in number of introduced mammals or establishment success. Our results suggest that correlation between number of native and established mammals is spurious, as it is simply explainable by the fact that both quantities are linked to country area. We also demonstrate that countries with higher human impact host more alien mammals than other countries because they received more introductions than other countries. Differences in number of alien mammals cannot be explained by differences in establishment success. Our findings highlight importance of human activities and question, at least for mammals in Europe, importance of biotic acceptance and resistance.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57185,Darwin's naturalization conundrum: dissecting taxonomic patterns of species invasions,S194745,R57186,Measure of native biodiversity,L122052,Number of native species,"Darwin acknowledged contrasting, plausible arguments for how species invasions are influenced by phylogenetic relatedness to the native community. These contrasting arguments persist today without clear resolution. Using data on the naturalization and abundance of exotic plants in the Auckland region, we show how different expectations can be accommodated through attention to scale, assumptions about niche overlap, and stage of invasion. Probability of naturalization was positively related to the number of native species in a genus but negatively related to native congener abundance, suggesting the importance of both niche availability and biotic resistance. Once naturalized, however, exotic abundance was not related to the number of native congeners, but positively related to native congener abundance. Changing the scale of analysis altered this outcome: within habitats exotic abundance was negatively related to native congener abundance, implying that native and exotic species respond similarly to broad scale environmental variation across habitats, with biotic resistance occurring within habitats.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54072,"Phenotypic Plasticity Influences the Size, Shape and Dynamics of the Geographic Distribution of an Invasive Plant",S165750,R54073,Species name,L100604,Parkinsonia aculeata,"Phenotypic plasticity has long been suspected to allow invasive species to expand their geographic range across large-scale environmental gradients. We tested this possibility in Australia using a continental scale survey of the invasive tree Parkinsonia aculeata (Fabaceae) in twenty-three sites distributed across four climate regions and three habitat types. Using tree-level responses, we detected a trade-off between seed mass and seed number across the moisture gradient. Individual trees plastically and reversibly produced many small seeds at dry sites or years, and few big seeds at wet sites and years. Bigger seeds were positively correlated with higher seed and seedling survival rates. The trade-off, the relation between seed mass, seed and seedling survival, and other fitness components of the plant life-cycle were integrated within a matrix population model. The model confirms that the plastic response resulted in average fitness benefits across the life-cycle. Plasticity resulted in average fitness being positively maintained at the wet and dry range margins where extinction risks would otherwise have been high (“Jack-of-all-Trades” strategy JT), and fitness being maximized at the species range centre where extinction risks were already low (“Master-of-Some” strategy MS). The resulting hybrid “Jack-and-Master” strategy (JM) broadened the geographic range and amplified average fitness in the range centre. Our study provides the first empirical evidence for a JM species. It also confirms mechanistically the importance of phenotypic plasticity in determining the size, the shape and the dynamic of a species distribution. The JM allows rapid and reversible phenotypic responses to new or changing moisture conditions at different scales, providing the species with definite advantages over genetic adaptation when invading diverse and variable environments. Furthermore, natural selection pressure acting on phenotypic plasticity is predicted to result in maintenance of the JT and strengthening of the MS, further enhancing the species invasiveness in its range centre.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54214,"Phenotypic plasticity, precipitation, and invasiveness in the fire-promoting grass Pennisetum setaceum (poaceae)",S167414,R54215,Species name,L101984,Pennisetum setaceum,"Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54014,"Native jewelweed, but not other native species, displays post-invasion trait divergence",S165075,R54015,hypothesis,L100045,Phenotypic plasticity,"Invasive exotic plants reduce the diversity of native communities by displacing native species. According to the coexistence theory, native plants are able to coexist with invaders only when their fitness is not significantly smaller than that of the exotics or when they occupy a different niche. It has therefore been hypothesized that the survival of some native species at invaded sites is due to post-invasion evolutionary changes in fitness and/or niche traits. In common garden experiments, we tested whether plants from invaded sites of two native species, Impatiens noli-tangere and Galeopsis speciosa, outperform conspecifics from non-invaded sites when grown in competition with the invader (Impatiens parviflora). We further examined whether the expected superior performance of the plants from the invaded sites is due to changes in the plant size (fitness proxy) and/or changes in the germination phenology and phenotypic plasticity (niche proxies). Invasion history did not influence the performance of any native species when grown with the exotic competitor. In I. noli-tangere, however, we found significant trait divergence with regard to plant size, germination phenology and phenotypic plasticity. In the absence of a competitor, plants of I. noli-tangere from invaded sites were larger than plants from non-invaded sites. The former plants germinated earlier than inexperienced conspecifics or an exotic congener. Invasion experience was also associated with increased phenotypic plasticity and an improved shade-avoidance syndrome. Although these changes indicate fitness and niche differentiation of I. noli-tangere at invaded sites, future research should examine more closely the adaptive value of these changes and their genetic basis.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54028,Jack-of-all-trades: phenotypic plasticity facilitates the invasion of an alien slug species,S165234,R54029,hypothesis,L100176,Phenotypic plasticity,"Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (‘robust’), (ii) increase fitness in favourable environments (‘opportunistic’), or (iii) combine both abilities (‘robust and opportunistic’). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus, and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus, and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700–2400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production. During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus, representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus. We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54034,Norway maple displays greater seasonal growth and phenotypic plasticity to light than native sugar maple,S165310,R54035,hypothesis,L100240,Phenotypic plasticity,"Norway maple (Acer platanoides L), which is among the most invasive tree species in forests of eastern North America, is associated with reduced regeneration of the related native species, sugar maple (Acer saccharum Marsh) and other native flora. To identify traits conferring an advantage to Norway maple, we grew both species through an entire growing season under simulated light regimes mimicking a closed forest understorey vs. a canopy disturbance (gap). Dynamic shade-houses providing a succession of high-intensity direct-light events between longer periods of low, diffuse light were used to simulate the light regimes. We assessed seedling height growth three times in the season, as well as stem diameter, maximum photosynthetic capacity, biomass allocation above- and below-ground, seasonal phenology and phenotypic plasticity. Given the north European provenance of Norway maple, we also investigated the possibility that its growth in North America might be increased by delayed fall senescence. We found that Norway maple had significantly greater photosynthetic capacity in both light regimes and grew larger in stem diameter than sugar maple. The differences in below- and above-ground biomass, stem diameter, height and maximum photosynthesis were especially important in the simulated gap where Norway maple continued extension growth during the late fall. In the gap regime sugar maple had a significantly higher root : shoot ratio that could confer an advantage in the deepest shade of closed understorey and under water stress or browsing pressure. Norway maple is especially invasive following canopy disturbance where the opposite (low root : shoot ratio) could confer a competitive advantage. Considering the effects of global change in extending the potential growing season, we anticipate that the invasiveness of Norway maple will increase in the future.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54036,Latitudinal Patterns in Phenotypic Plasticity and Fitness-Related Traits: Assessing the Climatic Variability Hypothesis (CVH) with an Invasive Plant Species,S165332,R54037,hypothesis,L100258,Phenotypic plasticity,"Phenotypic plasticity has been suggested as the main mechanism for species persistence under a global change scenario, and also as one of the main mechanisms that alien species use to tolerate and invade broad geographic areas. However, contrasting with this central role of phenotypic plasticity, standard models aimed to predict the effect of climatic change on species distributions do not allow for the inclusion of differences in plastic responses among populations. In this context, the climatic variability hypothesis (CVH), which states that higher thermal variability at higher latitudes should determine an increase in phenotypic plasticity with latitude, could be considered a timely and promising hypothesis. Accordingly, in this study we evaluated, for the first time in a plant species (Taraxacum officinale), the prediction of the CVH. Specifically, we measured plastic responses at different environmental temperatures (5 and 20°C), in several ecophysiological and fitness-related traits for five populations distributed along a broad latitudinal gradient. Overall, phenotypic plasticity increased with latitude for all six traits analyzed, and mean trait values increased with latitude at both experimental temperatures, the change was noticeably greater at 20° than at 5°C. Our results suggest that the positive relationship found between phenotypic plasticity and geographic latitude could have very deep implications on future species persistence and invasion processes under a scenario of climate change.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54040,Architectural strategies of Rhamnus cathartica (Rhamnaceae) in relation to canopy openness,S165380,R54041,hypothesis,L100298,Phenotypic plasticity,"While phenotypic plasticity is considered the major means that allows plant to cope with environmental heterogeneity, scant information is available on phenotypic plasticity of the whole-plant architecture in relation to ontogenic processes. We performed an architectural analysis to gain an understanding of the structural and ontogenic properties of common buckthorn (Rhamnus cathartica L., Rhamnaceae) growing in the understory and under an open canopy. We found that ontogenic effects on growth need to be calibrated if a full description of phenotypic plasticity is to be obtained. Our analysis pointed to three levels of organization (or nested structural units) in R. cathartica. Their modulation in relation to light conditions leads to the expression of two architectural strategies that involve sets of traits known to confer competitive advantage in their respective environments. In the understory, the plant develops a tree-like form. Its strategy here is based on restricting investment in exploitation str...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54046,Phenotypic Plasticity and Population Differentiation in an Ongoing Species Invasion,S165452,R54047,hypothesis,L100358,Phenotypic plasticity,"The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54052,Light Response of Native and Introduced Miscanthus sinensis Seedlings,S165522,R54053,hypothesis,L100416,Phenotypic plasticity,"The Asian grass Miscanthus sinensis (Poaceae) is being considered for use as a bioenergy crop in the U.S. Corn Belt. Originally introduced to the United States for ornamental plantings, it escaped, forming invasive populations. The concern is that naturalized M. sinensis populations have evolved shade tolerance. We tested the hypothesis that seedlings from within the invasive U.S. range of M. sinensis would display traits associated with shade tolerance, namely increased area for light capture and phenotypic plasticity, compared with seedlings from the native Japanese populations. In a common garden experiment, seedlings of 80 half-sib maternal lines were grown from the native range (Japan) and 60 half-sib maternal lines from the invasive range (U.S.) under four light levels. Seedling leaf area, leaf size, growth, and biomass allocation were measured on the resulting seedlings after 12 wk. Seedlings from both regions responded strongly to the light gradient. High light conditions resulted in seedlings with greater leaf area, larger leaves, and a shift to greater belowground biomass investment, compared with shaded seedlings. Japanese seedlings produced more biomass and total leaf area than U.S. seedlings across all light levels. Generally, U.S. and Japanese seedlings allocated a similar amount of biomass to foliage and equal leaf area per leaf mass. Subtle differences in light response by region were observed for total leaf area, mass, growth, and leaf size. U.S. seedlings had slightly higher plasticity for total mass and leaf area but lower plasticity for measures of biomass allocation and leaf traits compared with Japanese seedlings. Our results do not provide general support for the hypothesis of increased M. sinensis shade tolerance within its introduced U.S. range compared with native Japanese populations. Nomenclature: Eulaliagrass; Miscanthus sinensis Anderss. Management Implications: Eulaliagrass (Miscanthus sinensis), an Asian species under consideration for biomass production in the Midwest, has escaped ornamental plantings in the United States to form naturalized populations. Evidence suggests that U.S. populations are able to tolerate relatively shady conditions, but it is unclear whether U.S. populations have greater shade tolerance than the relatively shade-intolerant populations within the species' native range in Asia. Increased shade tolerance could result in a broader range of invaded light environments within the introduced range of M. sinensis. However, results from our common garden experiment do not support the hypothesis of increased shade tolerance in introduced U.S. populations compared with seedlings from native Asian populations. Our results do demonstrate that for both U.S. and Japanese populations under low light conditions, M. sinensis seeds germinate and seedlings gain mass and leaf area; therefore, land managers should carefully monitor or eradicate M. sinensis within these habitats.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54054,Phenotypic Plasticity in the Invasion of Crofton Weed (Eupatorium adenophorum) in China,S165547,R54055,hypothesis,L100437,Phenotypic plasticity,"Phenotypic plasticity and rapid evolution are two important strategies by which invasive species adapt to a wide range of environments and consequently are closely associated with plant invasion. To test their importance in invasion success of Crofton weed, we examined the phenotypic response and genetic variation of the weed by conducting a field investigation, common garden experiments, and intersimple sequence repeat (ISSR) marker analysis on 16 populations in China. Molecular markers revealed low genetic variation among and within the sampled populations. There were significant differences in leaf area (LA), specific leaf area (SLA), and seed number (SN) among field populations, and plasticity index (PIv) for LA, SLA, and SN were 0.62, 0.46 and 0.85, respectively. Regression analyses revealed a significant quadratic effect of latitude of population origin on LA, SLA, and SN based on field data but not on traits in the common garden experiments (greenhouse and open air). Plants from different populations showed similar reaction norms across the two common gardens for functional traits. LA, SLA, aboveground biomass, plant height at harvest, first flowering day, and life span were higher in the greenhouse than in the open-air garden, whereas SN was lower. Growth conditions (greenhouse vs. open air) and the interactions between growth condition and population origin significantly affect plant traits. The combined evidence suggests high phenotypic plasticity but low genetically based variation for functional traits of Crofton weed in the invaded range. Therefore, we suggest that phenotypic plasticity is the primary strategy for Crofton weed as an aggressive invader that can adapt to diverse environments in China.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54056,"Seasonal Photoperiods Alter Developmental Time and Mass of an Invasive Mosquito, Aedes albopictus (Diptera: Culicidae), Across Its North-South Range in the United States",S165570,R54057,hypothesis,L100456,Phenotypic plasticity,"ABSTRACT The Asian tiger mosquito, Aedes albopictus (Skuse), is perhaps the most successful invasive mosquito species in contemporary history. In the United States, Ae. albopictus has spread from its introduction point in southern Texas to as far north as New Jersey (i.e., a span of ≈14° latitude). This species experiences seasonal constraints in activity because of cold temperatures in winter in the northern United States, but is active year-round in the south. We performed a laboratory experiment to examine how life-history traits of Ae. albopictus from four populations (New Jersey [39.4° N], Virginia [38.6° N], North Carolina [35.8° N], Florida [27.6° N]) responded to photoperiod conditions that mimic approaching winter in the north (short static daylength, short diminishing daylength) or relatively benign summer conditions in the south (long daylength), at low and high larval densities. Individuals from northern locations were predicted to exhibit reduced development times and to emerge smaller as adults under short daylength, but be larger and take longer to develop under long daylength. Life-history traits of southern populations were predicted to show less plasticity in response to daylength because of low probability of seasonal mortality in those areas. Males and females responded strongly to photoperiod regardless of geographic location, being generally larger but taking longer to develop under the long daylength compared with short day lengths; adults of both sexes were smaller when reared at low larval densities. Adults also differed in mass and development time among locations, although this effect was independent of density and photoperiod in females but interacted with density in males. Differences between male and female mass and development times was greater in the long photoperiod suggesting differences between the sexes in their reaction to different photoperiods. This work suggests that Ae. albopictus exhibits sex-specific phenotypic plasticity in life-history traits matching variation in important environmental variables.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54060,"Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations",S165614,R54061,hypothesis,L100492,Phenotypic plasticity,"Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252–259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54062,Shell morphology and relative growth variability of the invasive pearl oyster Pinctada radiata in coastal Tunisia,S165639,R54063,hypothesis,L100513,Phenotypic plasticity,"The variability of shell morphology and relative growth of the invasive pearl oyster Pinctada radiata was studied within and among ten populations from coastal Tunisia using discriminant tests. Therefore, 12 morphological characters were examined and 34 metric and weight ratios were defined. In addition to the classic morphological characters, populations were compared by the thickness of the nacreous layer. Results of Duncan's multiple comparison test showed that the most discriminative ratios were the width of nacreous layer of right valve to the inflation of shell, the hinge line length to the maximum width of shell and the nacre thickness to the maximum width of shell. The analysis of variance revealed an important inter-population morphological variability. Both multidimensional scaling analysis and the squared Mahalanobis distances (D2) of metric ratios divided Tunisian P. radiata populations into four biogeographical groupings: the north coast (La Marsa); harbours (Hammamet, Monastir and Zarzis); the Gulf of Gabès (Sfax, Kerkennah Island, Maharès, Skhira and Djerba) and the intertidal area (Ajim). However, the Kerkennah Island population was discriminated by the squared Mahalanobis distances (D2) of weight ratios in an isolated group suggesting particular trophic conditions in this area. The allometric study revealed high linear correlation between shell morphological characters and differences in allometric growth among P. radiata populations. Unlike the morphological discrimination, allometric differentiation shows no clear geographical distinction. This study revealed that the pearl oyster P. radiata exhibited considerable phenotypic plasticity related to differences of environmental and/or ecological conditions along Tunisian coasts and highlighted the discriminative character of the nacreous layer thickness parameter.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54064,Plastic Traits of an Exotic Grass Contribute to Its Abundance but Are Not Always Favourable,S165662,R54065,hypothesis,L100532,Phenotypic plasticity,"In herbaceous ecosystems worldwide, biodiversity has been negatively impacted by changed grazing regimes and nutrient enrichment. Altered disturbance regimes are thought to favour invasive species that have a high phenotypic plasticity, although most studies measure plasticity under controlled conditions in the greenhouse and then assume plasticity is an advantage in the field. Here, we compare trait plasticity between three co-occurring, C4 perennial grass species, an invader Eragrostis curvula, and natives Eragrostis sororia and Aristida personata to grazing and fertilizer in a three-year field trial. We measured abundances and several leaf traits known to correlate with strategies used by plants to fix carbon and acquire resources, i.e. specific leaf area (SLA), leaf dry matter content (LDMC), leaf nutrient concentrations (N, C∶N, P), assimilation rates (Amax) and photosynthetic nitrogen use efficiency (PNUE). In the control treatment (grazed only), trait values for SLA, leaf C∶N ratios, Amax and PNUE differed significantly between the three grass species. When trait values were compared across treatments, E. curvula showed higher trait plasticity than the native grasses, and this correlated with an increase in abundance across all but the grazed/fertilized treatment. The native grasses showed little trait plasticity in response to the treatments. Aristida personata decreased significantly in the treatments where E. curvula increased, and E. sororia abundance increased possibly due to increased rainfall and not in response to treatments or invader abundance. Overall, we found that plasticity did not favour an increase in abundance of E. curvula under the grazed/fertilized treatment, likely because leaf nutrient contents increased and subsequently its palatability to consumers. E. curvula also displayed a higher resource use efficiency than the native grasses. These findings suggest resource conditions and disturbance regimes can be manipulated to disadvantage the success of even plastic exotic species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54070,Phenotypic variation of an alien species in a new environment: the body size and diet of American mink over time and at local and continental scales,S165728,R54071,hypothesis,L100586,Phenotypic plasticity,"Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. © 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681–693.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54072,"Phenotypic Plasticity Influences the Size, Shape and Dynamics of the Geographic Distribution of an Invasive Plant",S165752,R54073,hypothesis,L100606,Phenotypic plasticity,"Phenotypic plasticity has long been suspected to allow invasive species to expand their geographic range across large-scale environmental gradients. We tested this possibility in Australia using a continental scale survey of the invasive tree Parkinsonia aculeata (Fabaceae) in twenty-three sites distributed across four climate regions and three habitat types. Using tree-level responses, we detected a trade-off between seed mass and seed number across the moisture gradient. Individual trees plastically and reversibly produced many small seeds at dry sites or years, and few big seeds at wet sites and years. Bigger seeds were positively correlated with higher seed and seedling survival rates. The trade-off, the relation between seed mass, seed and seedling survival, and other fitness components of the plant life-cycle were integrated within a matrix population model. The model confirms that the plastic response resulted in average fitness benefits across the life-cycle. Plasticity resulted in average fitness being positively maintained at the wet and dry range margins where extinction risks would otherwise have been high (“Jack-of-all-Trades” strategy JT), and fitness being maximized at the species range centre where extinction risks were already low (“Master-of-Some” strategy MS). The resulting hybrid “Jack-and-Master” strategy (JM) broadened the geographic range and amplified average fitness in the range centre. Our study provides the first empirical evidence for a JM species. It also confirms mechanistically the importance of phenotypic plasticity in determining the size, the shape and the dynamic of a species distribution. The JM allows rapid and reversible phenotypic responses to new or changing moisture conditions at different scales, providing the species with definite advantages over genetic adaptation when invading diverse and variable environments. Furthermore, natural selection pressure acting on phenotypic plasticity is predicted to result in maintenance of the JT and strengthening of the MS, further enhancing the species invasiveness in its range centre.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54074,Phenotypic divergence of exotic fish populations is shaped by spatial proximity and habitat differences across an invaded landscape,S165773,R54075,hypothesis,L100623,Phenotypic plasticity,"Background: Brown trout (Salmo trutta) were introduced into, and subsequently colonized, a number of disparate watersheds on the island of Newfoundland, Canada (110,638 km 2 ), starting in 1883. Questions: Do environmental features of recently invaded habitats shape population-level phenotypic variability? Are patterns of phenotypic variability suggestive of parallel adaptive divergence? And does the extent of phenotypic divergence increase as a function of distance between populations? Hypotheses: Populations that display similar phenotypes will inhabit similar environments. Patterns in morphology, coloration, and growth in an invasive stream-dwelling fish should be consistent with adaptation, and populations closer to each other should be more similar than should populations that are farther apart. Organism and study system: Sixteen brown trout populations of probable common descent, inhabiting a gradient of environments. These populations include the most ancestral (∼130 years old) and most recently established (∼20 years old). Analytical methods: We used multivariate statistical techniques to quantify morphological (e.g. body shape via geometric morphometrics and linear measurements of traits), meristic (e.g. counts of pigmentation spots), and growth traits from 1677 individuals. To account for ontogenetic and allometric effects on morphology, we conducted separate analyses on three distinct size/age classes. We used the BIO-ENV routine and Mantel tests to measure the correlation between phenotypic and habitat features. Results: Phenotypic similarity was significantly correlated with environmental similarity, especially in the larger size classes of fish. The extent to which these associations between phenotype and habitat result from parallel evolution, adaptive phenotypic plasticity, or historical founder effects is not known. Observed patterns of body shape and fin sizes were generally consistent with predictions of adaptive trait patterns, but other traits showed less consistent patterns with habitat features. Phenotypic differences increased as a function of straight-line distance (km) between watersheds and to a lesser extent fish dispersal distances, which suggests habitat has played a more significant role in shaping population phenotypes compared with founder effects.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54092,Invasive Microstegium populations consistently outperform native range populations across diverse environments,S165985,R54093,hypothesis,L100799,Phenotypic plasticity,"Plant species introduced into novel ranges may become invasive due to evolutionary change, phenotypic plasticity, or other biotic or abiotic mechanisms. Evolution of introduced populations could be the result of founder effects, drift, hybridization, or adaptation to local conditions, which could enhance the invasiveness of introduced species. However, understanding whether the success of invading populations is due to genetic differences between native and introduced populations may be obscured by origin x environment interactions. That is, studies conducted under a limited set of environmental conditions may show inconsistent results if native or introduced populations are differentially adapted to specific conditions. We tested for genetic differences between native and introduced populations, and for origin x environment interactions, between native (China) and introduced (U.S.) populations of the invasive annual grass Microstegium vimineum (stiltgrass) across 22 common gardens spanning a wide range of habitats and environmental conditions. On average, introduced populations produced 46% greater biomass and had 7.4% greater survival, and outperformed native range populations in every common garden. However, we found no evidence that introduced Microstegium exhibited greater phenotypic plasticity than native populations. Biomass of Microstegium was positively correlated with light and resident community richness and biomass across the common gardens. However, these relationships were equivalent for native and introduced populations, suggesting that the greater mean performance of introduced populations is not due to unequal responses to specific environmental parameters. Our data on performance of invasive and native populations suggest that post-introduction evolutionary changes may have enhanced the invasive potential of this species. Further, the ability of Microstegium to survive and grow across the wide variety of environmental conditions demonstrates that few habitats are immune to invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54094,Multispecies comparison reveals that invasive and native plants differ in their traits but not in their plasticity,S166008,R54095,hypothesis,L100818,Phenotypic plasticity,"Summary 1. Plastic responses to spatiotemporal environmental variation strongly influence species distribution, with widespread species expected to have high phenotypic plasticity. Theoretically, high phenotypic plasticity has been linked to plant invasiveness because it facilitates colonization and rapid spreading over large and environmentally heterogeneous new areas. 2. To determine the importance of phenotypic plasticity for plant invasiveness, we compare well-known exotic invasive species with widespread native congeners. First, we characterized the phenotype of 20 invasive–native ecologically and phylogenetically related pairs from the Mediterranean region by measuring 20 different traits involved in resource acquisition, plant competition ability and stress tolerance. Second, we estimated their plasticity across nutrient and light gradients. 3. On average, invasive species had greater capacity for carbon gain and enhanced performance over a range of limiting to saturating resource availabilities than natives. However, both groups responded to environmental variations with high albeit similar levels of trait plasticity. Therefore, contrary to the theory, the extent of phenotypic plasticity was not significantly higher for invasive plants. 4. We argue that the combination of studying mean values of a trait with its plasticity can render insightful conclusions on functional comparisons of species such as those exploring the performance of species coexisting in heterogeneous and changing environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54106,High temperature tolerance and thermal plasticity in emerald ash borer Agrilus planipennis,S166151,R54107,hypothesis,L100937,Phenotypic plasticity,"1 The emerald ash borer Agrilus planipennis (Coleoptera: Buprestidae) (EAB), an invasive wood‐boring beetle, has recently caused significant losses of native ash (Fraxinus spp.) trees in North America. Movement of wood products has facilitated EAB spread, and heat sanitation of wooden materials according to International Standards for Phytosanitary Measures No. 15 (ISPM 15) is used to prevent this. 2 In the present study, we assessed the thermal conditions experienced during a typical heat‐treatment at a facility using protocols for pallet wood treatment under policy PI‐07, as implemented in Canada. The basal high temperature tolerance of EAB larvae and pupae was determined, and the observed heating rates were used to investigate whether the heat shock response and expression of heat shock proteins occurred in fourth‐instar larvae. 3 The temperature regime during heat treatment greatly exceeded the ISPM 15 requirements of 56 °C for 30 min. Emerald ash borer larvae were highly tolerant of elevated temperatures, with some instars surviving exposure to 53 °C without any heat pre‐treatments. High temperature survival was increased by either slow warming or pre‐exposure to elevated temperatures and a recovery regime that was accompanied by up‐regulated hsp70 expression under some of these conditions. 4 Because EAB is highly heat tolerant and exhibits a fully functional heat shock response, we conclude that greater survival than measured in vitro is possible under industry treatment conditions (with the larvae still embedded in the wood). We propose that the phenotypic plasticity of EAB may lead to high temperature tolerance very close to conditions experienced in an ISPM 15 standard treatment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54110,Relatedness predicts phenotypic plasticity in plants better than weediness,S166198,R54111,hypothesis,L100976,Phenotypic plasticity,"Background: Weedy non-native species have long been predicted to be more phenotypically plastic than native species. Question: Are weedy non-native species more plastic than natives? Organisms: Fourteen perennial plant species: Acer platanoides, Acer saccharum, Bromus inermis, Bromus latiglumis, Celastrus orbiculatus, Celastrus scandens, Elymus repens, Elymus trachycaulus, Plantago major, Plantago rugelii, Rosa multiflora, Rosa palustris, Solanum dulcamara, and Solanum carolinense. Field site: Mesic old-field in Dryden, NY (42°27′49″N, 76°26′40″W). Methods: We grew seven pairs of native and non-native plant congeners in the field and tested their responses to reduced competition and the addition of fertilizer. We measured the plasticity of six traits related to growth and leaf palatability (total length, leaf dry mass, maximum relative growth rate, leaf toughness, trichome density, and specific leaf area). Conclusions: Weedy non-native species did not differ consistently from natives in their phenotypic plasticity. Instead, relatedness was a better predictor of plasticity.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54112,VARIATION IN PHENOTYPIC PLASTICITY AMONG NATIVE AND INVASIVE POPULATIONS OF ALLIARIA PETIOLATA,S166220,R54113,hypothesis,L100994,Phenotypic plasticity,"Alliaria petiolata is a Eurasian biennial herb that is invasive in North America and for which phenotypic plasticity has been noted as a potentially important invasive trait. Using four European and four North American populations, we explored variation among populations in the response of a suite of antioxidant, antiherbivore, and morphological traits to the availability of water and nutrients and to jasmonic acid treatment. Multivariate analyses revealed substantial variation among populations in mean levels of these traits and in the response of this suite of traits to environmental variation, especially water availability. Univariate analyses revealed variation in plasticity among populations in the expression of all of the traits measured to at least one of these environmental factors, with the exception of leaf length. There was no evidence for continentally distinct plasticity patterns, but there was ample evidence for variation in phenotypic plasticity among the populations within continents. This implies that A. petiolata has the potential to evolve distinct phenotypic plasticity patterns within populations but that invasive populations are no more plastic than native populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54128,Functional differences in response to drought in the invasive Taraxacum officinale from native and introduced alpine habitat ranges,S166404,R54129,hypothesis,L101146,Phenotypic plasticity,"Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54130,"Morphological differentiation of introduced pikeperch (Sander lucioperca L., 1758) populations in Tunisian freshwaters",S166428,R54131,hypothesis,L101166,Phenotypic plasticity,"Summary In order to evaluate the phenotypic plasticity of introduced pikeperch populations in Tunisia, the intra- and interpopulation differentiation was analysed using a biometric approach. Thus, nine meristic counts and 23 morphological measurements were taken from 574 specimens collected from three dams and a hill lake. The univariate (anova) and multivariate analyses (PCA and DFA) showed a low meristic variability between the pikeperch samples and a segregated pikeperch group from the Sidi Salem dam which displayed a high distance between mouth and pectoral fin and a high antedorsal distance. In addition, the Korba hill lake population seemed to have more important values of total length, eye diameter, maximum body height and a higher distance between mouth and operculum than the other populations. However, the most accentuated segregation was found in the Lebna sample where the individuals were characterized by high snout length, body thickness, pectoral fin length, maximum body height and distance between mouth and operculum. This study shows the existence of morphological differentiations between populations derived from a single gene pool that have been isolated in separated sites for several decades although in relatively similar environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54132,Elevational distribution limits of non-native species: combining observational and experimental evidence,S166452,R54133,hypothesis,L101186,Phenotypic plasticity,"Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54136,Invasion strategies in clonal aquatic plants: are phenotypic differences caused by phenotypic plasticity or local adaptation? ,S166499,R54137,hypothesis,L101225,Phenotypic plasticity,"BACKGROUND AND AIMS The successful spread of invasive plants in new environments is often linked to multiple introductions and a diverse gene pool that facilitates local adaptation to variable environmental conditions. For clonal plants, however, phenotypic plasticity may be equally important. Here the primary adaptive strategy in three non-native, clonally reproducing macrophytes (Egeria densa, Elodea canadensis and Lagarosiphon major) in New Zealand freshwaters were examined and an attempt was made to link observed differences in plant morphology to local variation in habitat conditions. METHODS Field populations with a large phenotypic variety were sampled in a range of lakes and streams with different chemical and physical properties. The phenotypic plasticity of the species before and after cultivation was studied in a common garden growth experiment, and the genetic diversity of these same populations was also quantified. KEY RESULTS For all three species, greater variation in plant characteristics was found before they were grown in standardized conditions. Moreover, field populations displayed remarkably little genetic variation and there was little interaction between habitat conditions and plant morphological characteristics. CONCLUSIONS The results indicate that at the current stage of spread into New Zealand, the primary adaptive strategy of these three invasive macrophytes is phenotypic plasticity. However, while limited, the possibility that genetic diversity between populations may facilitate ecotypic differentiation in the future cannot be excluded. These results thus indicate that invasive clonal aquatic plants adapt to new introduced areas by phenotypic plasticity. Inorganic carbon, nitrogen and phosphorous were important in controlling plant size of E. canadensis and L. major, but no other relationships between plant characteristics and habitat conditions were apparent. This implies that within-species differences in plant size can be explained by local nutrient conditions. All together this strongly suggests that invasive clonal aquatic plants adapt to a wide range of habitats in introduced areas by phenotypic plasticity rather than local adaptation.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54140,Phenotypic plasticity of thermal tolerance contributes to the invasion potential of Mediterranean fruit flies (Ceratitis capitata) ,S166547,R54141,hypothesis,L101265,Phenotypic plasticity,"1. The invasion success of Ceratitis capitata probably stems from physiological, morphological, and behavioural adaptations that enable them to survive in different habitats. However, it is generally poorly understood if variation in acute thermal tolerance and its phenotypic plasticity might be important in facilitating survival of C. capitata upon introduction to novel environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54144,Evolution of dispersal traits along an invasion route in the wind-dispersed Senecio inaequidens (Asteraceae) ,S166592,R54145,hypothesis,L101302,Phenotypic plasticity,"In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54172,Understanding the consequences of seed dispersal in a heterogeneous environment ,S166921,R54173,hypothesis,L101575,Phenotypic plasticity,"Plant distributions are in part determined by environmental heterogeneity on both large (landscape) and small (several meters) spatial scales. Plant populations can respond to environmental heterogeneity via genetic differentiation between large distinct patches, and via phenotypic plasticity in response to heterogeneity occurring at small scales relative to dispersal distance. As a result, the level of environmental heterogeneity experienced across generations, as determined by seed dispersal distance, may itself be under selection. Selection could act to increase or decrease seed dispersal distance, depending on patterns of heterogeneity in environmental quality with distance from a maternal home site. Serpentine soils, which impose harsh and variable abiotic stress on non-adapted plants, have been partially invaded by Erodium cicutarium in northern California, USA. Using nearby grassland sites characterized as either serpentine or non-serpentine, we collected seeds from dense patches of E. cicutarium on both soil types in spring 2004 and subsequently dispersed those seeds to one of four distances from their maternal home site (0, 0.5, 1, or 10 m). We examined distance-dependent patterns of variation in offspring lifetime fitness, conspecific density, soil availability, soil water content, and aboveground grass and forb biomass. ANOVA revealed a distinct fitness peak when seeds were dispersed 0.5 m from their maternal home site on serpentine patches. In non-serpentine patches, fitness was reduced only for seeds placed back into the maternal home site. Conspecific density was uniformly high within 1 m of a maternal home site on both soils, whereas soil water content and grass biomass were significantly heterogeneous among dispersal distances only on serpentine soils. Structural equation modeling and multigroup analysis revealed significantly stronger direct and indirect effects linking abiotic and biotic variation to offspring performance on serpentine soils than on non-serpentine soils, indicating the potential for soil-specific selection on seed dispersal distance in this invasive species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54174,"Growth, water relations, and stomatal development of Caragana korshinskii Kom. and Zygophyllum xanthoxylum (Bunge) Maxim. seedlings in response to water deficits",S166944,R54175,hypothesis,L101594,Phenotypic plasticity,"Abstract The selection and introduction of drought tolerant species is a common method of restoring degraded grasslands in arid environments. This study investigated the effects of water stress on growth, water relations, Na+ and K+ accumulation, and stomatal development in the native plant species Zygophyllum xanthoxylum (Bunge) Maxim., and an introduced species, Caragana korshinskii Kom., under three watering regimes. Moderate drought significantly reduced pre‐dawn water potential, leaf relative water content, total biomass, total leaf area, above‐ground biomass, total number of leaves and specific leaf area, but it increased the root/total weight ratio (0.23 versus 0.33) in C. korshinskii. Only severe drought significantly affected water status and growth in Z. xanthoxylum. In any given watering regime, a significantly higher total biomass was observed in Z. xanthoxylum (1.14 g) compared to C. korshinskii (0.19 g). Moderate drought significantly increased Na+ accumulation in all parts of Z. xanthoxylum, e.g., moderate drought increased leaf Na+ concentration from 1.14 to 2.03 g/100 g DW, however, there was no change in Na+ (0.11 versus 0.12) in the leaf of C. korshinskii when subjected to moderate drought. Stomatal density increased as water availability was reduced in both C. korshinskii and Z. xanthoxylum, but there was no difference in stomatal index of either species. Stomatal length and width, and pore width were significantly reduced by moderate water stress in Z. xanthoxylum, but severe drought was required to produce a significant effect in C. korshinskii. These results indicated that C. korshinskii is more responsive to water stress and exhibits strong phenotypic plasticity especially in above‐ground/below‐ground biomass allocation. In contrast, Z. xanthoxylum was more tolerant to water deficit, with a lower specific leaf area and a strong ability to maintain water status through osmotic adjustment and stomatal closure, thereby providing an effective strategy to cope with local extreme arid environments.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54190,Phenotypic variability in Holcus lanatus L. in southern Chile: a strategy that enhances plant survival and pasture stability,S167133,R54191,hypothesis,L101751,Phenotypic plasticity,"Holcus lanatus L. can colonise a wide range of sites within the naturalised grassland of the Humid Dominion of Chile. The objectives were to determine plant growth mechanisms and strategies that have allowed H. lanatus to colonise contrasting pastures and to determine the existence of ecotypes of H. lanatus in southern Chile. Plants of H. lanatus were collected from four geographic zones of southern Chile and established in a randomised complete block design with four replicates. Five newly emerging tillers were marked per plant and evaluated at the vegetative, pre-ear emergence, complete emerged inflorescence, end of flowering period, and mature seed stages. At each evaluation, one marked tiller was harvested per plant. The variables measured included lamina length and width, tiller height, length of the inflorescence, total number of leaves, and leaf, stem, and inflorescence mass. At each phenological stage, groups of accessions were statistically formed using cluster analysis. The grouping of accessions (cluster analysis) into statistically different groups (ANOVA and canonical variate analysis) indicated the existence of different ecotypes. The phenotypic variation within each group of the accessions suggested that each group has its own phenotypic plasticity. It is concluded that the successful colonisation by H. lanatus has resulted from diversity within the species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54192,Differences in plasticity between invasive and native plants from a low resource environment,S167154,R54193,hypothesis,L101768,Phenotypic plasticity,"1 Phenotypic plasticity is often cited as an important mechanism of plant invasion. However, few studies have evaluated the plasticity of a diverse set of traits among invasive and native species, particularly in low resource habitats, and none have examined the functional significance of these traits. 2 I explored trait plasticity in response to variation in light and nutrient availability in five phylogenetically related pairs of native and invasive species occurring in a nutrient‐poor habitat. In addition to the magnitude of trait plasticity, I assessed the correlation between 16 leaf‐ and plant‐level traits and plant performance, as measured by total plant biomass. Because plasticity for morphological and physiological traits is thought to be limited in low resource environments (where native species usually display traits associated with resource conservation), I predicted that native and invasive species would display similar, low levels of trait plasticity. 3 Across treatments, invasive and native species within pairs differed with respect to many of the traits measured; however, invasive species as a group did not show consistent patterns in the direction of trait values. Relative to native species, invasive species displayed high plasticity in traits pertaining to biomass partitioning and leaf‐level nitrogen and light use, but only in response to nutrient availability. Invasive and native species showed similar levels of resource‐use efficiency and there was no relationship between species plasticity and resource‐use efficiency across species. 4 Traits associated with carbon fixation were strongly correlated with performance in invasive species while only a single resource conservation trait was strongly correlated with performance in multiple native species. Several highly plastic traits were not strongly correlated with performance which underscores the difficulty in assessing the functional significance of resource conservation traits over short timescales and calls into question the relevance of simple, quantitative assessments of trait plasticity. 5 Synthesis. My data support the idea that invasive species display high trait plasticity. The degree of plasticity observed here for species occurring in low resource systems corresponds with values observed in high resource systems, which contradicts the general paradigm that trait plasticity is constrained in low resource systems. Several traits were positively correlated with plant performance suggesting that trait plasticity will influence plant fitness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54194,Major morphological changes in a Lake Victoria cichlid fish within two decades,S167181,R54195,hypothesis,L101791,Phenotypic plasticity,"During the upsurge of the introduced predatory Nile perch in Lake Victoria in the 1980s, the zooplanktivorous Haplochromis (Yssichromis) pyrrhocephalus nearly vanished. The species recovered coincident with the intense fishing of Nile perch in the 1990s, when water clarity and dissolved oxygen levels had decreased dramatically due to increased eutrophication. In response to the hypoxic conditions, total gill surface in resurgent H. pyrrhocephalus increased by 64%. Remarkably, head length, eye length, and head volume decreased in size, whereas cheek depth increased. Reductions in eye size and depth of the rostral part of the musculus sternohyoideus, and reallocation of space between the opercular and suspensorial compartments of the head may have permitted accommodation of larger gills in a smaller head. By contrast, the musculus levator posterior, located dorsal to the gills, increased in depth. This probably reflects an adaptive response to the larger and tougher prey types in the diet of resurgent H. pyrrhocephalus. These striking morphological changes over a time span of only two decades could be the combined result of phenotypic plasticity and genetic change and may have fostered recovery of this species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54200,Spreading of the invasive Carpobrotus aff. acinaciformis in Mediterranean ecosystems: The advantage of performing in different light environments,S167252,R54201,hypothesis,L101850,Phenotypic plasticity,"ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002–2003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54210,Contrasting plant physiological adaptation to climate in the native and introduced range of Hypericum perforatum,S167372,R54211,hypothesis,L101950,Phenotypic plasticity,"Abstract How introduced plants, which may be locally adapted to specific climatic conditions in their native range, cope with the new abiotic conditions that they encounter as exotics is not well understood. In particular, it is unclear what role plasticity versus adaptive evolution plays in enabling exotics to persist under new environmental circumstances in the introduced range. We determined the extent to which native and introduced populations of St. John's Wort (Hypericum perforatum) are genetically differentiated with respect to leaf-level morphological and physiological traits that allow plants to tolerate different climatic conditions. In common gardens in Washington and Spain, and in a greenhouse, we examined clinal variation in percent leaf nitrogen and carbon, leaf δ13C values (as an integrative measure of water use efficiency), specific leaf area (SLA), root and shoot biomass, root/shoot ratio, total leaf area, and leaf area ratio (LAR). As well, we determined whether native European H. perforatum experienced directional selection on leaf-level traits in the introduced range and we compared, across gardens, levels of plasticity in these traits. In field gardens in both Washington and Spain, native populations formed latitudinal clines in percent leaf N. In the greenhouse, native populations formed latitudinal clines in root and shoot biomass and total leaf area, and in the Washington garden only, native populations also exhibited latitudinal clines in percent leaf C and leaf δ13C. Traits that failed to show consistent latitudinal clines instead exhibited significant phenotypic plasticity. Introduced St. John's Wort populations also formed significant or marginally significant latitudinal clines in percent leaf N in Washington and Spain, percent leaf C in Washington, and in root biomass and total leaf area in the greenhouse. In the Washington common garden, there was strong directional selection among European populations for higher percent leaf N and leaf δ13C, but no selection on any other measured trait. The presence of convergent, genetically based latitudinal clines between native and introduced H. perforatum, together with previously published molecular data, suggest that native and exotic genotypes have independently adapted to a broad-scale variation in climate that varies with latitude.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54212,Phenotypic plasticity of native vs. invasive purple loosestrife: A two-state multivariate approach,S167393,R54213,hypothesis,L101967,Phenotypic plasticity,"The differences in phenotypic plasticity between invasive (North American) and native (German) provenances of the invasive plant Lythrum salicaria (purple loosestrife) were examined using a multivariate reaction norm approach testing two important attributes of reaction norms described by multivariate vectors of phenotypic change: the magnitude and direction of mean trait differences between environments. Data were collected for six life history traits from native and invasive plants using a split-plot design with experimentally manipulated water and nutrient levels. We found significant differences between native and invasive plants in multivariate phenotypic plasticity for comparisons between low and high water treatments within low nutrient levels, between low and high nutrient levels within high water treatments, and for comparisons that included both a water and nutrient level change. The significant genotype x environment (G x E) effects support the argument that invasiveness of purple loosestrife is closely associated with the interaction of high levels of soil nutrient and flooding water regime. Our results indicate that native and invasive plants take different strategies for growth and reproduction; native plants flowered earlier and allocated more to flower production, while invasive plants exhibited an extended period of vegetative growth before flowering to increase height and allocation to clonal reproduction, which may contribute to increased fitness and invasiveness in subsequent years.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54214,"Phenotypic plasticity, precipitation, and invasiveness in the fire-promoting grass Pennisetum setaceum (poaceae)",S167416,R54215,hypothesis,L101986,Phenotypic plasticity,"Invasiveness may result from genetic variation and adaptation or phenotypic plasticity, and genetic variation in fitness traits may be especially critical. Pennisetum setaceum (fountain grass, Poaceae) is highly invasive in Hawaii (HI), moderately invasive in Arizona (AZ), and less invasive in southern California (CA). In common garden experiments, we examined the relative importance of quantitative trait variation, precipitation, and phenotypic plasticity in invasiveness. In two very different environments, plants showed no differences by state of origin (HI, CA, AZ) in aboveground biomass, seeds/flower, and total seed number. Plants from different states were also similar within watering treatment. Plants with supplemental watering, relative to unwatered plants, had greater biomass, specific leaf area (SLA), and total seed number, but did not differ in seeds/flower. Progeny grown from seeds produced under different watering treatments showed no maternal effects in seed mass, germination, biomass or SLA. High phenotypic plasticity, rather than local adaptation is likely responsible for variation in invasiveness. Global change models indicate that temperature and precipitation patterns over the next several decades will change, although the direction of change is uncertain. Drier summers in southern California may retard further invasion, while wetter summers may favor the spread of fountain grass.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54224,Phenotypic plasticity of an invasive acacia versus two native Mediterranean species,S167538,R54225,hypothesis,L102088,Phenotypic plasticity,"The phenotypic plasticity and the competitive ability of the invasive Acacia longifolia v. the indigenous Mediterranean dune species Halimium halimifolium and Pinus pinea were evaluated. In particular, we explored the hypothesis that phenotypic plasticity in response to biotic and abiotic factors explains the observed differences in competitiveness between invasive and native species. The seedlings’ ability to exploit different resource availabilities was examined in a two factorial experimental design of light and nutrient treatments by analysing 20 physiological and morphological traits. Competitiveness was tested using an additive experimental design in combination with 15N-labelling experiments. Light and nutrient availability had only minor effects on most physiological traits and differences between species were not significant. Plasticity in response to changes in resource availability occurred in morphological and allocation traits, revealing A. longifolia to be a species of intermediate responsiveness. The major competitive advantage of A. longifolia was its constitutively high shoot elongation rate at most resource treatments and its effective nutrient acquisition. Further, A. longifolia was found to be highly tolerant against competition from native species. In contrast to common expectations, the competition experiment indicated that A. longifolia expressed a constant allocation pattern and a phenotypic plasticity similar to that of the native species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54230,Leaf-level phenotypic variability and plasticity of invasive Rhododendron ponticum and non-invasive Ilex aquifolium co-occurring at two contrasting European sites,S167608,R54231,hypothesis,L102146,Phenotypic plasticity,"To understand the role of leaf-level plasticity and variability in species invasiveness, foliar characteristics were studied in relation to seasonal average integrated quantum flux density (Qint) in the understorey evergreen species Rhododendron ponticum and Ilex aquifolium at two sites. A native relict population of R. ponticum was sampled in southern Spain (Mediterranean climate), while an invasive alien population was investigated in Belgium (temperate maritime climate). Ilex aquifolium was native at both sites. Both species exhibited a significant plastic response to Qint in leaf dry mass per unit area, thickness, photosynthetic potentials, and chlorophyll contents at the two sites. However, R. ponticum exhibited a higher photosynthetic nitrogen use efficiency and larger investment of nitrogen in chlorophyll than I. aquifolium. Since leaf nitrogen (N) contents per unit dry mass were lower in R. ponticum, this species formed a larger foliar area with equal photosynthetic potential and light-harvesting efficiency compared with I. aquifolium. The foliage of R. ponticum was mechanically more resistant with larger density in the Belgian site than in the Spanish site. Mean leaf-level phenotypic plasticity was larger in the Belgian population of R. ponticum than in the Spanish population of this species and the two populations of I. aquifolium. We suggest that large fractional investments of foliar N in photosynthetic function coupled with a relatively large mean, leaf-level phenotypic plasticity may provide the primary explanation for the invasive nature and superior performance of R. ponticum at the Belgian site. With alleviation of water limitations from Mediterranean to temperate maritime climates, the invasiveness of R. ponticum may also be enhanced by the increased foliage mechanical resistance observed in the alien populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54232,Leaf ontogenetic dependence of light acclimation in invasive and native subtropical trees of different successional status,S167631,R54233,hypothesis,L102165,Phenotypic plasticity,"In the Bonin Islands of the western Pacific where the light environment is characterized by high fluctuations due to frequent typhoon disturbance, we hypothesized that the invasive success of Bischofia javanica Blume (invasive tree, mid-successional) may be attributable to a high acclimation capacity under fluctuating light availability. The physiological and morphological responses of B. javanica to both simulated canopy opening and closure were compared against three native species of different successional status: Trema orientalis Blume (pioneer), Schima mertensiana (Sieb. et Zucc.) Koidz (mid-successional) and Elaeocarpus photiniaefolius Hook.et Arn (late-successional). The results revealed significant species-specific differences in the timing of physiological maturity and phenotypic plasticity in leaves developed under constant high and low light levels. For example, the photosynthetic capacity of T. orientalis reached a maximum in leaves that had just fully expanded when grown under constant high light (50% of full sun) whereas that of E. photiniaefolius leaves continued to increase until 50 d after full expansion. For leaves that had just reached full expansion, T. orientalis, having high photosynthetic plasticity between high and low light, exhibited low acclimation capacity under the changing light (from high to low or low to high light). In comparison with native species, B. javanica showed a higher degree of physiological and morphological acclimation following transfer to a new light condition in leaves of all age classes (i.e. before and after reaching full expansion). The high acclimation ability of B. javanica in response to changes in light availability may be a part of its pre-adaptations for invasiveness in the fluctuating environment of the Bonin Islands.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54234,Can life-history traits predict the fate of introduced species? A case study on two cyprinid fish in southern France,S167654,R54235,hypothesis,L102184,Phenotypic plasticity,"1. The ecological and economic costs of introduced species can be high. Ecologists try to predict the probability of success and potential risk of the establishment of recently introduced species, given their biological characteristics. 2. In 1990 gudgeon, Gobio gobio, were released in a drainage canal of the Rhone delta of southern France. The Asian topmouth gudgeon, Pseudorasbora parva, was found for the first time in the same canal in 1993. Those introductions offered a unique opportunity to compare in situ the fate of two closely related fish in the same habitat. 3. Our major aims were to assess whether G. gobio was able to establish in what seemed an unlikely environment, to compare populations trends and life-history traits of both species and to assess whether we could explain or could have predicted our results, by considering their life-history strategies. 4. Data show that both species have established in the canal and have spread. Catches of P. parva have increased strongly and are now higher than those of G. gobio. 5. The two cyprinids have the same breeding season and comparable traits (such as short generation time, small body, high reproductive effort), so both could be classified as opportunists. The observed difference in their success (in terms of population growth and colonization rate) could be explained by the wider ecological and physiological tolerance of P. parva. 6. In conclusion, our field study seems to suggest that invasive vigour also results from the ability to tolerate environmental changes through phenotypic plasticity, rather than from particular life-history features pre-adapted to invasion. It thus remains difficult to define a good invader simply on the basis of its life-history features.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54236,Induced defenses in response to an invading crab predator: An explanation of historical and geographic phenotypic change,S167675,R54237,hypothesis,L102201,Phenotypic plasticity,"The expression of defensive morphologies in prey often is correlated with predator abundance or diversity over a range of temporal and spatial scales. These patterns are assumed to reflect natural selection via differential predation on genetically determined, fixed phenotypes. Phenotypic variation, however, also can reflect within-generation developmental responses to environmental cues (phenotypic plasticity). For example, water-borne effluents from predators can induce the production of defensive morphologies in many prey taxa. This phenomenon, however, has been examined only on narrow scales. Here, we demonstrate adaptive phenotypic plasticity in prey from geographically separated populations that were reared in the presence of an introduced predator. Marine snails exposed to predatory crab effluent in the field increased shell thickness rapidly compared with controls. Induced changes were comparable to (i) historical transitions in thickness previously attributed to selection by the invading predator and (ii) present-day clinal variation predicted from water temperature differences. Thus, predator-induced phenotypic plasticity may explain broad-scale geographic and temporal phenotypic variation. If inducible defenses are heritable, then selection on the reaction norm may influence coevolution between predator and prey. Trade-offs may explain why inducible rather than constitutive defenses have evolved in several gastropod species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54062,Shell morphology and relative growth variability of the invasive pearl oyster Pinctada radiata in coastal Tunisia,S165637,R54063,Species name,L100511,Pinctada radiata,"The variability of shell morphology and relative growth of the invasive pearl oyster Pinctada radiata was studied within and among ten populations from coastal Tunisia using discriminant tests. Therefore, 12 morphological characters were examined and 34 metric and weight ratios were defined. In addition to the classic morphological characters, populations were compared by the thickness of the nacreous layer. Results of Duncan's multiple comparison test showed that the most discriminative ratios were the width of nacreous layer of right valve to the inflation of shell, the hinge line length to the maximum width of shell and the nacre thickness to the maximum width of shell. The analysis of variance revealed an important inter-population morphological variability. Both multidimensional scaling analysis and the squared Mahalanobis distances (D2) of metric ratios divided Tunisian P. radiata populations into four biogeographical groupings: the north coast (La Marsa); harbours (Hammamet, Monastir and Zarzis); the Gulf of Gabès (Sfax, Kerkennah Island, Maharès, Skhira and Djerba) and the intertidal area (Ajim). However, the Kerkennah Island population was discriminated by the squared Mahalanobis distances (D2) of weight ratios in an isolated group suggesting particular trophic conditions in this area. The allometric study revealed high linear correlation between shell morphological characters and differences in allometric growth among P. radiata populations. Unlike the morphological discrimination, allometric differentiation shows no clear geographical distinction. This study revealed that the pearl oyster P. radiata exhibited considerable phenotypic plasticity related to differences of environmental and/or ecological conditions along Tunisian coasts and highlighted the discriminative character of the nacreous layer thickness parameter.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54060,"Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations",S165612,R54061,Species name,L100490,Plantago lanceolata,"Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252–259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54046,Phenotypic Plasticity and Population Differentiation in an Ongoing Species Invasion,S165450,R54047,Species name,L100356,Polygonum cespitosum,"The ability to succeed in diverse conditions is a key factor allowing introduced species to successfully invade and spread across new areas. Two non-exclusive factors have been suggested to promote this ability: adaptive phenotypic plasticity of individuals, and the evolution of locally adapted populations in the new range. We investigated these individual and population-level factors in Polygonum cespitosum, an Asian annual that has recently become invasive in northeastern North America. We characterized individual fitness, life-history, and functional plasticity in response to two contrasting glasshouse habitat treatments (full sun/dry soil and understory shade/moist soil) in 165 genotypes sampled from nine geographically separate populations representing the range of light and soil moisture conditions the species inhabits in this region. Polygonum cespitosum genotypes from these introduced-range populations expressed broadly similar plasticity patterns. In response to full sun, dry conditions, genotypes from all populations increased photosynthetic rate, water use efficiency, and allocation to root tissues, dramatically increasing reproductive fitness compared to phenotypes expressed in simulated understory shade. Although there were subtle among-population differences in mean trait values as well as in the slope of plastic responses, these population differences did not reflect local adaptation to environmental conditions measured at the population sites of origin. Instead, certain populations expressed higher fitness in both glasshouse habitat treatments. We also compared the introduced-range populations to a single population from the native Asian range, and found that the native population had delayed phenology, limited functional plasticity, and lower fitness in both experimental environments compared with the introduced-range populations. Our results indicate that the future spread of P. cespitosum in its introduced range will likely be fueled by populations consisting of individuals able to express high fitness across diverse light and moisture conditions, rather than by the evolution of locally specialized populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54222,Adaptation vs. phenotypic plasticity in the success of a clonal invader,S167513,R54223,Species name,L102067,Potamopyrgus antipodarum,"The relative importance of plasticity vs. adaptation for the spread of invasive species has rarely been studied. We examined this question in a clonal population of invasive freshwater snails (Potamopyrgus antipodarum) from the western United States by testing whether observed plasticity in life history traits conferred higher fitness across a range of temperatures. We raised isofemale lines from three populations from different climate regimes (high- and low-elevation rivers and an estuary) in a split-brood, common-garden design in three temperatures. We measured life history and growth traits and calculated population growth rate (as a measure of fitness) using an age-structured projection matrix model. We found a strong effect of temperature on all traits, but no evidence for divergence in the average level of traits among populations. Levels of genetic variation and significant reaction norm divergence for life history traits suggested some role for adaptation. Plasticity varied among traits and was lowest for size and reproductive traits compared to age-related traits and fitness. Plasticity in fitness was intermediate, suggesting that invasive populations are not general-purpose genotypes with respect to the range of temperatures studied. Thus, by considering plasticity in fitness and its component traits, we have shown that trait plasticity alone does not yield the same fitness across a relevant set of temperature conditions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54956,"The vulnerability of habitats to plant invasion: disentangling the roles of propagule pressure, time and sampling effort",S175699,R54957,hypothesis,L108523,Propagule pressure,"Aim To quantify the vulnerability of habitats to invasion by alien plants having accounted for the effects of propagule pressure, time and sampling effort. Location New Zealand. Methods We used spatial, temporal and habitat information taken from 9297 herbarium records of 301 alien plant species to examine the vulnerability of 11 terrestrial habitats to plant invasions. A null model that randomized species records across habitats was used to account for variation in sampling effort and to derive a relative measure of invasion based either on all records for a species or only its first record. The relative level of invasion was related to the average distance of each habitat from the nearest conurbation, which was used as a proxy for propagule pressure. The habitat in which a species was first recorded was compared to the habitats encountered for all records of that species to determine whether the initial habitat could predict subsequent habitat occupancy. Results Variation in sampling effort in space and time significantly masked the underlying vulnerability of habitats to plant invasions. Distance from the nearest conurbation had little effect on the relative level of invasion in each habitat, but the number of first records of each species significantly declined with increasing distance. While Urban, Streamside and Coastal habitats were over-represented as sites of initial invasion, there was no evidence of major invasion hotspots from which alien plants might subsequently spread. Rather, the data suggest that certain habitats (especially Roadsides) readily accumulate alien plants from other habitats. Main conclusions Herbarium records combined with a suitable null model provide a powerful tool for assessing the relative vulnerability of habitats to plant invasion. The first records of alien plants tend to be found near conurbations, but this pattern disappears with subsequent spread. Regardless of the habitat where a species was first recorded, ultimately most alien plants spread to Roadside and Sparse habitats. This information suggests that such habitats may be useful targets for weed surveillance and monitoring.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54958,"Quarantine arthropod invasions in Europe: the role of climate, hosts and propagule pressure",S175723,R54959,hypothesis,L108543,Propagule pressure,"To quantify the relative importance of propagule pressure, climate‐matching and host availability for the invasion of agricultural pest arthropods in Europe and to forecast newly emerging pest species and European areas with the highest risk of arthropod invasion under current climate and a future climate scenario (A1F1).",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54961,Propagule pressure and stream characteristics influence introgression: cutthroat and rainbow trout in British Columbia,S175759,R54962,hypothesis,L108573,Propagule pressure,"Hybridization and introgression between introduced and native salmonids threaten the continued persistence of many inland cutthroat trout species. Environmental models have been developed to predict the spread of introgression, but few studies have assessed the role of propagule pressure. We used an extensive set of fish Stocking records and geographic information system (GIS) data to produce a spatially explicit index of potential propagule pressure exerted by introduced rainbow trout in the Upper Kootenay River, British Columbia, Canada. We then used logistic regression and the information-theoretic approach to test the ability of a set of environmental and spatial variables to predict the level of introgression between native westslope cutthroat trout and introduced rainbow trout. Introgression was assessed using between four and seven co-dominant, diagnostic nuclear markers at 45 sites in 31 different streams. The best model for predicting introgression included our GIS propagule pressure index and an environmental variable that accounted for the biogeoclimatic zone of the site (r2=0.62). This model was 1.4 times more likely to explain introgression than the next-best model, which consisted of only the propagule pressure index variable. We created a composite model based on the model-averaged results of the seven top models that included environmental, spatial, and propagule pressure variables. The propagule pressure index had the highest importance weight (0.995) of all variables tested and was negatively related to sites with no introgression. This study used an index of propagule pressure and demonstrated that propagule pressure had the greatest influence on the level of introgression between a native and introduced trout in a human-induced hybrid zone.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54973,How many founders for a biological invasion? Predicting introduction outcomes from propagule pressure,S175888,R54974,hypothesis,L108678,Propagule pressure,"Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment. We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54975,Short- and long-term effects of disturbance and propagule pressure on a biological invasion,S175909,R54976,hypothesis,L108695,Propagule pressure,"1 Invading species typically need to overcome multiple limiting factors simultaneously in order to become established, and understanding how such factors interact to regulate the invasion process remains a major challenge in ecology. 2 We used the invasion of marine algal communities by the seaweed Sargassum muticum as a study system to experimentally investigate the independent and interactive effects of disturbance and propagule pressure in the short term. Based on our experimental results, we parameterized an integrodifference equation model, which we used to examine how disturbances created by different benthic herbivores influence the longer term invasion success of S. muticum. 3 Our experimental results demonstrate that in this system neither disturbance nor propagule input alone was sufficient to maximize invasion success. Rather, the interaction between these processes was critical for understanding how the S. muticum invasion is regulated in the short term. 4 The model showed that both the size and spatial arrangement of herbivore disturbances had a major impact on how disturbance facilitated the invasion, by jointly determining how much space‐limitation was alleviated and how readily disturbed areas could be reached by dispersing propagules. 5 Synthesis. Both the short‐term experiment and the long‐term model show that S. muticum invasion success is co‐regulated by disturbance and propagule pressure. Our results underscore the importance of considering interactive effects when making predictions about invasion success.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54977,Introduction history and species characteristics partly explain naturalization success of North American woody species in Europe,S175930,R54978,hypothesis,L108712,Propagule pressure,"1 The search for general characteristics of invasive species has not been very successful yet. A reason for this could be that current invasion patterns are mainly reflecting the introduction history (i.e. time since introduction and propagule pressure) of the species. Accurate data on the introduction history are, however, rare, particularly for introduced alien species that have not established. As a consequence, few studies that tested for the effects of species characteristics on invasiveness corrected for introduction history. 2 We tested whether the naturalization success of 582 North American woody species in Europe, measured as the proportion of European geographic regions in which each species is established, can be explained by their introduction history. For 278 of these species we had data on characteristics related to growth form, life cycle, growth, fecundity and environmental tolerance. We tested whether naturalization success can be further explained by these characteristics. In addition, we tested whether the effects of species characteristics differ between growth forms. 3 Both planting frequency in European gardens and time since introduction significantly increased naturalization success, but the effect of the latter was relatively weak. After correction for introduction history and taxonomy, six of the 26 species characteristics had significant effects on naturalization success. Leaf retention and precipitation tolerance increased naturalization success. Tree species were only 56% as likely to naturalize as non‐tree species (vines, shrubs and subshrubs), and the effect of planting frequency on naturalization success was much stronger for non‐trees than for trees. On the other hand, the naturalization success of trees, but not for non‐trees, increased with native range size, maximum plant height and seed spread rate. 4 Synthesis. Our results suggest that introduction history, particularly planting frequency, is an important determinant of current naturalization success of North American woody species (particularly of non‐trees) in Europe. Therefore, studies comparing naturalization success among species should correct for introduction history. Species characteristics are also significant determinants of naturalization success, but their effects may differ between growth forms.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54981,Dealing with scarce data to understand how environmental gradients and propagule pressure shape fine-scale alien distribution patterns on coastal dunes,S175983,R54983,hypothesis,L108755,Propagule pressure,"Questions: On sandy coastal habitats, factors related to substrate and to wind action vary along the sea–inland ecotone, forming a marked directional disturbance and stress gradient. Further, input of propagules of alien plant species associated to touristic exploitation and development is intense. This has contributed to establishment and spread of aliens in coastal systems. Records of alien species in databases of such heterogeneous landscapes remain scarce, posing a challenge for statistical modelling. We address this issue and attempt to shed light on the role of environmental stress/disturbance gradients and propagule pressure on invasibility of plant communities in these typical model systems. Location: Sandy coasts of Lazio (Central Italy). Methods: We proposed an innovative methodology to deal with low prevalence of alien occurrence in a data set and high cost of field-based sampling by taking advantage, through predictive modelling, of the strong interrelation between vegetation and abiotic features in coastal dunes. We fitted generalized additive models to analyse (1) overall patterns of alien occurrence and spread and (2) specific patterns of the most common alien species recorded. Conclusion: Even in the presence of strong propagule pressure, variation in local abiotic conditions can explain differences in invasibility within a local environment, and intermediate levels of natural disturbance and stress offer the best conditions for spread of alien species. However, in our model system, propagule pressure is actually the main determinant of alien species occurrence and spread. We demonstrated that extending the information of environmental features measured in a subsample of vegetation plots through predictive modelling allows complex questions in invasion biology to be addressed without requiring disproportionate funding and sampling effort.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54984,Global patterns of introduction effort and establishment success in birds,S176020,R54986,hypothesis,L108786,Propagule pressure,"Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54987,Hotspots of plant invasion predicted by propagule pressure and ecosystem characteristics,S176048,R54988,hypothesis,L108810,Propagule pressure,"Aim Biological invasions pose a major conservation threat and are occurring at an unprecedented rate. Disproportionate levels of invasion across the landscape indicate that propagule pressure and ecosystem characteristics can mediate invasion success. However, most invasion predictions relate to species’ characteristics (invasiveness) and habitat requirements. Given myriad invaders and the inability to generalize from single‐species studies, more general predictions about invasion are required. We present a simple new method for characterizing and predicting landscape susceptibility to invasion that is not species‐specific.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54989,Effects of pre-existing submersed vegetation and propagule pressure on the invasion success of Hydrilla verticillata,S176077,R54991,hypothesis,L108833,Propagule pressure,"Summary 1 With biological invasions causing widespread problems in ecosystems, methods to curb the colonization success of invasive species are needed. The effective management of invasive species will require an integrated approach that restores community structure and ecosystem processes while controlling propagule pressure of non-native species. 2 We tested the hypotheses that restoring native vegetation and minimizing propagule pressure of invasive species slows the establishment of an invader. In field and greenhouse experiments, we evaluated (i) the effects of a native submersed aquatic plant species, Vallisneria americana, on the colonization success of a non-native species, Hydrilla verticillata; and (ii) the effects of H. verticillata propagule density on its colonization success. 3 Results from the greenhouse experiment showed that V. americana decreased H. verticillata colonization through nutrient draw-down in the water column of closed mesocosms, although data from the field experiment, located in a tidal freshwater region of Chesapeake Bay that is open to nutrient fluxes, suggested that V. americana did not negatively impact H. verticillata colonization. However, H. verticillata colonization was greater in a treatment of plastic V. americana look-alikes, suggesting that the canopy of V. americana can physically capture H. verticillata fragments. Thus pre-emption effects may be less clear in the field experiment because of complex interactions between competitive and facilitative effects in combination with continuous nutrient inputs from tides and rivers that do not allow nutrient draw-down to levels experienced in the greenhouse. 4 Greenhouse and field tests differed in the timing, duration and density of propagule inputs. However, irrespective of these differences, propagule pressure of the invader affected colonization success except in situations when the native species could draw-down nutrients in closed greenhouse mesocosms. In that case, no propagules were able to colonize. 5 Synthesis and applications. We have shown that reducing propagule pressure through targeted management should be considered to slow the spread of invasive species. This, in combination with restoration of native species, may be the best defence against non-native species invasion. Thus a combined strategy of targeted control and promotion of native plant growth is likely to be the most sustainable and cost-effective form of invasive species management.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54992,Propagule pressure and disturbance interact to overcome biotic resistance of marine invertebrate,S176103,R54993,hypothesis,L108855,Propagule pressure,"Propagule pressure is fundamental to invasion success, yet our understanding of its role in the marine domain is limited. Few studies have manipulated or controlled for propagule supply in the field, and consequently there is little empirical data to test for non-linearities or interactions with other processes. Supply of non-indigenous propagules is most likely to be elevated in urban estuaries, where vessels congregate and bring exotic species on fouled hulls and in ballast water. These same environments are also typically subject to elevated levels of disturbance from human activities, creating the potential for propagule pressure and disturbance to interact. By applying a controlled dose of free-swimming larvae to replicate assemblages, we were able to quantify a dose-response relationship at much finer spatial and temporal scales than previously achieved in the marine environment. We experimentally crossed controlled levels of propagule pressure and disturbance in the field, and found that both were required for invasion to occur. Only recruits that had settled onto bare space survived beyond three months, precluding invader persistence in undisturbed communities. In disturbed communities initial survival on bare space appeared stochastic, such that a critical density was required before the probability of at least one colony surviving reached a sufficient level. Those that persisted showed 75% survival over the following three months, signifying a threshold past which invaders were resilient to chance mortality. Urban estuaries subject to anthropogenic disturbance are common throughout the world, and similar interactions may be integral to invasion dynamics in these ecosystems.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54994,Propagule pressure and the invasion risks of non-native freshwater fishes: a case study in England,S176125,R54995,hypothesis,L108873,Propagule pressure,"European countries in general, and England in particular, have a long history of introducing non-native fish species, but there exist no detailed studies of the introduction pathways and propagules pressure for any European country. Using the nine regions of England as a preliminary case study, the potential relationship between the occurrence in the wild of non-native freshwater fishes (from a recent audit of non-native species) and the intensity (i.e. propagule pressure) and diversity of fish imports was investigated. The main pathways of introduction were via imports of fishes for ornamental use (e.g. aquaria and garden ponds) and sport fishing, with no reported or suspected cases of ballast water or hull fouling introductions. The recorded occurrence of non-native fishes in the wild was found to be related to the time (number of years) since the decade of introduction. A shift in the establishment rate, however, was observed in the 1970s after which the ratio of established-to-introduced species declined. The number of established non-native fish species observed in the wild was found to increase significantly (P < 0·05) with increasing import intensity (log10x + 1 of the numbers of fish imported for the years 2000–2004) and with increasing consignment diversity (log10x + 1 of the numbers of consignment types imported for the years 2000–2004). The implications for policy and management are discussed.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54996,"The demography of introduction pathways, propagule pressure and occurrences of non-native freshwater fish in England",S176147,R54997,hypothesis,L108891,Propagule pressure,"1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10×10 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. © Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55002,Factors explaining alien plant invasion success in a tropical ecosystem differ at each stage of invasion,S176210,R55003,hypothesis,L108942,Propagule pressure,"1 Understanding why some alien plant species become invasive when others fail is a fundamental goal in invasion ecology. We used detailed historical planting records of alien plant species introduced to Amani Botanical Garden, Tanzania and contemporary surveys of their invasion status to assess the relative ability of phylogeny, propagule pressure, residence time, plant traits and other factors to explain the success of alien plant species at different stages of the invasion process. 2 Species with native ranges centred in the tropics and with larger seeds were more likely to regenerate, whereas naturalization success was explained by longer residence time, faster growth rate, fewer seeds per fruit, smaller seed mass and shade tolerance. 3 Naturalized species spreading greater distances from original plantings tended to have more seeds per fruit, whereas species dispersed by canopy‐feeding animals and with native ranges centred on the tropics tended to have spread more widely in the botanical garden. Species dispersed by canopy‐feeding animals and with greater seed mass were more likely to be established in closed forest. 4 Phylogeny alone made a relatively minor contribution to the explanatory power of statistical models, but a greater proportion of variation in spread within the botanical garden and in forest establishment was explained by phylogeny alone than for other models. Phylogeny jointly with variables also explained a greater proportion of variation in forest establishment than in other models. Phylogenetic correction weakened the importance of dispersal syndrome in explaining compartmental spread, seed mass in the forest establishment model, and all factors except for growth rate and residence time in the naturalization model. 5 Synthesis. This study demonstrates that it matters considerably how invasive species are defined when trying to understand the relative ability of multiple variables to explain invasion success. By disentangling different invasion stages and using relatively objective criteria to assess species status, this study highlights that relatively simple models can help to explain why some alien plants are able to naturalize, spread and even establish in closed tropical forests.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55006,Propagule pressure and persistence in experimental populations,S176261,R55008,hypothesis,L108983,Propagule pressure,"Average inoculum size and number of introductions are known to have positive effects on population persistence. However, whether these factors affect persistence independently or interact is unknown. We conducted a two-factor experiment in which 112 populations of parthenogenetic Daphnia magna were maintained for 41 days to study effects of inoculum size and introduction frequency on: (i) population growth, (ii) population persistence and (iii) time-to-extinction. We found that the interaction of inoculum size and introduction frequency—the immigration rate—affected all three dependent variables, while population growth was additionally affected by introduction frequency. We conclude that for this system the most important aspect of propagule pressure is immigration rate, with relatively minor additional effects of introduction frequency and negligible effects of inoculum size.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55015,Insect herbivory and propagule pressure influence Cirsium vulgare invasiveness across the landscape,S176353,R55016,hypothesis,L109059,Propagule pressure,"A current challenge in ecology is to better understand the magnitude, variation, and interaction in the factors that limit the invasiveness of exotic species. We conducted a factorial experiment involving herbivore manipulation (insecticide-in-water vs. water-only control) and seven densities of introduced nonnative Cirsium vulgare (bull thistle) seed. The experiment was repeated with two seed cohorts at eight grassland sites uninvaded by C. vulgare in the central Great Plains, USA. Herbivory by native insects significantly reduced thistle seedling density, causing the largest reductions in density at the highest propagule inputs. The magnitude of this herbivore effect varied widely among sites and between cohort years. The combination of herbivory and lower propagule pressure increased the rate at which new C. vulgare populations failed to establish during the initial stages of invasion. This experiment demonstrates that the interaction between biotic resistance by native insects, propagule pressure, and spatiotemporal variation in their effects were crucial to the initial invasion by this Eurasian plant in the western tallgrass prairie.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55019,"The role of propagule pressure, genetic diversity and microsite availability for Senecio vernalis invasion",S176397,R55020,hypothesis,L109095,Propagule pressure,"Genetic diversity is supposed to support the colonization success of expanding species, in particular in situations where microsite availability is constrained. Addressing the role of genetic diversity in plant invasion experimentally requires its manipulation independent of propagule pressure. To assess the relative importance of these components for the invasion of Senecio vernalis, we created propagule mixtures of four levels of genotype diversity by combining seeds across remote populations, across proximate populations, within single populations and within seed families. In a first container experiment with constant Festuca rupicola density as matrix, genotype diversity was crossed with three levels of seed density. In a second experiment, we tested for effects of establishment limitation and genotype diversity by manipulating Festuca densities. Increasing genetic diversity had no effects on abundance and biomass of S. vernalis but positively affected the proportion of large individuals to small individuals. Mixtures composed from proximate populations had a significantly higher proportion of large individuals than mixtures composed from within seed families only. High propagule pressure increased emergence and establishment of S. vernalis but had no effect on individual growth performance. Establishment was favoured in containers with Festuca, but performance of surviving seedlings was higher in open soil treatments. For S. vernalis invasion, we found a shift in driving factors from density dependence to effects of genetic diversity across life stages. While initial abundance was mostly linked to the amount of seed input, genetic diversity, in contrast, affected later stages of colonization probably via sampling effects and seemed to contribute to filtering the genotypes that finally grew up. In consequence, when disentangling the mechanistic relationships of genetic diversity, seed density and microsite limitation in colonization of invasive plants, a clear differentiation between initial emergence and subsequent survival to juvenile and adult stages is required.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55021,"Assessing the Relative Importance of Disturbance, Herbivory, Diversity, and Propagule Pressure in Exotic Plant Invasion",S176418,R55022,hypothesis,L109112,Propagule pressure,"The current rate of invasive species introductions is unprecedented, and the dramatic impacts of exotic invasive plants on community and ecosystem properties have been well documented. Despite the pressing management implications, the mechanisms that control exotic plant invasion remain poorly understood. Several factors, such as disturbance, propagule pressure, species diversity, and herbivory, are widely believed to play a critical role in exotic plant invasions. However, few studies have examined the relative importance of these factors, and little is known about how propagule pressure interacts with various mechanisms of ecological resistance to determine invasion success. We quantified the relative importance of canopy disturbance, propagule pressure, species diversity, and herbivory in determining exotic plant invasion in 10 eastern hemlock forests in Pennsylvania and New Jersey (USA). Use of a maximum-likelihood estimation framework and information theoretics allowed us to quantify the strength of evidence for alternative models of the influence of these factors on changes in exotic plant abundance. In addition, we developed models to determine the importance of interactions between ecosystem properties and propagule pressure. These analyses were conducted for three abundant, aggressive exotic species that represent a range of life histories: Alliaria petiolata, Berberis thunbergii, and Microstegium vimineum. Of the four hypothesized determinants of exotic plant invasion considered in this study, canopy disturbance and propagule pressure appear to be the most important predictors of A. petiolata, B. thunbergii, and M. vimineum invasion. Herbivory was also found to be important in contributing to the invasion of some species. In addition, we found compelling evidence of an important interaction between propagule pressure and canopy disturbance. This is the first study to demonstrate the dominant role of the interaction between canopy disturbance and propagule pressure in determining forest invasibility relative to other potential controlling factors. The importance of the disturbance-propagule supply interaction, and its nonlinear functional form, has profound implications for the management of exotic plant species populations. Improving our ability to predict exotic plant invasions will require enhanced understanding of the interaction between propagule pressure and ecological resistance mechanisms.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55023,The importance of quantifying propagule pressure to understand invasion: an examination of riparian forest invasibility,S176439,R55024,hypothesis,L109129,Propagule pressure,"The widely held belief that riparian communities are highly invasible to exotic plants is based primarily on comparisons of the extent of invasion in riparian and upland communities. However, because differences in the extent of invasion may simply result from variation in propagule supply among recipient environments, true comparisons of invasibility require that both invasion success and propagule pressure are quantified. In this study, we quantified propagule pressure in order to compare the invasibility of riparian and upland forests and assess the accuracy of using a community's level of invasion as a surrogate for its invasibility. We found the extent of invasion to be a poor proxy for invasibility. The higher level of invasion in the studied riparian forests resulted from greater propagule availability rather than higher invasibility. Furthermore, failure to account for propagule pressure may confound our understanding of general invasion theories. Ecological theory suggests that species-rich communities should be less invasible. However, we found significant relationships between species diversity and invasion extent, but no diversity-invasibility relationship was detected for any species. Our results demonstrate that using a community's level of invasion as a surrogate for its invasibility can confound our understanding of invasibility and its determinants.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55036,Genetic evidence for high propagule pressure and long-distance dispersal in monk parakeet (Myiopsitta monachus) invasive populations,S176595,R55038,hypothesis,L109257,Propagule pressure,"The monk parakeet (Myiopsitta monachus) is a successful invasive species that does not exhibit life history traits typically associated with colonizing species (e.g., high reproductive rate or long‐distance dispersal capacity). To investigate this apparent paradox, we examined individual and population genetic patterns of microsatellite loci at one native and two invasive sites. More specifically, we aimed at evaluating the role of propagule pressure, sexual monogamy and long‐distance dispersal in monk parakeet invasion success. Our results indicate little loss of genetic variation at invasive sites relative to the native site. We also found strong evidence for sexual monogamy from patterns of relatedness within sites, and no definite cases of extra‐pair paternity in either the native site sample or the examined invasive site. Taken together, these patterns directly and indirectly suggest that high propagule pressure has contributed to monk parakeet invasion success. In addition, we found evidence for frequent long‐distance dispersal at an invasive site (∼100 km) that sharply contrasted with previous estimates of smaller dispersal distance made in the native range (∼2 km), suggesting long‐range dispersal also contributes to the species’ spread within the United States. Overall, these results add to a growing body of literature pointing to the important role of propagule pressure in determining, and thus predicting, invasion success, especially for species whose life history traits are not typically associated with invasiveness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55044,The interacting effects of diversity and propagule pressure on early colonization and population size,S176679,R55045,hypothesis,L109327,Propagule pressure,"We are now beginning to understand the role of intraspecific diversity on fundamental ecological phenomena. There exists a paucity of knowledge, however, regarding how intraspecific, or genetic diversity, may covary with other important factors such as propagule pressure. A combination of theoretical modelling and experimentation was used to explore the way propagule pressure and genetic richness may interact. We compare colonization rates of the Australian bivalve Saccostrea glomerata (Gould 1885). We cross propagule size and genetic richness in a factorial design in order to examine the generalities of our theoretical model. Modelling showed that diversity and propagule pressure should generally interact synergistically when positive feedbacks occur (e.g. aggregation). The strength of genotype effects depended on propagule size, or the numerical abundance of arriving individuals. When propagule size was very small (<4 individuals), however, greater genetic richness unexpectedly reduced colonization. The probability of S. glomerata colonization was 76% in genetically rich, larger propagules, almost 39 percentage points higher than in genetically poor propagules of similar size. This pattern was not observed in less dense, smaller propagules. We predict that density-dependent interactions between larvae in the water column may explain this pattern.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55050,ECOLOGICAL RESISTANCE TO BIOLOGICAL INVASION OVERWHELMED BY PROPAGULE PRESSURE,S176753,R55052,hypothesis,L109387,Propagule pressure,"Models and observational studies have sought patterns of predictability for invasion of natural areas by nonindigenous species, but with limited success. In a field experiment using forest understory plants, we jointly manipulated three hypothesized determinants of biological invasion outcome: resident diversity, physical disturbance and abiotic conditions, and propagule pressure. The foremost constraints on net habitat invasibility were the number of propagules that arrived at a site and naturally varying resident plant density. The physical environment (flooding regime) and the number of established resident species had negligible impact on habitat invasibility as compared to propagule pressure, despite manipulations that forced a significant reduction in resident richness, and a gradient in flooding from no flooding to annual flooding. This is the first experimental study to demonstrate the primacy of propagule pressure as a determinant of habitat invasibility in comparison with other candidate controlling factors.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55053,Propagule pressure of an invasive crab overwhelms native biotic resistance,S176779,R55054,hypothesis,L109409,Propagule pressure,"Over the last decade, the porcelain crab Petrolisthes armatus invaded oyster reefs of Georgia, USA, at mean densities of up to 11 000 adults m^-2. Interactions affecting the invasion are undocumented. We tested the effects of native species richness and composition on invasibility by constructing isolated reef communities with 0, 2, or 4 of the most common native species, by seeding adult P. armatus into a subset of the 4 native species communities and by constructing communities with and without native, predatory mud crabs. At 4 wk, recruitment of P. armatus juveniles to oyster shells lacking native species was 2.75 times greater than to the 2 native species treatment and 3.75 times greater than to the 4 native species treatment. The biotic resistance produced by 2 species of native filter feeders may have occurred due to competition with, or predation on, the settling juveniles of the filter feeding invasive crab. Adding adult porcelain crabs to communities with 4 native species enhanced recruitment by a significant 3-fold, and countered the effects of native biotic resistance. Differences in recruitment at Week 4 were lost by Weeks 8 and 12, when densities of recent recruits reached ~17 000 to 34 000 crabs m^-2 across all treatments. Thus, native species richness slows initial invasion, but early colonists stimulate settlement by later ones and produce tremendous propagule pressure that overwhelms the effects of biotic resistance.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55057,"Role of propagule pressure in colonization success: disentangling the relative importance of demographic, genetic and habitat effects",S176826,R55058,hypothesis,L109448,Propagule pressure,"High propagule pressure is arguably the only consistent predictor of colonization success. More individuals enhance colonization success because they aid in overcoming demographic consequences of small population size (e.g. stochasticity and Allee effects). The number of founders can also have direct genetic effects: with fewer individuals, more inbreeding and thus inbreeding depression will occur, whereas more individuals typically harbour greater genetic variation. Thus, the demographic and genetic components of propagule pressure are interrelated, making it difficult to understand which mechanisms are most important in determining colonization success. We experimentally disentangled the demographic and genetic components of propagule pressure by manipulating the number of founders (fewer or more), and genetic background (inbred or outbred) of individuals released in a series of three complementary experiments. We used Bemisia whiteflies and released them onto either their natal host (benign) or a novel host (challenging). Our experiments revealed that having more founding individuals and those individuals being outbred both increased the number of adults produced, but that only genetic background consistently shaped net reproductive rate of experimental populations. Environment was also important and interacted with propagule size to determine the number of adults produced. Quality of the environment interacted also with genetic background to determine establishment success, with a more pronounced effect of inbreeding depression in harsh environments. This interaction did not hold for the net reproductive rate. These data show that the positive effect of propagule pressure on founding success can be driven as much by underlying genetic processes as by demographics. Genetic effects can be immediate and have sizable effects on fitness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55059,Reproductive potential and seedling establishment of the invasive alien tree Schinus molle (Anacardiaceae) in South Africa,S176848,R55060,hypothesis,L109466,Propagule pressure,"Schinus molle (Peruvian pepper tree) was introduced to South Africa more than 150 years ago and was widely planted, mainly along roads. Only in the last two decades has the species become naturalized and invasive in some parts of its new range, notably in semi-arid savannas. Research is being undertaken to predict its potential for further invasion in South Africa. We studied production, dispersal and predation of seeds, seed banks, and seedling establishment in relation to land uses at three sites, namely ungrazed savanna once used as a military training ground; a savanna grazed by native game; and an ungrazed mine dump. We found that seed production and seed rain density of S. molle varied greatly between study sites, but was high at all sites (384 864–1 233 690 seeds per tree per year; 3877–9477 seeds per square metre per year). We found seeds dispersed to distances of up to 320 m from female trees, and most seeds were deposited within 50 m of putative source trees. Annual seed rain density below canopies of Acacia tortilis, the dominant native tree at all sites, was significantly lower in grazed savanna. The quality of seed rain was much reduced by endophagous predators. Seed survival in the soil was low, with no survival recorded beyond 1 year. Propagule pressure appears to drive the rate of recruitment: densities of seedlings and saplings were higher in ungrazed savanna and the ungrazed mine dump than in grazed savanna, as reflected by large numbers of young individuals, but adult : seedling ratios did not differ between savanna sites. Frequent and abundant seed production, together with effective dispersal of viable S. molle seed by birds to suitable establishment sites below trees of other species to overcome predation effects, facilitates invasion. Disturbance enhances invasion, probably by reducing competition from native plants.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55061,Determinants of vertebrate invasion success in Europe and North America,S176877,R55063,hypothesis,L109489,Propagule pressure,"Species that are frequently introduced to an exotic range have a high potential of becoming invasive. Besides propagule pressure, however, no other generally strong determinant of invasion success is known. Although evidence has accumulated that human affiliates (domesticates, pets, human commensals) also have high invasion success, existing studies do not distinguish whether this success can be completely explained by or is partly independent of propagule pressure. Here, we analyze both factors independently, propagule pressure and human affiliation. We also consider a third factor directly related to humans, hunting, and 17 traits on each species' population size and extent, diet, body size, and life history. Our dataset includes all 2362 freshwater fish, mammals, and birds native to Europe or North America. In contrast to most previous studies, we look at the complete invasion process consisting of (1) introduction, (2) establishment, and (3) spread. In this way, we not only consider which of the introduced species became invasive but also which species were introduced. Of the 20 factors tested, propagule pressure and human affiliation were the two strongest determinants of invasion success across all taxa and steps. This was true for multivariate analyses that account for intercorrelations among variables as well as univariate analyses, suggesting that human affiliation influenced invasion success independently of propagule pressure. Some factors affected the different steps of the invasion process antagonistically. For example, game species were much more likely to be introduced to an exotic continent than nonhunted species but tended to be less likely to establish themselves and spread. Such antagonistic effects show the importance of considering the complete invasion process.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55068,"The role of propagule pressure in the invasion success of bluegill sunfish, Lepomis macrochirus, in Japan",S176954,R55069,hypothesis,L109554,Propagule pressure,"The bluegill sunfish, Lepomis macrochirus, is a widespread exotic species in Japan that is considered to have originated from 15 fish introduced from Guttenberg, Iowa, in 1960. Here, the genetic and phenotypic traits of Japanese populations were examined, together with 11 native populations of the USA using 10 microsatellite markers and six meristic traits. Phylogenetic analysis reconfirmed a single origin of Japanese populations, among which populations established in the 1960s were genetically close to Guttenberg population, keeping high genetic diversity comparable to the ancestral population. In contrast, genetic diversity of later‐established populations significantly declined with genetic divergence from the ancestral population. Among the 1960s established populations, that from Lake Biwa showed a significant isolation‐by‐distance pattern with surrounding populations in which genetic bottlenecks increased with geographical distance from Lake Biwa. Although phenotypic divergence among populations was recognized in both neutral and adaptive traits, PST–FST comparisons showed that it is independent of neutral genetic divergence. Divergent selection was suggested in some populations from reservoirs with unstable habitats, while stabilizing selection was dominant. Accordingly, many Japanese populations of L. macrochirus appear to have derived from Lake Biwa population, expanding their distribution with population bottlenecks. Despite low propagule pressure, the invasion success of L. macrochirus is probably because of its drastic population growth in Lake Biwa shortly after its introduction, together with artificial transplantations. It not only enabled the avoidance of a loss in genetic diversity but also formed a major gene pool that supported local adaptation with high phenotypic plasticity.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55072,Planting history and propagule pressure as predictors of invasion by woody species in a temperate region,S176998,R55073,hypothesis,L109590,Propagule pressure,"Abstract: We studied 28 alien tree species currently planted for forestry purposes in the Czech Republic to determine the probability of their escape from cultivation and naturalization. Indicators of propagule pressure (number of administrative units in which a species is planted and total planting area) and time of introduction into cultivation were used as explanatory variables in multiple regression models. Fourteen species escaped from cultivation, and 39% of the variance was explained by the number of planting units and the time of introduction, the latter being more important. Species introduced early had a higher probability of escape than those introduced later, with more than 95% probability of escape for those introduced before 1801 and <5% for those introduced after 1892. Probability of naturalization was more difficult to predict, and eight species were misclassified. A model omitting two species with the largest influence on the model yielded similar predictors of naturalization as did the probability of escape. Both phases of invasion therefore appear to be driven by planting and introduction history in a similar way. Our results demonstrate the importance of forestry for recruitment of invasive trees. Six alien forestry trees, classified as invasive in the Czech Republic, are currently reported in nature reserves. In addition, forestry authorities want to increase the diversity of alien species and planting area in the country.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55076,The relative importance of latitude matching and propagule pressure in the colonization success of an invasive forb,S177039,R55077,hypothesis,L109623,Propagule pressure,"Factors that influence the early stages of invasion can be critical to invasion success, yet are seldom studied. In particular, broad pre-adaptation to recipient climate may importantly influence early colonization success, yet few studies have explicitly examined this. I performed an experiment to determine how similarity between seed source and transplant site latitude, as a general indicator of pre-adaptation to climate, interacts with propagule pressure (100, 200 and 400 seeds/pot) to influence early colonization success of the widespread North American weed, St. John's wort Hypericum perforatum. Seeds originating from seven native European source populations were sown in pots buried in the ground in a field in western Montana. Seed source populations were either similar or divergent in latitude to the recipient transplant site. Across seed density treatments, the match between seed source and recipient latitude did not affect the proportion of pots colonized or the number of individual colonists per pot. In contrast, propagule pressure had a significant and positive effect on colonization. These results suggest that propagules from many climatically divergent source populations can be viable invaders.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55081,Invasive species profiling? Exploring the characteristics of non-native fishes across invasion stages in California,S177098,R55082,hypothesis,L109672,Propagule pressure,"Summary 1. The global spread of non-native species is a major concern for ecologists, particularly in regards to aquatic systems. Predicting the characteristics of successful invaders has been a goal of invasion biology for decades. Quantitative analysis of species characteristics may allow invasive species profiling and assist the development of risk assessment strategies. 2. In the current analysis we developed a data base on fish invasions in catchments throughout California that distinguishes among the establishment, spread and integration stages of the invasion process, and separates social and biological factors related to invasion success. 3. Using Akaike's information criteria (AIC), logistic and multiple regression models, we show suites of biological variables, which are important in predicting establishment (parental care and physiological tolerance), spread (life span, distance from nearest native source and trophic status) and abundance (maximum size, physiological tolerance and distance from nearest native source). Two variables indicating human interest in a species (propagule pressure and prior invasion success) are predictors of successful establishment and prior invasion success is a predictor of spread and integration. 4. Despite the idiosyncratic nature of the invasion process, our results suggest some assistance in the search for characteristics of fish species that successfully transition between invasion stages.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55083,ALIEN FISHES IN CALIFORNIA WATERSHEDS: CHARACTERISTICS OF SUCCESSFUL AND FAILED INVADERS,S177120,R55084,hypothesis,L109690,Propagule pressure,"The literature on alien animal invaders focuses largely on successful invasions over broad geographic scales and rarely examines failed invasions. As a result, it is difficult to make predictions about which species are likely to become successful invaders or which environments are likely to be most susceptible to invasion. To address these issues, we developed a data set on fish invasions in watersheds throughout California (USA) that includes failed introductions. Our data set includes information from three stages of the invasion process (establishment, spread, and integration). We define seven categorical predictor variables (trophic status, size of native range, parental care, maximum adult size, physiological tolerance, distance from nearest native source, and propagule pressure) and one continuous predictor variable (prior invasion success) for all introduced species. Using an information-theoretic approach we evaluate 45 separate hypotheses derived from the invasion literature over these three sta...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55088,"Effects of soil fungi, disturbance and propagule pressure on exotic plant recruitment and establishment at home and abroad",S177184,R55089,hypothesis,L109744,Propagule pressure,"Biogeographic experiments that test how multiple interacting factors influence exotic plant abundance in their home and recipient communities are remarkably rare. We examined the effects of soil fungi, disturbance and propagule pressure on seed germination, seedling recruitment and adult plant establishment of the invasive Centaurea stoebe in its native European and non‐native North American ranges. Centaurea stoebe can establish virtual monocultures in parts of its non‐native range, but occurs at far lower abundances where it is native. We conducted parallel experiments at four European and four Montana (USA) grassland sites with all factorial combinations of ± suppression of soil fungi, ±disturbance and low versus high knapweed propagule pressure [100 or 300 knapweed seeds per 0.3 m × 0.3 m plot (1000 or 3000 per m2)]. We also measured germination in buried bags containing locally collected knapweed seeds that were either treated or not with fungicide. Disturbance and propagule pressure increased knapweed recruitment and establishment, but did so similarly in both ranges. Treating plots with fungicides had no effect on recruitment or establishment in either range. However, we found: (i) greater seedling recruitment and plant establishment in undisturbed plots in Montana compared to undisturbed plots in Europe and (ii) substantially greater germination of seeds in bags buried in Montana compared to Europe. Also, across all treatments, total plant establishment was greater in Montana than in Europe. Synthesis. Our results highlight the importance of simultaneously examining processes that could influence invasion in both ranges. They indicate that under ‘background’ undisturbed conditions, knapweed recruits and establishes at greater abundance in Montana than in Europe. However, our results do not support the importance of soil fungi or local disturbances as mechanisms for knapweed's differential success in North America versus Europe.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55092,Inferring Process from Pattern in Plant Invasions: A Semimechanistic Model Incorporating Propagule Pressure and Environmental Factors,S177226,R55093,hypothesis,L109778,Propagule pressure,"Propagule pressure is intuitively a key factor in biological invasions: increased availability of propagules increases the chances of establishment, persistence, naturalization, and invasion. The role of propagule pressure relative to disturbance and various environmental factors is, however, difficult to quantify. We explored the relative importance of factors driving invasions using detailed data on the distribution and percentage cover of alien tree species on South Africa’s Agulhas Plain (2,160 km2). Classification trees based on geology, climate, land use, and topography adequately explained distribution but not abundance (canopy cover) of three widespread invasive species (Acacia cyclops, Acacia saligna, and Pinus pinaster). A semimechanistic model was then developed to quantify the roles of propagule pressure and environmental heterogeneity in structuring invasion patterns. The intensity of propagule pressure (approximated by the distance from putative invasion foci) was a much better predictor of canopy cover than any environmental factor that was considered. The influence of environmental factors was then assessed on the residuals of the first model to determine how propagule pressure interacts with environmental factors. The mediating effect of environmental factors was species specific. Models combining propagule pressure and environmental factors successfully predicted more than 70% of the variation in canopy cover for each species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55097,Effect of propagule pressure on the establishment and spread of the little fire ant Wasmannia auropunctata in a Gabonese oilfield,S177286,R55098,hypothesis,L109828,Propagule pressure,"We studied the effect of propagule pressure on the establishment and subsequent spread of the invasive little fire ant Wasmannia auropunctata in a Gabonese oilfield in lowland rain forest. Oil well drilling, the major anthropogenic disturbance over the past 21 years in the area, was used as an indirect measure of propagule pressure. An analysis of 82 potential introductions at oil production platforms revealed that the probability of successful establishment significantly increased with the number of drilling events. Specifically, the shape of the dose–response establishment curve could be closely approximated by a Poisson process with a 34% chance of infestation per well drilled. Consistent with our knowledge of largely clonal reproduction by W. auropunctata, the shape of the establishment curve suggested that the ants were not substantially affected by Allee effects, probably greatly contributing to this species’ success as an invader. By contrast, the extent to which W. auropunctata spread beyond the point of initial introduction, and thus the extent of its damage to diversity of other ant species, was independent of propagule pressure. These results suggest that while establishment success depends on propagule pressure, other ecological or genetic factors may limit the extent of further spread. Knowledge of the shape of the dose–response establishment curve should prove useful in modelling the future spread of W. auropunctata and perhaps the spread of other clonal organisms.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55099,Invasive alien plants infiltrate bird-mediated shrub nucleation processes in arid savanna,S177312,R55100,hypothesis,L109850,Propagule pressure,"1 The cultivation and dissemination of alien ornamental plants increases their potential to invade. More specifically, species with bird‐dispersed seeds can potentially infiltrate natural nucleation processes in savannas. 2 To test (i) whether invasion depends on facilitation by host trees, (ii) whether propagule pressure determines invasion probability, and (iii) whether alien host plants are better facilitators of alien fleshy‐fruited species than indigenous species, we mapped the distribution of alien fleshy‐fruited species planted inside a military base, and compared this with the distribution of alien and native fleshy‐fruited species established in the surrounding natural vegetation. 3 Abundance and diversity of fleshy‐fruited plant species was much greater beneath tree canopies than in open grassland and, although some native fleshy‐fruited plants were found both beneath host trees and in the open, alien fleshy‐fruited plants were found only beneath trees. 4 Abundance of fleshy‐fruited alien species in the natural savanna was positively correlated with the number of individuals of those species planted in the grounds of the military base, while the species richness of alien fleshy‐fruited taxa decreased with distance from the military base, supporting the notion that propagule pressure is a fundamental driver of invasions. 5 There were more fleshy‐fruited species beneath native Acacia tortilis than beneath alien Prosopis sp. trees of the equivalent size. Although there were significant differences in native plant assemblages beneath these hosts, the proportion of alien to native fleshy‐fruited species did not differ with host. 6 Synthesis. Birds facilitate invasion of a semi‐arid African savanna by alien fleshy‐fruited plants, and this process does not require disturbance. Instead, propagule pressure and a few simple biological observations define the probability that a plant will invade, with alien species planted in gardens being a major source of propagules. Some invading species have the potential to transform this savanna by overtopping native trees, leading to ecosystem‐level impacts. Likewise, the invasion of the open savanna by alien host trees (such as Prosopis sp.) may change the diversity, abundance and species composition of the fleshy‐fruited understorey. These results illustrate the complex interplay between propagule pressure, facilitation, and a range of other factors in biological invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55107,"Population structure, propagule pressure, and conservation biogeography in the sub-Antarctic: lessons from indigenous and invasive springtails",S177423,R55111,hypothesis,L109932,Propagule pressure,"The patterns in and the processes underlying the distribution of invertebrates among Southern Ocean islands and across vegetation types on these islands are reasonably well understood. However, few studies have examined the extent to which populations are genetically structured. Given that many sub‐Antarctic islands experienced major glaciation and volcanic activity, it might be predicted that substantial population substructure and little genetic isolation‐by‐distance should characterize indigenous species. By contrast, substantially less population structure might be expected for introduced species. Here, we examine these predictions and their consequences for the conservation of diversity in the region. We do so by examining haplotype diversity based on mitochondrial cytochrome c oxidase subunit I sequence data, from two indigenous (Cryptopygus antarcticus travei, Tullbergia bisetosa) and two introduced (Isotomurus cf. palustris, Ceratophysella denticulata) springtail species from Marion Island. We find considerable genetic substructure in the indigenous species that is compatible with the geological and glacialogical history of the island. Moreover, by employing ecological techniques, we show that haplotype diversity is likely much higher than our sequenced samples suggest. No structure is found in the introduced species, with each being represented by a single haplotype only. This indicates that propagule pressure is not significant for these small animals unlike the situation for other, larger invasive species: a few individuals introduced once are likely to have initiated the invasion. These outcomes demonstrate that sampling must be more comprehensive if the population history of indigenous arthropods on these islands is to be comprehended, and that the risks of within‐ and among‐island introductions are substantial. The latter means that, if biogeographical signal is to be retained in the region, great care must be taken to avoid inadvertent movement of indigenous species among and within islands. Thus, quarantine procedures should also focus on among‐island movements.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55114,Propagule pressure hypothesis not supported by an 80-year experiment on woody species invasion,S177483,R55116,hypothesis,L109982,Propagule pressure,"Ecological filters and availability of propagules play key roles structuring natural communities. Propagule pressure has recently been suggested to be a fundamental factor explaining the success or failure of biological introductions. We tested this hypothesis with a remarkable data set on trees introduced to Isla Victoria, Nahuel Huapi National Park, Argentina. More than 130 species of woody plants, many known to be highly invasive elsewhere, were introduced to this island early in the 20th century, as part of an experiment to test their suitability as commercial forestry trees for this region. We obtained detailed data on three estimates of propagule pressure (number of introduced individuals, number of areas where introduced, and number of years during which the species was planted) for 18 exotic woody species. We matched these data with a survey of the species and number of individuals currently invading the island. None of the three estimates of propagule pressure predicted the current pattern of invasion. We suggest that other factors, such as biotic resistance, may be operating to determine the observed pattern of invasion, and that propagule pressure may play a relatively minor role in explaining at least some observed patterns of invasion success and failure.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55117,The comparative importance of species traits and introduction characteristics in tropical plant invasions,S177516,R55119,hypothesis,L110009,Propagule pressure,"Aim We used alien plant species introduced to a botanic garden to investigate the relative importance of species traits (leaf traits, dispersal syndrome) and introduction characteristics (propagule pressure, residence time and distance to forest) in explaining establishment success in surrounding tropical forest. We also used invasion scores from a weed risk assessment protocol as an independent measure of invasion risk and assessed differences in variables between high‐ and low‐risk species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55120,"Planting intensity, residence time, and species traits determine invasion success of alien woody species",S177543,R55121,hypothesis,L110032,Propagule pressure,"We studied the relative importance of residence time, propagule pressure, and species traits in three stages of invasion of alien woody plants cultivated for about 150 years in the Czech Republic, Central Europe. The probability of escape from cultivation, naturalization, and invasion was assessed using classification trees. We compared 109 escaped-not-escaped congeneric pairs, 44 naturalized-not-naturalized, and 17 invasive-not-invasive congeneric pairs. We used the following predictors of the above probabilities: date of introduction to the target region as a measure of residence time; intensity of planting in the target area as a proxy for propagule pressure; the area of origin; and 21 species-specific biological and ecological traits. The misclassification rates of the naturalization and invasion model were low, at 19.3% and 11.8%, respectively, indicating that the variables used included the major determinants of these processes. The probability of escape increased with residence time in the Czech Republic, whereas the probability of naturalization increased with the residence time in Europe. This indicates that some species were already adapted to local conditions when introduced to the Czech Republic. Apart from residence time, the probability of escape depends on planting intensity (propagule pressure), and that of naturalization on the area of origin and fruit size; it is lower for species from Asia and those with small fruits. The probability of invasion is determined by a long residence time and the ability to tolerate low temperatures. These results indicate that a simple suite of factors determines, with a high probability, the invasion success of alien woody plants, and that the relative role of biological traits and other factors is stage dependent. High levels of propagule pressure as a result of planting lead to woody species eventually escaping from cultivation, regardless of biological traits. However, the biological traits play a role in later stages of invasion.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55127,Propagule pressure and climate contribute to the displacement of Linepithema humile by Pachycondyla chinensis,S177619,R55128,hypothesis,L110094,Propagule pressure,"Identifying mechanisms governing the establishment and spread of invasive species is a fundamental challenge in invasion biology. Because species invasions are frequently observed only after the species presents an environmental threat, research identifying the contributing agents to dispersal and subsequent spread are confined to retrograde observations. Here, we use a combination of seasonal surveys and experimental approaches to test the relative importance of behavioral and abiotic factors in determining the local co-occurrence of two invasive ant species, the established Argentine ant (Linepithema humile Mayr) and the newly invasive Asian needle ant (Pachycondyla chinensis Emery). We show that the broader climatic envelope of P. chinensis enables it to establish earlier in the year than L. humile. We also demonstrate that increased P. chinensis propagule pressure during periods of L. humile scarcity contributes to successful P. chinensis early season establishment. Furthermore, we show that, although L. humile is the numerically superior and behaviorally dominant species at baits, P. chinensis is currently displacing L. humile across the invaded landscape. By identifying the features promoting the displacement of one invasive ant by another we can better understand both early determinants in the invasion process and factors limiting colony expansion and survival.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55129,Propagule pressure and resource availability determine plant community invasibility in a temperate forest understorey,S177648,R55131,hypothesis,L110117,Propagule pressure,"Few field experiments have examined the effects of both resource availability and propagule pressure on plant community invasibility. Two non-native forest species, a herb and a shrub (Hesperis matronalis and Rhamnus cathartica, respectively), were sown into 60 1-m 2 sub-plots distributed across three plots. These contained reconstructed native plant communities in a replaced surface soil layer in a North American forest interior. Resource availability and propagule pressure were manipulated as follows: understorey light level (shaded/unshaded), nutrient availability (control/fertilized), and seed pressures of the two non-native species (control/low/high). Hesperis and Rhamnus cover and the above-ground biomass of Hesperis were significantly higher in shaded sub-plots and at greater propagule pressures. Similarly, the above-ground biomass of Rhamnus was significantly increased with propagule pressure, although this was a function of density. In contrast, of species that seeded into plots from the surrounding forest during the growing season, the non-native species had significantly greater cover in unshaded sub-plots. Plants in these unshaded sub-plots were significantly taller than plants in shaded sub-plots, suggesting a greater fitness. Total and non-native species richness varied significantly among plots indicating the importance of fine-scale dispersal patterns. None of the experimental treatments influenced native species. Since the forest seed bank in our study was colonized primarily by non-native ruderal species that dominated understorey vegetation, the management of invasions by non-native species in forest understoreys will have to address factors that influence light levels and dispersal pathways.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55134,"The roles of climate, phylogenetic relatedness, introduction effort, and reproductive traits in the establishment of non-native reptiles and amphibians",S177697,R55135,hypothesis,L110158,Propagule pressure,"Abstract: We developed a method to predict the potential of non‐native reptiles and amphibians (herpetofauna) to establish populations. This method may inform efforts to prevent the introduction of invasive non‐native species. We used boosted regression trees to determine whether nine variables influence establishment success of introduced herpetofauna in California and Florida. We used an independent data set to assess model performance. Propagule pressure was the variable most strongly associated with establishment success. Species with short juvenile periods and species with phylogenetically more distant relatives in regional biotas were more likely to establish than species that start breeding later and those that have close relatives. Average climate match (the similarity of climate between native and non‐native range) and life form were also important. Frogs and lizards were the taxonomic groups most likely to establish, whereas a much lower proportion of snakes and turtles established. We used results from our best model to compile a spreadsheet‐based model for easy use and interpretation. Probability scores obtained from the spreadsheet model were strongly correlated with establishment success as were probabilities predicted for independent data by the boosted regression tree model. However, the error rate for predictions made with independent data was much higher than with cross validation using training data. This difference in predictive power does not preclude use of the model to assess the probability of establishment of herpetofauna because (1) the independent data had no information for two variables (meaning the full predictive capacity of the model could not be realized) and (2) the model structure is consistent with the recent literature on the primary determinants of establishment success for herpetofauna. It may still be difficult to predict the establishment probability of poorly studied taxa, but it is clear that non‐native species (especially lizards and frogs) that mature early and come from environments similar to that of the introduction region have the highest probability of establishment.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55139,"Habitat, dispersal and propagule pressure control exotic plant infilling within an invaded range",S177755,R55140,hypothesis,L110206,Propagule pressure,"Deep in the heart of a longstanding invasion, an exotic grass is still invading. Range infilling potentially has the greatest impact on native communities and ecosystem processes, but receives much less attention than range expansion. ‘Snapshot' studies of invasive plant dispersal, habitat and propagule limitations cannot determine whether a landscape is saturated or whether a species is actively infilling empty patches. We investigate the mechanisms underlying invasive plant infilling by tracking the localized movement and expansion of Microstegium vimineum populations from 2009 to 2011 at sites along a 100-km regional gradient in eastern U.S. deciduous forests. We find that infilling proceeds most rapidly where the invasive plants occur in warm, moist habitats adjacent to roads: under these conditions they produce copious seed, the dispersal distances of which increase exponentially with proximity to roadway. Invasion then appears limited where conditions are generally dry and cool as propagule pressure tapers off. Invasion also is limited in habitats >1 m from road corridors, where dispersal distances decline precipitously. In contrast to propagule and dispersal limitations, we find little evidence that infilling is habitat limited, meaning that as long as M. vimineum seeds are available and transported, the plant generally invades quite vigorously. Our results suggest an invasive species continues to spread, in a stratified manner, within the invaded landscape long after first arriving. These dynamics conflict with traditional invasion models that emphasize an invasive edge with distinct boundaries. We find that propagule pressure and dispersal regulate infilling, providing the basis for projecting spread and landscape coverage, ecological effects and the efficacy of containment strategies.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55146,Propagule pressure drives establishment of introduced freshwater fish: quantitative evidence from an irrigation network,S177834,R55147,hypothesis,L110271,Propagule pressure,"Propagule pressure is recognized as a fundamental driver of freshwater fish invasions, though few studies have quantified its role. Natural experiments can be used to quantify the role of this factor relative to others in driving establishment success. An irrigation network in South Africa takes water from an inter-basin water transfer (IBWT) scheme to supply multiple small irrigation ponds. We compared fish community composition upstream, within, and downstream of the irrigation network, to show that this system is a unidirectional dispersal network with a single immigration source. We then assessed the effect of propagule pressure and biological adaptation on the colonization success of nine fish species across 30 recipient ponds of varying age. Establishing species received significantly more propagules at the source than did incidental species, while rates of establishment across the ponds displayed a saturation response to propagule pressure. This shows that propagule pressure is a significant driver of establishment overall. Those species that did not establish were either extremely rare at the immigration source or lacked the reproductive adaptations to breed in the ponds. The ability of all nine species to arrive at some of the ponds illustrates how long-term continuous propagule pressure from IBWT infrastructure enables range expansion of fishes. The quantitative link between propagule pressure and success and rate of population establishment confirms the driving role of this factor in fish invasion ecology.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55150,Propagule pressure and colony social organization are associated with the successful invasion and rapid range expansion of fire ants in China,S177882,R55151,hypothesis,L110311,Propagule pressure,"We characterized patterns of genetic variation in populations of the fire ant Solenopsis invicta in China using mitochondrial DNA sequences and nuclear microsatellite loci to test predictions as to how propagule pressure and subsequent dispersal following establishment jointly shape the invasion success of this ant in this recently invaded area. Fire ants in Wuchuan (Guangdong Province) are genetically differentiated from those found in other large infested areas of China. The immediate source of ants in Wuchuan appears to be somewhere near Texas, which ranks first among the southern USA infested states in the exportation of goods to China. Most colonies from spatially distant, outlying areas in China are genetically similar to one another and appear to share a common source (Wuchuan, Guangdong Province), suggesting that long‐distance jump dispersal has been a prevalent means of recent spread of fire ants in China. Furthermore, most colonies at outlier sites are of the polygyne social form (featuring multiple egg‐laying queens per nest), reinforcing the important role of this social form in the successful invasion of new areas and subsequent range expansion following invasion. Several analyses consistently revealed characteristic signatures of genetic bottlenecks for S. invicta populations in China. The results of this study highlight the invasive potential of this pest ant, suggest that the magnitude of international trade may serve as a predictor of propagule pressure and indicate that rates and patterns of subsequent range expansion are partly determined by the interplay between species traits and the trade and transportation networks.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54963,Colonization Success in Roesel's Bush-Cricket Metrioptera roeseli: The Effects of Propagule Size,S175778,R54964,Measure of propagule pressure,L108588,Propagule size,"Assessing the colonizing ability of a species is important for predicting its future distribution or for planning the introduction or reintroduction of that species for conservation purposes. The best way to assess colonizing ability is by making experimental introductions of the species and monitoring the outcome. In this study, different-sized propagules of Roesel's bush-cricket, Metrioptera roeseli, were experimentally introduced into 70 habitat islands, previously uninhabited by the species, in farmland fields in south- eastern Sweden. The areas of introduction were carefully monitored for 2-3 yr to determine whether the propagules had successfully colonized the patches. The study showed that large propagules resulted in larger local populations during the years following introduction. Probability of colonization for each propagule size was measured and showed that propagule size had a significant effect on colonization success, i.e., large propagules were more successful in colonizing new patches. If future introductions were to be made with this or a similar species, a propagule size of at least 32 individuals would be required to establish a viable population with a high degree of certainty.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54973,How many founders for a biological invasion? Predicting introduction outcomes from propagule pressure,S175887,R54974,Measure of propagule pressure,L108677,Propagule size,"Ecological theory on biological invasions attempts to characterize the predictors of invasion success and the relative importance of the different drivers of population establishment. An outstanding question is how propagule pressure determines the probability of population establishment, where propagule pressure is the number of individuals of a species introduced into a specific location (propagule size) and their frequency of introduction (propagule number). Here, we used large-scale replicated mesocosm ponds over three reproductive seasons to identify how propagule size and number predict the probability of establishment of one of world's most invasive fish, Pseudorasbora parva, as well as its effect on the somatic growth of individuals during establishment. We demonstrated that, although a threshold of 11 introduced pairs of fish (a pair is 1 male, 1 female) was required for establishment probability to exceed 95%, establishment also occurred at low propagule size (1-5 pairs). Although single introduction events were as effective as multiple events at enabling establishment, the propagule sizes used in the multiple introductions were above the detected threshold for establishment. After three reproductive seasons, population abundance was also a function of propagule size, with rapid increases in abundance only apparent when propagule size exceeded 25 pairs. This was initially assisted by adapted biological traits, including rapid individual somatic growth that helped to overcome demographic bottlenecks.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54984,Global patterns of introduction effort and establishment success in birds,S176011,R54985,Measure of propagule pressure,L108779,Propagule size,"Theory suggests that introduction effort (propagule size or number) should be a key determinant of establishment success for exotic species. Unfortunately, however, propagule pressure is not recorded for most introductions. Studies must therefore either use proxies whose efficacy must be largely assumed, or ignore effort altogether. The results of such studies will be flawed if effort is not distributed at random with respect to other characteristics that are predicted to influence success. We use global data for more than 600 introduction events for birds to show that introduction effort is both the strongest correlate of introduction success, and correlated with a large number of variables previously thought to influence success. Apart from effort, only habitat generalism relates to establishment success in birds.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55044,The interacting effects of diversity and propagule pressure on early colonization and population size,S176678,R55045,Measure of propagule pressure,L109326,Propagule size,"We are now beginning to understand the role of intraspecific diversity on fundamental ecological phenomena. There exists a paucity of knowledge, however, regarding how intraspecific, or genetic diversity, may covary with other important factors such as propagule pressure. A combination of theoretical modelling and experimentation was used to explore the way propagule pressure and genetic richness may interact. We compare colonization rates of the Australian bivalve Saccostrea glomerata (Gould 1885). We cross propagule size and genetic richness in a factorial design in order to examine the generalities of our theoretical model. Modelling showed that diversity and propagule pressure should generally interact synergistically when positive feedbacks occur (e.g. aggregation). The strength of genotype effects depended on propagule size, or the numerical abundance of arriving individuals. When propagule size was very small (<4 individuals), however, greater genetic richness unexpectedly reduced colonization. The probability of S. glomerata colonization was 76% in genetically rich, larger propagules, almost 39 percentage points higher than in genetically poor propagules of similar size. This pattern was not observed in less dense, smaller propagules. We predict that density-dependent interactions between larvae in the water column may explain this pattern.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55046,Role of Propagule Size in the Success of Incipient Colonies of the Invasive Argentine Ant,S176701,R55047,Measure of propagule pressure,L109345,Propagule size,"Abstract: Factors that contribute to the successful establishment of invasive species are often poorly understood. Propagule size is considered a key determinant of establishment success, but experimental tests of its importance are rare. We used experimental colonies of the invasive Argentine ant ( Linepithema humile) that differed both in worker and queen number to test how these attributes influence the survivorship and growth of incipient colonies. All propagules without workers experienced queen mortality, in contrast to only 6% of propagules with workers. In small propagules (10–1,000 workers), brood production increased with worker number but not queen number. In contrast, per capita measures of colony growth decreased with worker number over these colony sizes. In larger propagules ( 1,000–11,000 workers), brood production also increased with increasing worker number, but per capita brood production appeared independent of colony size. Our results suggest that queens need workers to establish successfully but that propagules with as few as 10 workers can grow quickly. Given the requirements for propagule success in Argentine ants, it is not surprising how easily they spread via human commerce.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55057,"Role of propagule pressure in colonization success: disentangling the relative importance of demographic, genetic and habitat effects",S176825,R55058,Measure of propagule pressure,L109447,Propagule size,"High propagule pressure is arguably the only consistent predictor of colonization success. More individuals enhance colonization success because they aid in overcoming demographic consequences of small population size (e.g. stochasticity and Allee effects). The number of founders can also have direct genetic effects: with fewer individuals, more inbreeding and thus inbreeding depression will occur, whereas more individuals typically harbour greater genetic variation. Thus, the demographic and genetic components of propagule pressure are interrelated, making it difficult to understand which mechanisms are most important in determining colonization success. We experimentally disentangled the demographic and genetic components of propagule pressure by manipulating the number of founders (fewer or more), and genetic background (inbred or outbred) of individuals released in a series of three complementary experiments. We used Bemisia whiteflies and released them onto either their natal host (benign) or a novel host (challenging). Our experiments revealed that having more founding individuals and those individuals being outbred both increased the number of adults produced, but that only genetic background consistently shaped net reproductive rate of experimental populations. Environment was also important and interacted with propagule size to determine the number of adults produced. Quality of the environment interacted also with genetic background to determine establishment success, with a more pronounced effect of inbreeding depression in harsh environments. This interaction did not hold for the net reproductive rate. These data show that the positive effect of propagule pressure on founding success can be driven as much by underlying genetic processes as by demographics. Genetic effects can be immediate and have sizable effects on fitness.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R55122,"Behavioural plasticity associated with propagule size, resources, and the invasion success of the Argentine ant Linepithema humile",S177574,R55124,Measure of propagule pressure,L110057,Propagule size,"Summary 1. The number of individuals involved in an invasion event, or ‘propagule size’, has a strong theoretical basis for influencing invasion success. However, rarely has propagule size been experimentally manipulated to examine changes in invader behaviour, and propagule longevity and success. 2. We manipulated propagule size of the invasive Argentine ant Linepithema humile in laboratory and field studies. Laboratory experiments involved L. humile propagules containing two queens and 10, 100, 200 or 1000 workers. Propagules were introduced into arenas containing colonies of queens and 200 workers of the competing native ant Monomorium antarcticum . The effects of food availability were investigated via treatments of only one central resource, or 10 separated resources. Field studies used similar colony sizes of L. humile , which were introduced into novel environments near an invasion front. 3. In laboratory studies, small propagules of L. humile were quickly annihilated. Only the larger propagule size survived and killed the native ant colony in some replicates. Aggression was largely independent of food availability, but the behaviour of L. humile changed substantially with propagule size. In larger propagules, aggressive behaviour was significantly more frequent, while L. humile were much more likely to avoid conflict in smaller propagules. 4. In field studies, however, propagule size did not influence colony persistence. Linepithema humile colonies persisted for up to 2 months, even in small propagules of 10 workers. Factors such as temperature or competitor abundance had no effect, although some colonies were decimated by M. antarcticum . 5. Synthesis and applications. Although propagule size has been correlated with invasion success in a wide variety of taxa, our results indicate that it will have limited predictive power with species displaying behavioural plasticity. We recommend that aspects of animal behaviour be given much more consideration in attempts to model invasion success. Secondly, areas of high biodiversity are thought to offer biotic resistance to invasion via the abundance of predators and competitors. Invasive pests such as L. humile appear to modify their behaviour according to local conditions, and establishment was not related to resource availability. We cannot necessarily rely on high levels of native biodiversity to repel invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54753,Are invasive species the drivers or passengers of change in degraded ecosystems?,S173824,R54754,Measure of disturbance,L107202,reduction (mowing of aboveground biomass) ,"Few invaded ecosystems are free from habitat loss and disturbance, leading to uncertainty whether dominant invasive species are driving community change or are passengers along for the environmental ride. The ''driver'' model predicts that invaded communities are highly interactive, with subordinate native species being limited or ex- cluded by competition from the exotic dominants. The ''passenger'' model predicts that invaded communities are primarily structured by noninteractive factors (environmental change, dispersal limitation) that are less constraining on the exotics, which thus dominate. We tested these alternative hypotheses in an invaded, fragmented, and fire-suppressed oak savanna. We examined the impact of two invasive dominant perennial grasses on community structure using a reduction (mowing of aboveground biomass) and removal (weeding of above- and belowground biomass) experiment conducted at different seasons and soil depths. We examined the relative importance of competition vs. dispersal limitation with experimental seed additions. Competition by the dominants limits the abundance and re- production of many native and exotic species based on their increased performance with removals and mowing. The treatments resulted in increased light availability and bare soil; soil moisture and N were unaffected. Although competition was limiting for some, 36 of 79 species did not respond to the treatments or declined in the absence of grass cover. Seed additions revealed that some subordinates are dispersal limited; competition alone was insufficient to explain their rarity even though it does exacerbate dispersal inefficiencies by lowering reproduction. While the net effects of the dominants were negative, their presence restricted woody plants, facilitated seedling survival with moderate disturbance (i.e., treatments applied in the fall), or was not the primary limiting factor for the occurrence of some species. Finally, the species most functionally distinct from the dominants (forbs, woody plants) responded most significantly to the treatments. This suggests that relative abundance is determined more by trade-offs relating to environmental conditions (long- term fire suppression) than to traits relating to resource capture (which should most impact functionally similar species). This points toward the passenger model as the underlying cause of exotic dominance, although their combined effects (suppressive and facilitative) on community structure are substantial.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54040,Architectural strategies of Rhamnus cathartica (Rhamnaceae) in relation to canopy openness,S165378,R54041,Species name,L100296,Rhamnus cathartica L.,"While phenotypic plasticity is considered the major means that allows plant to cope with environmental heterogeneity, scant information is available on phenotypic plasticity of the whole-plant architecture in relation to ontogenic processes. We performed an architectural analysis to gain an understanding of the structural and ontogenic properties of common buckthorn (Rhamnus cathartica L., Rhamnaceae) growing in the understory and under an open canopy. We found that ontogenic effects on growth need to be calibrated if a full description of phenotypic plasticity is to be obtained. Our analysis pointed to three levels of organization (or nested structural units) in R. cathartica. Their modulation in relation to light conditions leads to the expression of two architectural strategies that involve sets of traits known to confer competitive advantage in their respective environments. In the understory, the plant develops a tree-like form. Its strategy here is based on restricting investment in exploitation str...",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54230,Leaf-level phenotypic variability and plasticity of invasive Rhododendron ponticum and non-invasive Ilex aquifolium co-occurring at two contrasting European sites,S167606,R54231,Species name,L102144,Rhododendron ponticum,"To understand the role of leaf-level plasticity and variability in species invasiveness, foliar characteristics were studied in relation to seasonal average integrated quantum flux density (Qint) in the understorey evergreen species Rhododendron ponticum and Ilex aquifolium at two sites. A native relict population of R. ponticum was sampled in southern Spain (Mediterranean climate), while an invasive alien population was investigated in Belgium (temperate maritime climate). Ilex aquifolium was native at both sites. Both species exhibited a significant plastic response to Qint in leaf dry mass per unit area, thickness, photosynthetic potentials, and chlorophyll contents at the two sites. However, R. ponticum exhibited a higher photosynthetic nitrogen use efficiency and larger investment of nitrogen in chlorophyll than I. aquifolium. Since leaf nitrogen (N) contents per unit dry mass were lower in R. ponticum, this species formed a larger foliar area with equal photosynthetic potential and light-harvesting efficiency compared with I. aquifolium. The foliage of R. ponticum was mechanically more resistant with larger density in the Belgian site than in the Spanish site. Mean leaf-level phenotypic plasticity was larger in the Belgian population of R. ponticum than in the Spanish population of this species and the two populations of I. aquifolium. We suggest that large fractional investments of foliar N in photosynthetic function coupled with a relatively large mean, leaf-level phenotypic plasticity may provide the primary explanation for the invasive nature and superior performance of R. ponticum at the Belgian site. With alleviation of water limitations from Mediterranean to temperate maritime climates, the invasiveness of R. ponticum may also be enhanced by the increased foliage mechanical resistance observed in the alien populations.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54074,Phenotypic divergence of exotic fish populations is shaped by spatial proximity and habitat differences across an invaded landscape,S165771,R54075,Species name,L100621,Salmo trutta,"Background: Brown trout (Salmo trutta) were introduced into, and subsequently colonized, a number of disparate watersheds on the island of Newfoundland, Canada (110,638 km 2 ), starting in 1883. Questions: Do environmental features of recently invaded habitats shape population-level phenotypic variability? Are patterns of phenotypic variability suggestive of parallel adaptive divergence? And does the extent of phenotypic divergence increase as a function of distance between populations? Hypotheses: Populations that display similar phenotypes will inhabit similar environments. Patterns in morphology, coloration, and growth in an invasive stream-dwelling fish should be consistent with adaptation, and populations closer to each other should be more similar than should populations that are farther apart. Organism and study system: Sixteen brown trout populations of probable common descent, inhabiting a gradient of environments. These populations include the most ancestral (∼130 years old) and most recently established (∼20 years old). Analytical methods: We used multivariate statistical techniques to quantify morphological (e.g. body shape via geometric morphometrics and linear measurements of traits), meristic (e.g. counts of pigmentation spots), and growth traits from 1677 individuals. To account for ontogenetic and allometric effects on morphology, we conducted separate analyses on three distinct size/age classes. We used the BIO-ENV routine and Mantel tests to measure the correlation between phenotypic and habitat features. Results: Phenotypic similarity was significantly correlated with environmental similarity, especially in the larger size classes of fish. The extent to which these associations between phenotype and habitat result from parallel evolution, adaptive phenotypic plasticity, or historical founder effects is not known. Observed patterns of body shape and fin sizes were generally consistent with predictions of adaptive trait patterns, but other traits showed less consistent patterns with habitat features. Phenotypic differences increased as a function of straight-line distance (km) between watersheds and to a lesser extent fish dispersal distances, which suggests habitat has played a more significant role in shaping population phenotypes compared with founder effects.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54144,Evolution of dispersal traits along an invasion route in the wind-dispersed Senecio inaequidens (Asteraceae) ,S166589,R54145,Species name,L101299,Senecio inaequidens,"In introduced organisms, dispersal propensity is expected to increase during range expansion. This prediction is based on the assumption that phenotypic plasticity is low compared to genetic diversity, and an increase in dispersal can be counteracted by the Allee effect. Empirical evidence in support of these hypotheses is however lacking. The present study tested for evidence of differentiation in dispersal-related traits and the Allee effect in the wind-dispersed invasive Senecio inaequidens (Asteraceae). We collected capitula from individuals in ten field populations, along an invasion route including the original introduction site in southern France. In addition, we conducted a common garden experiment from field-collected seeds and obtained capitula from individuals representing the same ten field populations. We analysed phenotypic variation in dispersal traits between field and common garden environments as a function of the distance between populations and the introduction site. Our results revealed low levels of phenotypic differentiation among populations. However, significant clinal variation in dispersal traits was demonstrated in common garden plants representing the invasion route. In field populations, similar trends in dispersal-related traits and evidence of an Allee effect were not detected. In part, our results supported expectations of increased dispersal capacity with range expansion, and emphasized the contribution of phenotypic plasticity under natural conditions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54196,Increased fitness and plasticity of an invasive species in its introduced range: a study using Senecio pterophorus,S167204,R54197,Species name,L101810,Senecio pterophorus,"1 When a plant species is introduced into a new range, it may differentiate genetically from the original populations in the home range. This genetic differentiation may influence the extent to which the invasion of the new range is successful. We tested this hypothesis by examining Senecio pterophorus, a South African shrub that was introduced into NE Spain about 40 years ago. We predicted that in the introduced range invasive populations would perform better and show greater plasticity than native populations. 2 Individuals of S. pterophorus from four Spanish (invasive) and four South African (native) populations were grown in Catalonia, Spain, in a common garden in which disturbance and water availability were manipulated. Fitness traits and several ecophysiological parameters were measured. 3 The invasive populations of S. pterophorus survived better throughout the summer drought in a disturbed (unvegetated) environment than native South African populations. This success may be attributable to the lower specific leaf area (SLA) and better water content regulation of the invasive populations in this treatment. 4 Invasive populations displayed up to three times higher relative growth rate than native populations under conditions of disturbance and non‐limiting water availability. 5 The reproductive performance of the invasive populations was higher in all treatments except under the most stressful conditions (i.e. in non‐watered undisturbed plots), where no plant from either population flowered. 6 The results for leaf parameters and chlorophyll fluorescence measurements suggested that the greater fitness of the invasive populations could be attributed to more favourable ecophysiological responses. 7 Synthesis. Spanish invasive populations of S. pterophorus performed better in the presence of high levels of disturbance, and displayed higher plasticity of fitness traits in response to resource availability than native South African populations. Our results suggest that genetic differentiation from source populations associated with founding may play a role in invasion success.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R57207,Diversity and biomass of native macrophytes are negatively related to dominance of an invasive Poaceae in Brazilian sub-tropical streams,S194987,R57208,Measure of native biodiversity,L122250,Shannon index,"Besides exacerbated exploitation, pollution, flow alteration and habitats degradation, freshwater biodiversity is also threatened by biological invasions. This paper addresses how native aquatic macrophyte communities are affected by the non-native species Urochloa arrecta, a current successful invader in Brazilian freshwater systems. We compared the native macrophytes colonizing patches dominated and non-dominated by this invader species. We surveyed eight streams in Northwest Parana State (Brazil). In each stream, we recorded native macrophytes' richness and biomass in sites where U. arrecta was dominant and in sites where it was not dominant or absent. No native species were found in seven, out of the eight investigated sites where U. arrecta was dominant. Thus, we found higher native species richness, Shannon index and native biomass values in sites without dominance of U. arrecta than in sites dominated by this invader. Although difficult to conclude about causes of such differences, we infer that the elevated biomass production by this grass might be the primary reason for alterations in invaded environments and for the consequent impacts on macrophytes' native communities. However, biotic resistance offered by native richer sites could be an alternative explanation for our results. To mitigate potential impacts and to prevent future environmental perturbations, we propose mechanical removal of the invasive species and maintenance or restoration of riparian vegetation, for freshwater ecosystems have vital importance for the maintenance of ecological services and biodiversity and should be preserved.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54826,Shoreline development drives invasion of Phragmites australis and the loss of plant diversity on New England salt marshes,S174687,R54827,Measure of disturbance,L107919,shoreline development,"Abstract: The reed Phragmites australis Cav. is aggressively invading salt marshes along the Atlantic Coast of North America. We examined the interactive role of habitat alteration (i.e., shoreline development) in driving this invasion and its consequences for plant richness in New England salt marshes. We surveyed 22 salt marshes in Narragansett Bay, Rhode Island, and quantified shoreline development, Phragmites cover, soil salinity, and nitrogen availability. Shoreline development, operationally defined as removal of the woody vegetation bordering marshes, explained >90% of intermarsh variation in Phragmites cover. Shoreline development was also significantly correlated with reduced soil salinities and increased nitrogen availability, suggesting that removing woody vegetation bordering marshes increases nitrogen availability and decreases soil salinities, thus facilitating Phragmites invasion. Soil salinity (64%) and nitrogen availability (56%) alone explained a large proportion of variation in Phragmites cover, but together they explained 80% of the variation in Phragmites invasion success. Both univariate and aggregate (multidimensional scaling) analyses of plant community composition revealed that Phragmites dominance in developed salt marshes resulted in an almost three‐fold decrease in plant species richness. Our findings illustrate the importance of maintaining integrity of habitat borders in conserving natural communities and provide an example of the critical role that local conservation can play in preserving these systems. In addition, our findings provide ecologists and natural resource managers with a mechanistic understanding of how human habitat alteration in one vegetation community can interact with species introductions in adjacent communities (i.e., flow‐on or adjacency effects) to hasten ecosystem degradation.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54627,Variable effects of feral pig disturbances on native and exotic plants in a California grassland,S172328,R54629,Type of disturbance,L105956,Soil disturbance,"Biological invasions are a global phenomenon that can accelerate disturbance regimes and facilitate colonization by other nonnative species. In a coastal grassland in northern California, we conducted a four-year exclosure experiment to assess the effects of soil disturbances by feral pigs (Sus scrofa) on plant community composition and soil nitrogen availability. Our results indicate that pig disturbances had substantial effects on the community, although many responses varied with plant functional group, geographic origin (native vs. exotic), and grassland type. (''Short patches'' were dominated by annual grasses and forbs, whereas ''tall patches'' were dominated by perennial bunchgrasses.) Soil disturbances by pigs increased the richness of exotic plant species by 29% and native taxa by 24%. Although native perennial grasses were unaffected, disturbances reduced the biomass of exotic perennial grasses by 52% in tall patches and had no effect in short patches. Pig disturbances led to a 69% decrease in biomass of exotic annual grasses in tall patches but caused a 62% increase in short patches. Native, nongrass monocots exhibited the opposite biomass pattern as those seen for exotic annual grasses, with disturbance causing an 80% increase in tall patches and a 56% decrease in short patches. Native forbs were unaffected by disturbance, whereas the biomass of exotic forbs increased by 79% with disturbance in tall patches and showed no response in short patches. In contrast to these vegetation results, we found no evidence that pig disturbances affected nitrogen mineralization rates or soil moisture availability. Thus, we hypothesize that the observed vegetation changes were due to space clearing by pigs that provided greater opportunities for colonization and reduced intensity of competition, rather than changes in soil characteristics. In summary, although responses were variable, disturbances by feral pigs generally promoted the continued invasion of this coastal grassland by exotic plant taxa.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54028,Jack-of-all-trades: phenotypic plasticity facilitates the invasion of an alien slug species,S165233,R54029,Specific traits,L100175,Survival and egg production,"Invasive alien species might benefit from phenotypic plasticity by being able to (i) maintain fitness in stressful environments (‘robust’), (ii) increase fitness in favourable environments (‘opportunistic’), or (iii) combine both abilities (‘robust and opportunistic’). Here, we applied this framework, for the first time, to an animal, the invasive slug, Arion lusitanicus, and tested (i) whether it has a more adaptive phenotypic plasticity compared with a congeneric native slug, Arion fuscus, and (ii) whether it is robust, opportunistic or both. During one year, we exposed specimens of both species to a range of temperatures along an altitudinal gradient (700–2400 m a.s.l.) and to high and low food levels, and we compared the responsiveness of two fitness traits: survival and egg production. During summer, the invasive species had a more adaptive phenotypic plasticity, and at high temperatures and low food levels, it survived better and produced more eggs than A. fuscus, representing the robust phenotype. During winter, A. lusitanicus displayed a less adaptive phenotype than A. fuscus. We show that the framework developed for plants is also very useful for a better mechanistic understanding of animal invasions. Warmer summers and milder winters might lead to an expansion of this invasive species to higher altitudes and enhance its spread in the lowlands, supporting the concern that global climate change will increase biological invasions.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54036,Latitudinal Patterns in Phenotypic Plasticity and Fitness-Related Traits: Assessing the Climatic Variability Hypothesis (CVH) with an Invasive Plant Species,S165330,R54037,Species name,L100256,Taraxacum officinale,"Phenotypic plasticity has been suggested as the main mechanism for species persistence under a global change scenario, and also as one of the main mechanisms that alien species use to tolerate and invade broad geographic areas. However, contrasting with this central role of phenotypic plasticity, standard models aimed to predict the effect of climatic change on species distributions do not allow for the inclusion of differences in plastic responses among populations. In this context, the climatic variability hypothesis (CVH), which states that higher thermal variability at higher latitudes should determine an increase in phenotypic plasticity with latitude, could be considered a timely and promising hypothesis. Accordingly, in this study we evaluated, for the first time in a plant species (Taraxacum officinale), the prediction of the CVH. Specifically, we measured plastic responses at different environmental temperatures (5 and 20°C), in several ecophysiological and fitness-related traits for five populations distributed along a broad latitudinal gradient. Overall, phenotypic plasticity increased with latitude for all six traits analyzed, and mean trait values increased with latitude at both experimental temperatures, the change was noticeably greater at 20° than at 5°C. Our results suggest that the positive relationship found between phenotypic plasticity and geographic latitude could have very deep implications on future species persistence and invasion processes under a scenario of climate change.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54128,Functional differences in response to drought in the invasive Taraxacum officinale from native and introduced alpine habitat ranges,S166402,R54129,Species name,L101144,Taraxacum officinale,"Background: Phenotypic plasticity and ecotypic differentiation have been suggested as the main mechanisms by which widely distributed species can colonise broad geographic areas with variable and stressful conditions. Some invasive plant species are among the most widely distributed plants worldwide. Plasticity and local adaptation could be the mechanisms for colonising new areas. Aims: We addressed if Taraxacum officinale from native (Alps) and introduced (Andes) stock responded similarly to drought treatment, in terms of photosynthesis, foliar angle, and flowering time. We also evaluated if ontogeny affected fitness and physiological responses to drought. Methods: We carried out two common garden experiments with both seedlings and adults (F2) of T. officinale from its native and introduced ranges in order to evaluate their plasticity and ecotypic differentiation under a drought treatment. Results: Our data suggest that the functional response of T. officinale individuals from the introduced range to drought is the result of local adaptation rather than plasticity. In addition, the individuals from the native distribution range were more sensitive to drought than those from the introduced distribution ranges at both seedling and adult stages. Conclusions: These results suggest that local adaptation may be a possible mechanism underlying the successful invasion of T. officinale in high mountain environments of the Andes.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R4796,The hierarchy-of-hypotheses approach: A synthesis method for enhancing theory development in ecology and evolution,S5276,R4816,method,R4818,the hierarchy-of-hypotheses (HoH) approach,"Abstract In the current era of Big Data, existing synthesis tools such as formal meta-analyses are critical means to handle the deluge of information. However, there is a need for complementary tools that help to (a) organize evidence, (b) organize theory, and (c) closely connect evidence to theory. We present the hierarchy-of-hypotheses (HoH) approach to address these issues. In an HoH, hypotheses are conceptually and visually structured in a hierarchically nested way where the lower branches can be directly connected to empirical results. Used for organizing evidence, this tool allows researchers to conceptually connect empirical results derived through diverse approaches and to reveal under which circumstances hypotheses are applicable. Used for organizing theory, it allows researchers to uncover mechanistic components of hypotheses and previously neglected conceptual connections. In the present article, we offer guidance on how to build an HoH, provide examples from population and evolutionary biology and propose terminological clarifications.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54020,"Establishment of an Invasive Plant Species (Conium maculatum) in Contaminated Roadside Soil in Cook County, Illinois",S165141,R54021,Specific traits,L100099,Tolerance to heavy metals,"Abstract Interactions between environmental variables in anthropogenically disturbed environments and physiological traits of invasive species may help explain reasons for invasive species' establishment in new areas. Here we analyze how soil contamination along roadsides may influence the establishment of Conium maculatum (poison hemlock) in Cook County, IL, USA. We combine analyses that: (1) characterize the soil and measure concentrations of heavy metals and polycyclic aromatic hydrocarbons (PAHs) where Conium is growing; (2) assess the genetic diversity and structure of individuals among nine known populations; and (3) test for tolerance to heavy metals and evidence for local soil growth advantage with greenhouse establishment experiments. We found elevated levels of metals and PAHs in the soil where Conium was growing. Specifically, arsenic (As), cadmium (Cd), and lead (Pb) were found at elevated levels relative to U.S. EPA ecological contamination thresholds. In a greenhouse study we found that Conium is more tolerant of soils containing heavy metals (As, Cd, Pb) than two native species. For the genetic analysis a total of 217 individuals (approximately 20–30 per population) were scored with 5 ISSR primers, yielding 114 variable loci. We found high levels of genetic diversity in all populations but little genetic structure or differentiation among populations. Although Conium shows a general tolerance to contamination, we found few significant associations between genetic diversity metrics and a suite of measured environmental and spatial parameters. Soil contamination is not driving the peculiar spatial distribution of Conium in Cook County, but these findings indicate that Conium is likely establishing in the Chicago region partially due to its ability to tolerate high levels of metal contamination.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54186,Establishment of parallel altitudinal clines in traits of native and introduced forbs,S167086,R54187,Specific traits,L101712,Trends in growth and reproductive traits,"Due to altered ecological and evolutionary contexts, we might expect the responses of alien plants to environmental gradients, as revealed through patterns of trait variation, to differ from those of the same species in their native range. In particular, the spread of alien plant species along such gradients might be limited by their ability to establish clinal patterns of trait variation. We investigated trends in growth and reproductive traits in natural populations of eight invasive Asteraceae forbs along altitudinal gradients in their native and introduced ranges (Valais, Switzerland, and Wallowa Mountains, Oregon, USA). Plants showed similar responses to altitude in both ranges, being generally smaller and having fewer inflorescences but larger seeds at higher altitudes. However, these trends were modified by region-specific effects that were independent of species status (native or introduced), suggesting that any differential performance of alien species in the introduced range cannot be interpreted without a fully reciprocal approach to test the basis of these differences. Furthermore, we found differences in patterns of resource allocation to capitula among species in the native and the introduced areas. These suggest that the mechanisms underlying trait variation, for example, increasing seed size with altitude, might differ between ranges. The rapid establishment of clinal patterns of trait variation in the new range indicates that the need to respond to altitudinal gradients, possibly by local adaptation, has not limited the ability of these species to invade mountain regions. Studies are now needed to test the underlying mechanisms of altitudinal clines in traits of alien species.",TRUE,noun phrase
R24,Ecology and Evolutionary Biology,R54170,Life history plasticity magnifies the ecological effects of a social wasp invasion ,S166896,R54171,Species name,L101554,Vespula pensylvanica,"An unresolved question in ecology concerns why the ecological effects of invasions vary in magnitude. Many introduced species fail to interact strongly with the recipient biota, whereas others profoundly disrupt the ecosystems they invade through predation, competition, and other mechanisms. In the context of ecological impacts, research on biological invasions seldom considers phenotypic or microevolutionary changes that occur following introduction. Here, we show how plasticity in key life history traits (colony size and longevity), together with omnivory, magnifies the predatory impacts of an invasive social wasp (Vespula pensylvanica) on a largely endemic arthropod fauna in Hawaii. Using a combination of molecular, experimental, and behavioral approaches, we demonstrate (i) that yellowjackets consume an astonishing diversity of arthropod resources and depress prey populations in invaded Hawaiian ecosystems and (ii) that their impact as predators in this region increases when they shift from small annual colonies to large perennial colonies. Such trait plasticity may influence invasion success and the degree of disruption that invaded ecosystems experience. Moreover, postintroduction phenotypic changes may help invaders to compensate for reductions in adaptive potential resulting from founder events and small population sizes. The dynamic nature of biological invasions necessitates a more quantitative understanding of how postintroduction changes in invader traits affect invasion processes.",TRUE,noun phrase
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717580,R187515,influencing factor,R187152,COVID-19 pandemic,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,noun phrase
R267,Energy Systems,R110083,Optimal Sizing and Scheduling of Hybrid Energy Systems: The Cases of Morona Santiago and the Galapagos Islands,S502057,R110088,System location,L362964,Morona Santiago,"Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV–wind–battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV–diesel–battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.",TRUE,noun phrase
R267,Energy Systems,R110083,Optimal Sizing and Scheduling of Hybrid Energy Systems: The Cases of Morona Santiago and the Galapagos Islands,S502126,R110088,System components,L362995,Wind turbine,"Hybrid energy systems (HESs) generate electricity from multiple energy sources that complement each other. Recently, due to the reduction in costs of photovoltaic (PV) modules and wind turbines, these types of systems have become economically competitive. In this study, a mathematical programming model is applied to evaluate the techno-economic feasibility of autonomous units located in two isolated areas of Ecuador: first, the province of Galapagos (subtropical island) and second, the province of Morona Santiago (Amazonian tropical forest). The two case studies suggest that HESs are potential solutions to reduce the dependence of rural villages on fossil fuels and viable mechanisms to bring electrical power to isolated communities in Ecuador. Our results reveal that not only from the economic but also from the environmental point of view, for the case of the Galapagos province, a hybrid energy system with a PV–wind–battery configuration and a levelized cost of energy (LCOE) equal to 0.36 $/kWh is the optimal energy supply system. For the case of Morona Santiago, a hybrid energy system with a PV–diesel–battery configuration and an LCOE equal to 0.37 $/kWh is the most suitable configuration to meet the load of a typical isolated community in Ecuador. The proposed optimization model can be used as a decision-support tool for evaluating the viability of autonomous HES projects at any other location.",TRUE,noun phrase
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555148,R139286,keywords,L390522, activated screen-printed carbon electrodes,"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM−1 cm−2), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,noun phrase
R194,Engineering,R139273,A Highly Sensitive Nonenzymatic Glucose Biosensor Based on the Regulatory Effect of Glucose on Electrochemical Behaviors of Colloidal Silver Nanoparticles on MoS2,S555075,R139276,Sensing material,L390462, Colloidal silver nanoparticles (AgNPs),"A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. Additionally, the excellent sensitivity (9044.6 μA·mM−1·cm−2), low detection limit (0.03 μM), appropriate linear range of 0.1–1000 μM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.",TRUE,noun phrase
R194,Engineering,R141153,Effect of Environmental Humidity on Dielectric Charging Effect in RF MEMS Capacitive Switches Based on C–V Properties,S564262,R141155,Study Area,L395988,Microelectromechanical Systems (MEMS),"A capacitance-voltage (C–V) model is developed for RF microelectromechanical systems (MEMS) switches at upstate and downstate. The transient capacitance response of the RF MEMS switches at different switch states was measured for different humidity levels. By using the C–V model as well as the voltage shift dependent of trapped charges, the transient trapped charges at different switch states and humidity levels are obtained. Charging models at different switch states are explored in detail. It is shown that the injected charges increase linearly with humidity levels and the internal polarization increases with increasing humidity at downstate. The speed of charge injection at 80% relative humidity (RH) is about ten times faster than that at 20% RH. A measurement of pull-in voltage shifts by C–V sweep cycles at 20% and 80% RH gives a reasonable evidence. The present model is useful to understand the pull-in voltage shift of the RF MEMS switch.",TRUE,noun phrase
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555149,R139286,substrate,L390523,activated screen-printed carbon,"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM−1 cm−2), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,noun phrase
R194,Engineering,R141147,RF MEMS Shunt Capacitive Switches Using AlN Compared to Si3N4 Dielectric,S564206,R141149,keywords,L395946,aluminum nitride,"RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 μm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.",TRUE,noun phrase
R194,Engineering,R145538,Role of ALD Al2O3 Surface Passivation on the Performance of p-Type Cu2O Thin Film Transistors,S582820,R145548,keywords,L407060,Atomic layer deposition,"High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 °C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.",TRUE,noun phrase
R194,Engineering,R145538,Role of ALD Al2O3 Surface Passivation on the Performance of p-Type Cu2O Thin Film Transistors,S582952,R145548,Film deposition method,L407160,Atomic layer deposition (ALD),"High-performance p-type oxide thin film transistors (TFTs) have great potential for many semiconductor applications. However, these devices typically suffer from low hole mobility and high off-state currents. We fabricated p-type TFTs with a phase-pure polycrystalline Cu2O semiconductor channel grown by atomic layer deposition (ALD). The TFT switching characteristics were improved by applying a thin ALD Al2O3 passivation layer on the Cu2O channel, followed by vacuum annealing at 300 °C. Detailed characterization by transmission electron microscopy-energy dispersive X-ray analysis and X-ray photoelectron spectroscopy shows that the surface of Cu2O is reduced following Al2O3 deposition and indicates the formation of a 1-2 nm thick CuAlO2 interfacial layer. This, together with field-effect passivation caused by the high negative fixed charge of the ALD Al2O3, leads to an improvement in the TFT performance by reducing the density of deep trap states as well as by reducing the accumulation of electrons in the semiconducting layer in the device off-state.",TRUE,noun phrase
R194,Engineering,R141130,Effects of surface roughness on electromagnetic characteristics of capacitive switches,S564096,R141132,keywords,L395860,Capacitive switches,"This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.",TRUE,noun phrase
R194,Engineering,R148163,Sensitivity-enhanced temperature sensor by hybrid cascaded configuration of a Sagnac loop and a F-P cavity,S594556,R148165,keywords,L413322,cascaded configuration,"A hybrid cascaded configuration consisting of a fiber Sagnac interferometer (FSI) and a Fabry-Perot interferometer (FPI) was proposed and experimentally demonstrated to enhance the temperature sensitivity by the Vernier-effect. The FSI, which consists of a certain length of Panda fiber, is for temperature sensing, while the FPI acts as a filter due to its temperature insensitivity. The two interferometers have almost the same free spectral range, with the spectral envelope of the cascaded sensor shifting much more than the single FSI. Experimental results show that the temperature sensitivity is enhanced from −1.4 nm/°C (single FSI) to −29.0 (cascaded configuration). The enhancement factor is 20.7, which is basically consistent with theoretical analysis (19.9).",TRUE,noun phrase
R194,Engineering,R144807,Solar blind deep ultraviolet β-Ga2O3 photodetectors grown on sapphire by the Mist-CVD method,S579747,R144810,substrate,L405362,c-plane sapphire,"In this report, we demonstrate high spectral responsivity (SR) solar blind deep ultraviolet (UV) β-Ga2O3 metal-semiconductor-metal (MSM) photodetectors grown by the mist chemical-vapor deposition (Mist-CVD) method. The β-Ga2O3 thin film was grown on c-plane sapphire substrates, and the fabricated MSM PDs with Al contacts in an interdigitated geometry were found to exhibit peak SR > 150 A/W for the incident light wavelength of 254 nm at a bias of 20 V. The devices exhibited very low dark current, about 14 pA at 20 V, and showed sharp transients with a photo-to-dark current ratio > 10⁵. The corresponding external quantum efficiency is over 7 × 10⁴%. The excellent deep UV β-Ga2O3 photodetectors will enable significant advancements for the next-generation photodetection applications.",TRUE,noun phrase
R194,Engineering,R139290,Engineered Hierarchical CuO Nanoleaves Based Electrochemical Nonenzymatic Biosensor for Glucose Detection,S555208,R139294,keywords,L390572,CuO nanoleaves,"In this study, we synthesized hierarchical CuO nanoleaves in large-quantity via the hydrothermal method. We employed different techniques to characterize the morphological, structural, optical properties of the as-prepared hierarchical CuO nanoleaves sample. An electrochemical based nonenzymatic glucose biosensor was fabricated using engineered hierarchical CuO nanoleaves. The electrochemical behavior of fabricated biosensor towards glucose was analyzed with cyclic voltammetry (CV) and amperometry (i–t) techniques. Owing to the high electroactive surface area, hierarchical CuO nanoleaves based nonenzymatic biosensor electrode shows enhanced electrochemical catalytic behavior for glucose electro-oxidation in 100 mM sodium hydroxide (NaOH) electrolyte. The nonenzymatic biosensor displays a high sensitivity (1467.32 μA/(mM cm²)), linear range (0.005–5.89 mM), and detection limit of 12 nM (S/N = 3). Moreover, biosensor displayed good selectivity, reproducibility, repeatability, and stability at room temperature over three-week storage period. Further, as-fabricated nonenzymatic glucose biosensors were employed for practical applications in human serum sample measurements. The obtained data were compared to the commercial biosensor, which demonstrates the practical usability of nonenzymatic glucose biosensors in real sample analysis.",TRUE,noun phrase
R194,Engineering,R139290,Engineered Hierarchical CuO Nanoleaves Based Electrochemical Nonenzymatic Biosensor for Glucose Detection,S555198,R139294,Sensing material,L390562,CuO nanoleaves,"In this study, we synthesized hierarchical CuO nanoleaves in large-quantity via the hydrothermal method. We employed different techniques to characterize the morphological, structural, optical properties of the as-prepared hierarchical CuO nanoleaves sample. An electrochemical based nonenzymatic glucose biosensor was fabricated using engineered hierarchical CuO nanoleaves. The electrochemical behavior of fabricated biosensor towards glucose was analyzed with cyclic voltammetry (CV) and amperometry (i–t) techniques. Owing to the high electroactive surface area, hierarchical CuO nanoleaves based nonenzymatic biosensor electrode shows enhanced electrochemical catalytic behavior for glucose electro-oxidation in 100 mM sodium hydroxide (NaOH) electrolyte. The nonenzymatic biosensor displays a high sensitivity (1467.32 μA/(mM cm²)), linear range (0.005–5.89 mM), and detection limit of 12 nM (S/N = 3). Moreover, biosensor displayed good selectivity, reproducibility, repeatability, and stability at room temperature over three-week storage period. Further, as-fabricated nonenzymatic glucose biosensors were employed for practical applications in human serum sample measurements. The obtained data were compared to the commercial biosensor, which demonstrates the practical usability of nonenzymatic glucose biosensors in real sample analysis.",TRUE,noun phrase
R194,Engineering,R141884,Enhancement of c-Axis Oriented Aluminum Nitride Films via Low Temperature DC Sputtering,S569234,R141886,Film deposition method,L399504,DC sputtering method,"In this study, we successfully deposit c-axis oriented aluminum nitride (AlN) piezoelectric films at low temperature (100 °C) via the DC sputtering method with tilt gun. The X-ray diffraction (XRD) observations prove that the deposited films have a c-axis preferred orientation. Effective d33 value of the proposed films is 5.92 pm/V, which is better than most of the reported data using DC sputtering or other processing methods. It is found that the gun placed at 25° helped the films to rearrange at low temperature and c-axis orientation AlN films were successfully grown at 100 °C. This temperature is much lower than the reported growing temperature. It means the piezoelectric films can be deposited at flexible substrate and the photoresist can be stable at this temperature. The cantilever beam type microelectromechanical systems (MEMS) piezoelectric accelerometers are then fabricated based on the proposed AlN films with a lift-off process. The results show that the responsivity of the proposed devices is 8.12 mV/g, and the resonance frequency is 460 Hz, which indicates they can be used for machine tools.",TRUE,noun phrase
R194,Engineering,R139602,Organometal Halide Perovskites as Visible-Light Sensitizers for Photovoltaic Cells,S557066,R139603,keywords,L391560,Electrochemical Cells,"Two organolead halide perovskite nanocrystals, CH(3)NH(3)PbBr(3) and CH(3)NH(3)PbI(3), were found to efficiently sensitize TiO(2) for visible-light conversion in photoelectrochemical cells. When self-assembled on mesoporous TiO(2) films, the nanocrystalline perovskites exhibit strong band-gap absorptions as semiconductors. The CH(3)NH(3)PbI(3)-based photocell with spectral sensitivity of up to 800 nm yielded a solar energy conversion efficiency of 3.8%. The CH(3)NH(3)PbBr(3)-based cell showed a high photovoltage of 0.96 V with an external quantum conversion efficiency of 65%.",TRUE,noun phrase
R194,Engineering,R148172,Cylinder-type fiber-optic Vernier probe based on cascaded Fabry–Perot interferometers with a controlled FSR ratio,S594560,R148175,keywords,L413325,Fabry-Perot interferometer,"We designed a cylinder-type fiber-optic Vernier probe based on cascaded Fabry-Perot interferometers (FPIs) in this paper. It is fabricated by inserting a short single-mode fiber (SMF) column into a large-aperture hollow-core fiber (LA-HCF) with an internal diameter of 150 µm, which structures a length adjusted air microcavity with the lead-in SMF inserted into the LA-HCF from the other end. The length of the SMF column is 537.9 µm. By adjusting the distance between the SMF column and the lead-in SMF, the spectral change is displayed intuitively, and the Vernier spectra are recorded and analyzed. In sensitivity analysis, the probe is encapsulated in the medical needle by ultraviolet glue as a small body thermometer when the length of the air microcavity is 715.5 µm. The experiment shows that the sensitivity of the Vernier envelope is 12.55 times higher than that of the high-frequency comb. This design can effectively reduce the preparation difficulty of the optical fiber Vernier sensor based on cascaded FPIs, and can expand the applied fields by using different fibers and materials.",TRUE,noun phrase
R194,Engineering,R148179,Sensitivity-Enhanced Fiber Temperature Sensor Based on Vernier Effect and Dual In-Line Mach–Zehnder Interferometers,S594116,R148183,keywords,L413091,Fiber temperature sensor,"A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches 528.5 pm/°C in the range of 0 °C–100 °C, which is 17.5 times as high as that without enhancement by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical prediction (18.3 times). The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.",TRUE,noun phrase
R194,Engineering,R141873,Preparation of highly c-axis oriented AlN thin films on Hastelloy tapes with Y2O3 buffer layer for flexible SAW sensor applications,S569156,R141876,Device type,L399439,Flexible SAW sensor,"Highly c-axis oriented aluminum nitride (AlN) films were successfully deposited on flexible Hastelloy tapes by middle-frequency magnetron sputtering. The microstructure and piezoelectric properties of the AlN films were investigated. The results show that the AlN films deposited directly on the bare Hastelloy substrate have rough surface with root mean square (RMS) roughness of 32.43nm and its full width at half maximum (FWHM) of the AlN (0002) peak is 12.5∘. However, the AlN films deposited on the Hastelloy substrate with Y2O3 buffer layer show smooth surface with RMS roughness of 5.46nm and its FWHM of the AlN (0002) peak is only 3.7∘. The piezoelectric coefficient d33 of the AlN films deposited on the Y2O3/Hastelloy substrate is larger than three times that of the AlN films deposited on the bare Hastelloy substrate. The prepared highly c-axis oriented AlN films can be used to develop high-temperature flexible SAW sensors.",TRUE,noun phrase
R194,Engineering,R143687,Design and Development of a Flexible Strain Sensor for Textile Structures Based on a Conductive Polymer Composite,S574969,R143689,keywords,L402733,flexible sensor,"The aim of this work is to develop a smart flexible sensor adapted to textile structures, able to measure their strain deformations. The sensors are “smart” because of their capacity to adapt to the specific mechanical properties of textile structures that are lightweight, highly flexible, stretchable, elastic, etc. Because of these properties, textile structures are continuously in movement and easily deformed, even under very low stresses. It is therefore important that the integration of a sensor does not modify their general behavior. The material used for the sensor is based on a thermoplastic elastomer (Evoprene)/carbon black nanoparticle composite, and presents general mechanical properties strongly compatible with the textile substrate. Two preparation techniques are investigated: the conventional melt-mixing process, and the solvent process which is found to be more adapted for this particular application. The preparation procedure is fully described, namely the optimization of the process in terms of filler concentration in which the percolation theory aspects have to be considered. The sensor is then integrated on a thin, lightweight Nylon fabric, and the electromechanical characterization is performed to demonstrate the adaptability and the correct functioning of the sensor as a strain gauge on the fabric. A normalized relative resistance is defined in order to characterize the electrical response of the sensor. Finally, the influence of environmental factors, such as temperature and atmospheric humidity, on the sensor performance is investigated. The results show that the sensor's electrical resistance is particularly affected by humidity. This behavior is discussed in terms of the sensitivity of the carbon black filler particles to the presence of water.",TRUE,noun phrase
R194,Engineering,R139273,A Highly Sensitive Nonenzymatic Glucose Biosensor Based on the Regulatory Effect of Glucose on Electrochemical Behaviors of Colloidal Silver Nanoparticles on MoS2,S555076,R139276,keywords,L390463,glucose biosensor,"A novel and highly sensitive nonenzymatic glucose biosensor was developed by nucleating colloidal silver nanoparticles (AgNPs) on MoS2. The facile fabrication method, high reproducibility (97.5%) and stability indicates a promising capability for large-scale manufacturing. Additionally, the excellent sensitivity (9044.6 μA·mM−1·cm−2), low detection limit (0.03 μM), appropriate linear range of 0.1–1000 μM, and high selectivity suggests that this biosensor has a great potential to be applied for noninvasive glucose detection in human body fluids, such as sweat and saliva.",TRUE,noun phrase
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555144,R139286,keywords,L390518,glucose oxidase,"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM−1 cm−2), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,noun phrase
R194,Engineering,R138191,Ultrafast Dynamic Piezoresistive Response of Graphene-Based Cellular Elastomers,S548181,R138193,keywords,L385482,graphene elastomers ,"Ultralight graphene-based cellular elastomers are found to exhibit nearly frequency-independent piezoresistive behaviors. Surpassing the mechanoreceptors in the human skin, these graphene elastomers can provide an instantaneous and high-fidelity electrical response to dynamic pressures ranging from quasi-static up to 2000 Hz, and are capable of detecting ultralow pressures as small as 0.082 Pa.",TRUE,noun phrase
R194,Engineering,R148176,Sensitivity-enhanced fiber interferometric high temperature sensor based on Vernier effect,S594562,R148178,keywords,L413327,High temperature sensor,"A novel sensitivity-enhanced intrinsic fiber Fabry-Perot interferometer (IFFPI) high temperature sensor based on a hollow- core photonic crystal fiber (HC-PCF) and modified Vernier effect is proposed and experimentally demonstrated. The all fiber IFFPIs are easily constructed by splicing one end of the HC-PCF to a leading single mode fiber (SMF) and applying an arc at the other end of the HC-PCF to form a pure silica tip. The modified Vernier effect is formed by three beams of lights reflected from the SMF-PCF splicing joint, and the two air/glass interfaces on the ends of the collapsed HC-PCF tip, respectively. Vernier effect was first applied to high temperature sensing up to 1200°C, in this work, and the experimental results exhibit good stability and repeatability. The temperature sensitivity, measured from the spectrum envelope, is 14 to 57 times higher than that of other configurations using similar HC-PCFs without the Vernier effect. The proposed sensor has the advantages of high sensitivity, good stability, compactness, ease of fabrication, and has potential application in practical high-temperature measurements.",TRUE,noun phrase
R194,Engineering,R141127,RF MEMS Switches With Enhanced Power-Handling Capabilities,S564073,R141129,keywords,L395841,High-power applications,"This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.",TRUE,noun phrase
R194,Engineering,R145512,"Highly Stable, Solution‐Processed Ga‐Doped IZTO Thin Film Transistor by Ar/O2 Plasma Treatment",S582674,R145515,Material,L406957,Indium–zinc–tin oxide (IZTO) thin film,"The effects of gallium doping into indium–zinc–tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a‐IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a‐IZTO TFT results in a saturation mobility (µsat) of 11.80 cm2 V−1 s−1, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec−1, and on/off current ratio (Ion/Ioff) of 1.21 × 10⁷. Additionally, the performance of 10% Ga‐doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as µsat of 30.60 cm2 V−1 s−1, Vth of 0.12 V, SS of 92 mV dec−1, and Ion/Ioff ratio of 7.90 × 10⁷. The bias‐stability of 10% Ga‐doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and OH concentrations.",TRUE,noun phrase
R194,Engineering,R139605,Device modeling of perovskite solar cells based on structural similarity with thin film inorganic semiconductor solar cells,S557086,R139607,keywords,L391575,Inorganic Semiconductor,"Device modeling of CH3NH3PbI3−xClx perovskite-based solar cells was performed. The perovskite solar cells employ a similar structure with inorganic semiconductor solar cells, such as Cu(In,Ga)Se2, and the exciton in the perovskite is Wannier-type. We, therefore, applied one-dimensional device simulator widely used in the Cu(In,Ga)Se2 solar cells. A high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and also other solar cell parameters well consistent with real devices were obtained. In addition, the effect of carrier diffusion length of the absorber and interface defect densities at front and back sides and the optimum thickness of the absorber were analyzed. The results revealed that the diffusion length experimentally reported is long enough for high efficiency, and the defect density at the front interface is critical for high efficiency. Also, the optimum absorber thickness well consistent with the thickness range of real devices was derived.",TRUE,noun phrase
R194,Engineering,R148179,Sensitivity-Enhanced Fiber Temperature Sensor Based on Vernier Effect and Dual In-Line Mach–Zehnder Interferometers,S594117,R148183,keywords,L413092,Mach-Zehnder Interferometer,"A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches 528.5 pm/°C in the range of 0 °C–100 °C, which is 17.5 times as high as that without enhancement by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical prediction (18.3 times). The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.",TRUE,noun phrase
R194,Engineering,R138217,"Highly Sensitive, Transparent, and Durable Pressure Sensors Based on Sea-Urchin Shaped Metal Nanoparticles",S548781,R138219,keywords,L386017,metal nanoparticles,"Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. The pressure sensors exhibit outstanding sensitivity (2.46 kPa⁻¹), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.",TRUE,noun phrase
R194,Engineering,R138217,"Highly Sensitive, Transparent, and Durable Pressure Sensors Based on Sea-Urchin Shaped Metal Nanoparticles",S547895,R138219,Piezoresistive Material,L385373,Metal Nanoparticles,"Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. The pressure sensors exhibit outstanding sensitivity (2.46 kPa⁻¹), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.",TRUE,noun phrase
R194,Engineering,R144869,Arrays of Solar-Blind Ultraviolet Photodetector Based on β-Ga2O3 Epitaxial Thin Films,S580110,R144872,keywords,L405571,Metal-semiconductor-metal structure,"Recently, the β-Ga2O3-based solar-blind ultraviolet photodetector has attracted intensive attention due to its wide application prospects. Photodetector arrays can act as an imaging detector and also improve the detecting sensitivity by series or parallel of detector cells. In this letter, the highly integrated metal-semiconductor-metal structured photodetector arrays of 32 × 32, 16 × 16, 8 × 8, and 4 × 4 have been designed and fabricated for the first time. Herein, we present a 4-1 photodetector cell chosen from a 4 × 4 photodetector array as an example to demonstrate the performance. The photo responsivity is 8.926 × 10⁻¹ A/W @ 250 nm at a 10-V bias voltage, corresponding to a quantum efficiency of 444%. All of the photodetector cells exhibit the solar-blind ultraviolet photoelectric characteristic and the consistent photo responsivity with a standard deviation of 12.1%. The outcome of the study offers an efficient route toward the development of high-performance and low-cost DUV photodetector arrays.",TRUE,noun phrase
R194,Engineering,R141119,High-isolation CPW MEMS shunt switches-part 1: modeling ,S564021,R141121,keywords,L395797,Microelectromechanical systems,"This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-μm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.",TRUE,noun phrase
R194,Engineering,R141119,High-isolation CPW MEMS shunt switches-part 1: modeling ,S564003,R141121,Study Area ,L395787,Microelectromechanical Systems (MEMS),"This paper, the first of two parts, presents an electromagnetic model for membrane microelectromechanical systems (MEMS) shunt switches for microwave/millimeter-wave applications. The up-state capacitance can be accurately modeled using three-dimensional static solvers, and full-wave solvers are used to predict the current distribution and inductance of the switch. The loss in the up-state position is equivalent to the coplanar waveguide line loss and is 0.01-0.02 dB at 10-30 GHz for a 2-μm-thick Au MEMS shunt switch. It is seen that the capacitance, inductance, and series resistance can be accurately extracted from DC-40 GHz S-parameter measurements. It is also shown that dramatic increase in the down-state isolation (20+ dB) can be achieved with the choice of the correct LC series resonant frequency of the switch. In part 2 of this paper, the equivalent capacitor-inductor-resistor model is used in the design of tuned high isolation switches at 10 and 30 GHz.",TRUE,noun phrase
R194,Engineering,R141127,RF MEMS Switches With Enhanced Power-Handling Capabilities,S564070,R141129,Study Area ,L395838,Microelectromechanical Systems (MEMS),"This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.",TRUE,noun phrase
R194,Engineering,R141139,Fabrication of low pull-in voltage RF MEMS switches on glass substrate in recessed CPW configuration for V-band application,S564171,R141141,Study Area ,L395917,Microelectromechanical Systems (MEMS),"A new technique for the fabrication of radio frequency (RF) microelectromechanical systems (MEMS) shunt switches in recessed coplanar waveguide (CPW) configuration on glass substrates is presented. Membranes with low spring constant are used for reducing the pull-in voltage. A layer of silicon dioxide is deposited on glass wafer and is used to form the recess, which partially defines the gap between the membrane and signal line. Positive photoresist S1813 is used as a sacrificial layer and gold as the membrane material. The membranes are released with the help of Piranha solution and finally rinsed in low surface tension liquid to avoid stiction during release. Switches with 500 µm long two-meander membranes show very high isolation of greater than 40 dB at their resonant frequency of 61 GHz and pull-in voltage less than 15 V, while switches with 700 µm long six-strip membranes show isolation greater than 30 dB at the frequency of 65 GHz and pull-in voltage less than 10 V. Both types of switches show insertion loss less than 0.65 dB up to 65 GHz.",TRUE,noun phrase
R194,Engineering,R141147,RF MEMS Shunt Capacitive Switches Using AlN Compared to Si3N4 Dielectric,S564196,R141149,Study Area ,L395936,Microelectromechanical Systems (MEMS),"RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 μm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<10⁻⁸ A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.",TRUE,noun phrase
R194,Engineering,R141130,Effects of surface roughness on electromagnetic characteristics of capacitive switches,S564095,R141132,Study Area ,L395859,Microelectromechanical Systems (MEMS) ,"This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.",TRUE,noun phrase
R194,Engineering,R151008,Quality Inspection of Textile Artificial Textures Using a Neuro-Symbolic Hybrid System Methodology ,S605308,R151010,has methodology,R151011,Neuro-symbolic Hybrid System ,"In the industrial sector there are many processes where the visual inspection is essential, the automation of those processes becomes a necessity to guarantee the quality of several objects. In this paper we propose a methodology for textile quality inspection based on the texture cue of an image. To solve this, we use a Neuro-Symbolic Hybrid System (NSHS) that allows us to combine an artificial neural network and the symbolic representation of the expert knowledge. The artificial neural network uses the CasCor learning algorithm and we use production rules to represent the symbolic knowledge. The features used for inspection have the advantage of being tolerant to rotation and scale changes. We compare the results with those obtained from an automatic computer vision task, and we conclude that results obtained using the proposed methodology are better.",TRUE,noun phrase
R194,Engineering,R141884,Enhancement of c-Axis Oriented Aluminum Nitride Films via Low Temperature DC Sputtering,S569235,R141886,Device type,L399505,Piezoelectric accelerometers ,"In this study, we successfully deposit c-axis oriented aluminum nitride (AlN) piezoelectric films at low temperature (100 °C) via the DC sputtering method with tilt gun. The X-ray diffraction (XRD) observations prove that the deposited films have a c-axis preferred orientation. Effective d33 value of the proposed films is 5.92 pm/V, which is better than most of the reported data using DC sputtering or other processing methods. It is found that the gun placed at 25° helped the films to rearrange at low temperature and c-axis orientation AlN films were successfully grown at 100 °C. This temperature is much lower than the reported growing temperature. It means the piezoelectric films can be deposited at flexible substrate and the photoresist can be stable at this temperature. The cantilever beam type microelectromechanical systems (MEMS) piezoelectric accelerometers are then fabricated based on the proposed AlN films with a lift-off process. The results show that the responsivity of the proposed devices is 8.12 mV/g, and the resonance frequency is 460 Hz, which indicates they can be used for machine tools.",TRUE,noun phrase
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555147,R139286,keywords,L390521,platinum nanoparticles,"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM−1 cm−2), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,noun phrase
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555146,R139286,keywords,L390520,poly(Azure A),"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM−1 cm−2), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,noun phrase
R194,Engineering,R139618,Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes,S557192,R139622,keywords,L391662,Power conversion efficiency,"Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of ∼11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.",TRUE,noun phrase
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557301,R139633,keywords,L391755,Power Conversion Efficiency,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,noun phrase
R194,Engineering,R135556,Flexible capacitive pressure sensor with sensitivity and linear measuring range enhanced based on porous composite of carbon conductive paste and polydimethylsiloxane,S536348,R135559,keywords,R135586,Pressure sensor,"In recent years, the development of electronic skin and smart wearable body sensors has put forward high requirements for flexible pressure sensors with high sensitivity and large linear measuring range. However it turns out to be difficult to increase both of them simultaneously. In this paper, a flexible capacitive pressure sensor based on porous carbon conductive paste-PDMS composite is reported, the sensitivity and the linear measuring range of which were developed using multiple methods including adjusting the stiffness of the dielectric layer material, fabricating micro-structure and increasing dielectric permittivity of dielectric layer. The capacitive pressure sensor reported here has a relatively high sensitivity of 1.1 kPa-1 and a large linear measuring range of 10 kPa, making the product of the sensitivity and linear measuring range is 11, which is higher than that of the most reported capacitive pressure sensor to our best knowledge. The sensor has a detection of limit of 4 Pa, response time of 60 ms and great stability. Some potential applications of the sensor were demonstrated such as arterial pulse wave measuring and breathe measuring, which shows a promising candidate for wearable biomedical devices. In addition, a pressure sensor array based on the material was also fabricated and it could identify objects in the shape of different letters clearly, which shows a promising application in the future electronic skins.",TRUE,noun phrase
R194,Engineering,R138217,"Highly Sensitive, Transparent, and Durable Pressure Sensors Based on Sea-Urchin Shaped Metal Nanoparticles",S548779,R138219,keywords,L386015,pressure sensors,"Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. The pressure sensors exhibit outstanding sensitivity (2.46 kPa-1 ), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.",TRUE,noun phrase
R194,Engineering,R139623,Hybrid Perovskite Films by a New Variant of Pulsed Excimer Laser Deposition: A Room-Temperature Dry Process,S557220,R139625,keywords,L391686,Pulsed Laser Deposition,"A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3–xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.",TRUE,noun phrase
R194,Engineering,R155297,Ultrasensitive refractive index sensor based on enhanced Vernier effect through cascaded fiber core-offset pairs,S621792,R155300,keywords,L428067,refractive index,"An ultrasensitive refractive index (RI) sensor based on enhanced Vernier effect is proposed, which consists of two cascaded fiber core-offset pairs. One pair functions as a Mach-Zehnder interferometer (MZI), the other with larger core offset as a low-finesse Fabry-Perot interferometer (FPI). In traditional Vernier-effect based sensors, an interferometer insensitive to environment change is used as sensing reference. Here in the proposed sensor, interference fringes of the MZI and the FPI shift to opposite directions as ambient RI varies, and to the same direction as surrounding temperature changes. Thus, the envelope of superimposed fringe manifests enhanced Vernier effect for RI sensing while reduced Vernier effect for temperature change. As a result, an ultra-high RI sensitivity of -87261.06 nm/RIU is obtained near the RI of 1.33 with good linearity, while the temperature sensitivity is as low as 204.7 pm/ °C. The proposed structure is robust and of low cost. Furthermore, the proposed scheme of enhanced Vernier effect provides a new perspective and idea in other sensing field. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement",TRUE,noun phrase
R194,Engineering,R141136,A zipper RF MEMS tunable capacitor with interdigitated RF and actuation electrodes,S564142,R141138,keywords,L395898,RF MEMS,"This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70?90 fF, Cmax = 240?270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.",TRUE,noun phrase
R194,Engineering,R141147,RF MEMS Shunt Capacitive Switches Using AlN Compared to Si3N4 Dielectric,S564204,R141149,keywords,L395944,RF MEMS,"RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 μm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.",TRUE,noun phrase
R194,Engineering,R141127,RF MEMS Switches With Enhanced Power-Handling Capabilities,S564071,R141129,keywords,L395839,RF MEMS switches,"This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.",TRUE,noun phrase
R194,Engineering,R148179,Sensitivity-Enhanced Fiber Temperature Sensor Based on Vernier Effect and Dual In-Line Mach–Zehnder Interferometers,S594118,R148183,keywords,L413093,sensitivity enhancement,"A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/°C in the range of 0 °C–100 °C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical predication (18.3 times).The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.",TRUE,noun phrase
R194,Engineering,R141147,RF MEMS Shunt Capacitive Switches Using AlN Compared to Si3N4 Dielectric,S564207,R141149,keywords,L395947,silicon nitride,"RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 μm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.",TRUE,noun phrase
R194,Engineering,R139605,Device modeling of perovskite solar cells based on structural similarity with thin film inorganic semiconductor solar cells,S557088,R139607,keywords,L391577,Solar Cells,"Device modeling of CH3NH3PbI3−xCl3 perovskite-based solar cells was performed. The perovskite solar cells employ a similar structure with inorganic semiconductor solar cells, such as Cu(In,Ga)Se2, and the exciton in the perovskite is Wannier-type. We, therefore, applied one-dimensional device simulator widely used in the Cu(In,Ga)Se2 solar cells. A high open-circuit voltage of 1.0 V reported experimentally was successfully reproduced in the simulation, and also other solar cell parameters well consistent with real devices were obtained. In addition, the effect of carrier diffusion length of the absorber and interface defect densities at front and back sides and the optimum thickness of the absorber were analyzed. The results revealed that the diffusion length experimentally reported is long enough for high efficiency, and the defect density at the front interface is critical for high efficiency. Also, the optimum absorber thickness well consistent with the thickness range of real devices was derived.",TRUE,noun phrase
R194,Engineering,R139608,Lead-Free Halide Perovskite Solar Cells with High Photocurrents Realized Through Vacancy Modulation,S557117,R139610,keywords,L391602,Solar Cells,Lead free perovskite solar cells based on a CsSnI3 light absorber with a spectral response from 950 nm is demonstrated. The high photocurrents noted in the system are a consequence of SnF2 addition which reduces defect concentrations and hence the background charge carrier density.,TRUE,noun phrase
R194,Engineering,R139614,Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine,S557159,R139617,keywords,L391636,Solar Cells,"Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (≥60%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).",TRUE,noun phrase
R194,Engineering,R139623,Hybrid Perovskite Films by a New Variant of Pulsed Excimer Laser Deposition: A Room-Temperature Dry Process,S557217,R139625,keywords,L391683,Solar Cells,"A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3–xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.",TRUE,noun phrase
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557299,R139633,keywords,L391753,Solar Cells,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,noun phrase
R194,Engineering,R139634,Highly Reproducible Sn-Based Hybrid Perovskite Solar Cells with 9% Efficiency,S557325,R139637,keywords,L391774,Solar Cells,"The low power conversion efficiency (PCE) of tin‐based hybrid perovskite solar cells (HPSCs) is mainly attributed to the high background carrier density due to a high density of intrinsic defects such as Sn vacancies and oxidized species (Sn4+) that characterize Sn‐based HPSCs. Herein, this study reports on the successful reduction of the background carrier density by more than one order of magnitude by depositing near‐single‐crystalline formamidinium tin iodide (FASnI3) films with the orthorhombic a‐axis in the out‐of‐plane direction. Using these highly crystalline films, obtained by mixing a very small amount (0.08 m) of layered (2D) Sn perovskite with 0.92 m (3D) FASnI3, for the first time a PCE as high as 9.0% in a planar p–i–n device structure is achieved. These devices display negligible hysteresis and light soaking, as they benefit from very low trap‐assisted recombination, low shunt losses, and more efficient charge collection. This represents a 50% improvement in PCE compared to the best reference cell based on a pure FASnI3 film using SnF2 as a reducing agent. Moreover, the 2D/3D‐based HPSCs show considerable improved stability due to the enhanced robustness of the perovskite film compared to the reference cell.",TRUE,noun phrase
R194,Engineering,R139638,Efficient perovskite solar cells by metal ion doping,S557362,R139641,keywords,L391806,Solar Cells,"Realizing the theoretical limiting power conversion efficiency (PCE) in perovskite solar cells requires a better understanding and control over the fundamental loss processes occurring in the bulk of the perovskite layer and at the internal semiconductor interfaces in devices. One of the main challenges is to eliminate the presence of charge recombination centres throughout the film which have been observed to be most densely located at regions near the grain boundaries. Here, we introduce aluminium acetylacetonate to the perovskite precursor solution, which improves the crystal quality by reducing the microstrain in the polycrystalline film. At the same time, we achieve a reduction in the non-radiative recombination rate, a remarkable improvement in the photoluminescence quantum efficiency (PLQE) and a reduction in the electronic disorder deduced from an Urbach energy of only 12.6 meV in complete devices. As a result, we demonstrate a PCE of 19.1% with negligible hysteresis in planar heterojunction solar cells comprising all organic p and n-type charge collection layers. Our work shows that an additional level of control of perovskite thin film quality is possible via impurity cation doping, and further demonstrates the continuing importance of improving the electronic quality of the perovskite absorber and the nature of the heterojunctions to further improve the solar cell performance.",TRUE,noun phrase
R194,Engineering,R141130,Effects of surface roughness on electromagnetic characteristics of capacitive switches,S564097,R141132,keywords,L395861,Surface roughness,"This paper studies the effect of surface roughness on up-state and down-state capacitances of microelectromechanical systems (MEMS) capacitive switches. When the root-mean-square (RMS) roughness is 10 nm, the up-state capacitance is approximately 9% higher than the theoretical value. When the metal bridge is driven down, the normalized contact area between the metal bridge and the surface of the dielectric layer is less than 1% if the RMS roughness is larger than 2 nm. Therefore, the down-state capacitance is actually determined by the non-contact part of the metal bridge. The normalized isolation is only 62% for RMS roughness of 10 nm when the hold-down voltage is 30 V. The analysis also shows that the down-state capacitance and the isolation increase with the hold-down voltage. The normalized isolation increases from 58% to 65% when the hold-down voltage increases from 10 V to 60 V for RMS roughness of 10 nm.",TRUE,noun phrase
R194,Engineering,R141127,RF MEMS Switches With Enhanced Power-Handling Capabilities,S564075,R141129,keywords,L395843,Switch stiction,"This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.",TRUE,noun phrase
R194,Engineering,R144792,Thermal annealing effect on β-Ga2O3 thin film solar blind photodetector heteroepitaxially grown on sapphire substrate,S580070,R144794,keywords,L405533,thin film,"This paper presents the effect of thermal annealing on β‐Ga2O3 thin film solar‐blind (SB) photodetector (PD) synthesized on c‐plane sapphire substrates by a low pressure chemical vapor deposition (LPCVD). The thin films were synthesized using high purity gallium (Ga) and oxygen (O2) as source precursors. The annealing was performed ex situ the under the oxygen atmosphere, which helped to reduce oxygen or oxygen‐related vacancies in the thin film. Metal/semiconductor/metal (MSM) type photodetectors were fabricated using both the as‐grown and annealed films. The PDs fabricated on the annealed films had lower dark current, higher photoresponse and improved rejection ratio (R250/R370 and R250/R405) compared to the ones fabricated on the as‐grown films. These improved PD performances are due to the significant reduction of the photo‐generated carriers trapped by oxygen or oxygen‐related vacancies.",TRUE,noun phrase
R194,Engineering,R145527,Performance Investigation of an n-Type Tin-Oxide Thin Film Transistor by Channel Plasma Processing,S582756,R145529,keywords,L407012,Thin film transistor (TFT),"In this paper, we investigated the performance of an n-type tin-oxide (SnOx) thin film transistor (TFT) by experiments and simulation. The fabricated SnOx TFT device by oxygen plasma treatment on the channel exhibited n-type conduction with an on/off current ratio of 4.4x104, a high field-effect mobility of 18.5 cm2/V.s and a threshold swing of 405 mV/decade, which could be attributed to the excess reacted oxygen incorporated to the channel to form the oxygen-rich n-type SnOx. Furthermore, a TCAD simulation based on the n-type SnOx TFT device was performed by fitting the experimental data to investigate the effect of the channel traps on the device performance, indicating that performance enhancements were further achieved by suppressing the density of channel traps. In addition, the n-type SnOx TFT device exhibited high stability upon illumination with visible light. The results show that the n-type SnOx TFT device by channel plasma processing has considerable potential for next-generation high-performance display application.",TRUE,noun phrase
R194,Engineering,R141136,A zipper RF MEMS tunable capacitor with interdigitated RF and actuation electrodes,S564143,R141138,keywords,L395899,Tunable capacitor,"This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70?90 fF, Cmax = 240?270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.",TRUE,noun phrase
R194,Engineering,R148166,Ultrasensitive Temperature Sensor With Cascaded Fiber Optic Fabry–Perot Interferometers Based on Vernier Effect,S594067,R148171,keywords,L413057,Vernier effect,"We have proposed and experimentally demonstrated an ultrasensitive fiber-optic temperature sensor based on two cascaded Fabry–Perot interferometers (FPIs). Vernier effect that significantly improves the sensitivity is generated due to the slight cavity length difference of the sensing and reference FPI. The sensing FPI is composed of a cleaved fiber end-face and UV-cured adhesive while the reference FPI is fabricated by splicing SMF with hollow core fiber. Temperature sensitivity of the sensing FPI is much higher than the reference FPI, which means that the reference FPI need not to be thermally isolated. By curve fitting method, three different temperature sensitivities of 33.07, −58.60, and 67.35 nm/°C have been experimentally demonstrated with different cavity lengths ratio of the sensing and reference FPI, which can be flexibly adjusted to meet different application demands. The proposed probe-type ultrahigh sensitivity temperature sensor is compact and cost effective, which can be applied to special fields, such as biochemical engineering, medical treatment, and nuclear test.",TRUE,noun phrase
R194,Engineering,R148176,Sensitivity-enhanced fiber interferometric high temperature sensor based on Vernier effect,S594563,R148178,keywords,L413328,Vernier effect,"A novel sensitivity-enhanced intrinsic fiber Fabry-Perot interferometer (IFFPI) high temperature sensor based on a hollow- core photonic crystal fiber (HC-PCF) and modified Vernier effect is proposed and experimentally demonstrated. The all fiber IFFPIs are easily constructed by splicing one end of the HC-PCF to a leading single mode fiber (SMF) and applying an arc at the other end of the HC-PCF to form a pure silica tip. The modified Vernier effect is formed by three beams of lights reflected from the SMF-PCF splicing joint, and the two air/glass interfaces on the ends of the collapsed HC-PCF tip, respectively. Vernier effect was first applied to high temperature sensing up to 1200°C, in this work, and the experimental results exhibit good stability and repeatability. The temperature sensitivity, measured from the spectrum envelope, is 14 to 57 times higher than that of other configurations using similar HC-PCFs without the Vernier effect. The proposed sensor has the advantages of high sensitivity, good stability, compactness, ease of fabrication, and has potential application in practical high-temperature measurements.",TRUE,noun phrase
R194,Engineering,R148179,Sensitivity-Enhanced Fiber Temperature Sensor Based on Vernier Effect and Dual In-Line Mach–Zehnder Interferometers,S594119,R148183,keywords,L413094,Vernier effect,"A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/°C in the range of 0 °C–100 °C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical predication (18.3 times).The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.",TRUE,noun phrase
R194,Engineering,R155294,Vernier effect of two cascaded in-fiber Mach–Zehnder interferometers based on a spherical-shaped structure,S621789,R155296,keywords,L428064,Vernier effect,"The Vernier effect of two cascaded in-fiber Mach-Zehnder interferometers (MZIs) based on a spherical-shaped structure has been investigated. The envelope based on the Vernier effect is actually formed by a frequency component of the superimposed spectrum, and the frequency value is determined by the subtraction between the optical path differences of two cascaded MZIs. A method based on band-pass filtering is put forward to extract the envelope efficiently; strain and curvature measurements are carried out to verify the validity of the method. The results show that the strain and curvature sensitivities are enhanced to -8.47 pm/με and -33.70 nm/m-1 with magnification factors of 5.4 and -5.4, respectively. The detection limit of the sensors with the Vernier effect is also discussed.",TRUE,noun phrase
R194,Engineering,R155297,Ultrasensitive refractive index sensor based on enhanced Vernier effect through cascaded fiber core-offset pairs,S621794,R155300,keywords,L428069,Vernier effect,"An ultrasensitive refractive index (RI) sensor based on enhanced Vernier effect is proposed, which consists of two cascaded fiber core-offset pairs. One pair functions as a Mach-Zehnder interferometer (MZI), the other with larger core offset as a low-finesse Fabry-Perot interferometer (FPI). In traditional Vernier-effect based sensors, an interferometer insensitive to environment change is used as sensing reference. Here in the proposed sensor, interference fringes of the MZI and the FPI shift to opposite directions as ambient RI varies, and to the same direction as surrounding temperature changes. Thus, the envelope of superimposed fringe manifests enhanced Vernier effect for RI sensing while reduced Vernier effect for temperature change. As a result, an ultra-high RI sensitivity of -87261.06 nm/RIU is obtained near the RI of 1.33 with good linearity, while the temperature sensitivity is as low as 204.7 pm/ °C. The proposed structure is robust and of low cost. Furthermore, the proposed scheme of enhanced Vernier effect provides a new perspective and idea in other sensing field. © 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement",TRUE,noun phrase
R194,Engineering,R155301,High-Sensitivity Fiber-Optic Strain Sensor Based on the Vernier Effect and Separated Fabry–Perot Interferometers,S621796,R155306,keywords,L428071,Vernier effect,"A high-sensitivity fiber-optic strain sensor, based on the Vernier effect and separated Fabry–Perot interferometers (FPIs), is proposed and experimentally demonstrated. One air-cavity FPI is used as a sensing FPI (SFPI) and another is used as a matched FPI (MFPI) to generate the Vernier effect. The two FPIs are connected by a fiber link but separated by a long section of single-mode fiber (SMF). The SFPI is fabricated by splicing a section of microfiber between two SMFs with a large lateral offset, and the MFPI is formed by a section of hollow-core fiber sandwiched between two SMFs. By using the Vernier effect, the strain sensitivity of the proposed sensor reaches $\text{1.15 nm/}\mu \varepsilon $, which is the highest strain sensitivity of an FPI-based sensor reported so far. Owing to the separated structure of the proposed sensor, the MFPI can be isolated from the SFPI and the detection environment. Therefore, the MFPI is not affected by external physical quantities (such as strain and temperature) and thus has a very low temperature cross-sensitivity. The experimental results show that a low-temperature cross-sensitivity of $\text{0.056 } \mu \varepsilon /^ {\circ }{\text{C}}$ can be obtained with the proposed sensor. With its advantages of simple fabrication, high strain sensitivity, and low-temperature cross-sensitivity, the proposed sensor has great application prospects in several fields.",TRUE,noun phrase
R32,Environmental Health,R76029,Towards Consistent Data Representation in the IoT Healthcare Landscape,S348966,R76031,Has evaluation,L249389,automatic reasoning by inference engines,"Nowadays, the enormous volume of health and fitness data gathered from IoT wearable devices offers favourable opportunities to the research community. For instance, it can be exploited using sophisticated data analysis techniques, such as automatic reasoning, to find patterns and extract information and new knowledge in order to enhance decision-making and deliver better healthcare. However, due to the high heterogeneity of data representation formats, the IoT healthcare landscape is characterised by an ubiquitous presence of data silos which prevents users and clinicians from obtaining a consistent representation of the whole knowledge. Semantic web technologies, such as ontologies and inference rules, have been shown as a promising way for the integration and exploitation of data from heterogeneous sources. In this paper, we present a semantic data model useful to: (1) consistently represent health and fitness data from heterogeneous IoT sources; (2) integrate and exchange them; and (3) enable automatic reasoning by inference engines.",TRUE,noun phrase
R32,Environmental Health,R76035,An ontology-based healthcare monitoring system in the Internet of Things,S348925,R76037,Has evaluation,L249376,by semantic querying,"Continuous health monitoring is a hopeful solution that can efficiently provide health-related services to elderly people suffering from chronic diseases. The emergence of the Internet of Things (IoT) technologies have led to their adoption in the development of new healthcare systems for efficient healthcare monitoring, diagnosis and treatment. This paper presents a healthcare-IoT based system where an ontology is proposed to provide semantic interoperability among heterogeneous devices and users in healthcare domain. Our work consists on integrating existing ontologies related to health, IoT domain and time, instantiating classes, and establishing reasoning rules. The model created has been validated by semantic querying. The results show the feasibility and efficiency of the proposed ontology and its capability to grow into a more understanding and specialized ontology for health monitoring and treatment.",TRUE,noun phrase
R32,Environmental Health,R76020,HealthIoT Ontology for Data Semantic Representation and Interpretation Obtained from Medical Connected Objects,S348107,R76022,Has result,L249053,make an appropriate decision,"Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).",TRUE,noun phrase
R32,Environmental Health,R76020,HealthIoT Ontology for Data Semantic Representation and Interpretation Obtained from Medical Connected Objects,S348106,R76022,Has method,R76040,Several SWRL,"Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).",TRUE,noun phrase
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713756,R186693,Has method,R186696,Denoising Method,"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,noun phrase
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713811,R186693,Has method,R186699,discrete wavelet transform (DWT),"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,noun phrase
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713808,R186693,Has method,R186700,Fourier transform (FT),"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,noun phrase
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713809,R186693,Has method,R186697,fractional-order derivative (FOD),"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,noun phrase
R145,Environmental Sciences,R186593,"Extraction of built-up area using multi-sensor data—A case study based on Google earth engine in Zhejiang Province, China",S713592,R186597,Softwares,R186604,Google Earth Engine,"ABSTRACT Accurate and up-to-date built-up area mapping is of great importance to the science community, decision-makers, and society. Therefore, satellite-based, built-up area (BUA) extraction at medium resolution with supervised classification has been widely carried out. However, the spectral confusion between BUA and bare land (BL) is the primary hindering factor for accurate BUA mapping over large regions. Here we propose a new methodology for the efficient BUA extraction using multi-sensor data under Google Earth Engine cloud computing platform. The proposed method mainly employs intra-annual satellite imagery for water and vegetation masks, and a random-forest machine learning classifier combined with auxiliary data to discriminate between BUA and BL. First, a vegetation mask and water mask are generated using NDVI (normalized differenced vegetation index) max in vegetation growth periods and the annual water-occurrence frequency. Second, to accurately extract BUA from unmasked pixels, consisting of BUA and BL, random-forest-based classification is conducted using multi-sensor features, including temperature, night-time light, backscattering, topography, optical spectra, and NDVI time-series metrics. This approach is applied in Zhejiang Province, China, and an overall accuracy of 92.5% is obtained, which is 3.4% higher than classification with spectral data only. For large-scale BUA mapping, it is feasible to enhance the performance of BUA mapping with multi-temporal and multi-sensor data, which takes full advantage of datasets available in Google Earth Engine.",TRUE,noun phrase
R145,Environmental Sciences,R9221,"The ACCESS coupled model: description, control climate and evaluation",S14694,R9228,Earth System Model,R9274,Land Surface,"OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,noun phrase
R145,Environmental Sciences,R23260,The NCEP Climate Forecast System Reanalysis,S72099,R23261,Earth System Model,R23267,Land Surface,"The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere–ocean–land surface–sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25° at the equator, extending to a global 0.5° beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...",TRUE,noun phrase
R145,Environmental Sciences,R23273,"The ACCESS coupled model: description, control climate and evaluation",S72173,R23274,Earth System Model,R23282,Land Surface,"OASIS3.2–5 coupling framework. The primary goal of the ACCESS-CM development is to provide the Australian climate community with a new generation fully coupled climate model for climate research, and to participate in phase five of the Coupled Model Inter-comparison Project (CMIP5). This paper describes the ACCESS-CM framework and components, and presents the control climates from two versions of the ACCESS-CM, ACCESS1.0 and ACCESS1.3, together with some fields from the 20th century historical experiments, as part of model evaluation. While sharing the same ocean sea-ice model (except different setups for a few parameters), ACCESS1.0 and ACCESS1.3 differ from each other in their atmospheric and land surface components: the former is configured with the UK Met Office HadGEM2 (r1.1) atmospheric physics and the Met Office Surface Exchange Scheme land surface model version 2, and the latter with atmospheric physics similar to the UK Met Office Global Atmosphere 1.0 including modifications performed at CAWCR and the CSIRO Community Atmosphere Biosphere Land Exchange land surface model version 1.8. The global average annual mean surface air temperature across the 500-year preindustrial control integrations show a warming drift of 0.35 °C in ACCESS1.0 and 0.04 °C in ACCESS1.3. The overall skills of ACCESS-CM in simulating a set of key climatic fields both globally and over Australia significantly surpass those from the preceding CSIRO Mk3.5 model delivered to the previous coupled model inter-comparison. However, ACCESS-CM, like other CMIP5 models, has deficiencies in various aspects, and these are also discussed.",TRUE,noun phrase
R145,Environmental Sciences,R23260,The NCEP Climate Forecast System Reanalysis,S72118,R23261,Earth System Model,R23272,Sea Ice,"The NCEP Climate Forecast System Reanalysis (CFSR) was completed for the 31-yr period from 1979 to 2009, in January 2010. The CFSR was designed and executed as a global, high-resolution coupled atmosphere–ocean–land surface–sea ice system to provide the best estimate of the state of these coupled domains over this period. The current CFSR will be extended as an operational, real-time product into the future. New features of the CFSR include 1) coupling of the atmosphere and ocean during the generation of the 6-h guess field, 2) an interactive sea ice model, and 3) assimilation of satellite radiances by the Gridpoint Statistical Interpolation (GSI) scheme over the entire period. The CFSR global atmosphere resolution is ~38 km (T382) with 64 levels extending from the surface to 0.26 hPa. The global ocean's latitudinal spacing is 0.25° at the equator, extending to a global 0.5° beyond the tropics, with 40 levels to a depth of 4737 m. The global land surface model has four soil levels and the global sea ice m...",TRUE,noun phrase
R145,Environmental Sciences,R23287,A Modified Dynamic Framework for the Atmospheric Spectral Model and Its Application,S72230,R23288,Earth System Model,R23299,Sea Ice,"This paper describes a dynamic framework for an atmospheric general circulation spectral model in which a reference stratified atmospheric temperature and a reference surface pressure are introduced into the governing equations so as to improve the calculation of the pressure gradient force and gradients of surface pressure and temperature. The vertical profile of the reference atmospheric temperature approximately corresponds to that of the U.S. midlatitude standard atmosphere within the troposphere and stratosphere, and the reference surface pressure is a function of surface terrain geopotential and is close to the observed mean surface pressure. Prognostic variables for the temperature and surface pressure are replaced by their perturbations from the prescribed references. The numerical algorithms of the explicit time difference scheme for vorticity and the semi-implicit time difference scheme for divergence, perturbation temperature, and perturbation surface pressure equation are given in detail. The modified numerical framework is implemented in the Community Atmosphere Model version 3 (CAM3) developed at the National Center for Atmospheric Research (NCAR) to test its validation and impact on simulated climate. Both the original and the modified models are run with the same spectral resolution (T42), the same physical parameterizations, and the same boundary conditions corresponding to the observed monthly mean sea surface temperature and sea ice concentration from 1971 to 2000. This permits one to evaluate the performance of the new dynamic framework compared to the commonly used one. Results show that there is a general improvement for the simulated climate at regional and global scales, especially for temperature and wind.",TRUE,noun phrase
R145,Environmental Sciences,R186691,Soil Organic Matter Prediction Model with Satellite Hyperspectral Image Based on Optimized Denoising Method,S713810,R186693,Has method,R186698,singular value decomposition (SVD),"In order to improve the signal-to-noise ratio of the hyperspectral sensors and exploit the potential of satellite hyperspectral data for predicting soil properties, we took MingShui County as the study area, which the study area is approximately 1481 km2, and we selected Gaofen-5 (GF-5) satellite hyperspectral image of the study area to explore an applicable and accurate denoising method that can effectively improve the prediction accuracy of soil organic matter (SOM) content. First, fractional-order derivative (FOD) processing is performed on the original reflectance (OR) to evaluate the optimal FOD. Second, singular value decomposition (SVD), Fourier transform (FT) and discrete wavelet transform (DWT) are used to denoise the OR and optimal FOD reflectance. Third, the spectral indexes of the reflectance under different denoising methods are extracted by optimal band combination algorithm, and the input variables of different denoising methods are selected by the recursive feature elimination (RFE) algorithm. Finally, the SOM content is predicted by a random forest prediction model. The results reveal that 0.6-order reflectance describes more useful details in satellite hyperspectral data. Five spectral indexes extracted from the reflectance under different denoising methods have a strong correlation with the SOM content, which is helpful for realizing high-accuracy SOM predictions. All three denoising methods can reduce the noise in hyperspectral data, and the accuracies of the different denoising methods are ranked DWT > FT > SVD, where 0.6-order-DWT has the highest accuracy (R2 = 0.84, RMSE = 3.36 g kg−1, and RPIQ = 1.71). 
This paper is relatively novel, in that GF-5 satellite hyperspectral data based on different denoising methods are used to predict SOM, and the results provide a highly robust and novel method for mapping the spatial distribution of SOM content at the regional scale.",TRUE,noun phrase
R33,Epidemiology,R142068,"Diseases and Health Outcomes Registry Systems in I.R. Iran: Successful Initiative to Improve Public Health Programs, Quality of Care, and Biomedical Research",S570846,R142071,Executive,L400712,Ministry of Health and Medical Education,"Registration systems for diseases and other health outcomes provide important resource for biomedical research, as well as tools for public health surveillance and improvement of quality of care. The Ministry of Health and Medical Education (MOHME) of Iran launched a national program to establish registration systems for different diseases and health outcomes. Based on the national program, we organized several workshops and training programs and disseminated the concepts and knowledge of the registration systems. Following a call for proposals, we received 100 applications and after thorough evaluation and corrections by the principal investigators, we approved and granted about 80 registries for three years. Having strong steering committee, committed executive and scientific group, establishing national and international collaboration, stating clear objectives, applying feasible software, and considering stable financing were key components for a successful registry and were considered in the evaluation processes. We paid particular attention to non-communicable diseases, which constitute an emerging public health problem. We prioritized establishment of regional population-based cancer registries (PBCRs) in 10 provinces in collaboration with the International Agency for Research on Cancer. This initiative was successful and registry programs became popular among researchers and research centers and created several national and international collaborations in different areas to answer important public health and clinical questions. In this paper, we report the details of the program and list of registries that were granted in the first round.",TRUE,noun phrase
R456,Ethics,R140317,Costume in the dance archive: Towards a records-centred ethics of care,S560082,R140320,performance title,R140321,Dance in Trees and Church,"Focusing on the archival records of the production and performance of Dance in Trees and Church by the Swedish independent dance group Rubicon, this article conceptualizes a records-oriented costume ethics. Theorizations of costume as a co-creative agent of performance are brought into the dance archive to highlight the productivity of paying attention to costume in the making of performance history. Addressing recent developments within archival studies, a feminist ethics of care and radical empathy is employed, which is the capability to empathically engage with others, even if it can be difficult, as a means of exploring how a records-centred costume ethics can be conceptualized for the dance archive. The exploration resulted in two ethical stances useful for better attending to costume-bodies in the dance archive: (1) caring for costume-body relations in the dance archive means that a conventional, so-called static understanding of records as neutral carriers of facts is replaced by a more inclusive, expanding and infinite process. By moving across time and space, and with a caring attitude finding and exploring fragments from various, sometimes contradictory production processes, one can help scattered and poorly represented dance and costume histories to emerge and contribute to the formation of identity and memory. (2) The use of bodily empathy with records can respectfully bring together the understanding of costume in performance as inseparable from the performer’s body with dance as an art form that explicitly uses the dancing costume-body as an expressive tool. It is argued that bodily empathy with records in the dance archive helps one access bodily holisms that create possibilities for exploring the potential of art to critically expose and render strange ideological systems and normativities.",TRUE,noun phrase
R456,Ethics,R140317,Costume in the dance archive: Towards a records-centred ethics of care,S560083,R140320,has research domain,R140322,Theorizations of costume as a co-creative agent of performance,"Focusing on the archival records of the production and performance of Dance in Trees and Church by the Swedish independent dance group Rubicon, this article conceptualizes a records-oriented costume ethics. Theorizations of costume as a co-creative agent of performance are brought into the dance archive to highlight the productivity of paying attention to costume in the making of performance history. Addressing recent developments within archival studies, a feminist ethics of care and radical empathy is employed, which is the capability to empathically engage with others, even if it can be difficult, as a means of exploring how a records-centred costume ethics can be conceptualized for the dance archive. The exploration resulted in two ethical stances useful for better attending to costume-bodies in the dance archive: (1) caring for costume-body relations in the dance archive means that a conventional, so-called static understanding of records as neutral carriers of facts is replaced by a more inclusive, expanding and infinite process. By moving across time and space, and with a caring attitude finding and exploring fragments from various, sometimes contradictory production processes, one can help scattered and poorly represented dance and costume histories to emerge and contribute to the formation of identity and memory. (2) The use of bodily empathy with records can respectfully bring together the understanding of costume in performance as inseparable from the performer’s body with dance as an art form that explicitly uses the dancing costume-body as an expressive tool. It is argued that bodily empathy with records in the dance archive helps one access bodily holisms that create possibilities for exploring the potential of art to critically expose and render strange ideological systems and normativities.",TRUE,noun phrase
R356,"Family, Life Course, and Society",R76554,The COVID-19 pandemic and subjective well-being: longitudinal evidence on satisfaction with work and family,S352202,R76558,Indicator for well-being,R77151,Family satisfaction,"ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies – remote work, short-time work and closure of schools and childcare – have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected.",TRUE,noun phrase
R356,"Family, Life Course, and Society",R76542,Up and About: Older Adults’ Well-being During the COVID-19 Pandemic in a Swedish Longitudinal Study,S352200,R76545,Indicator for well-being,R77143,Life satisfaction,"Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015–2020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65–71) from a larger survey of Swedish older adults. The 2020 wave, collected March 26–April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed.",TRUE,noun phrase
R356,"Family, Life Course, and Society",R76542,Up and About: Older Adults’ Well-being During the COVID-19 Pandemic in a Swedish Longitudinal Study,S351978,R76545,Examined (sub-)group,R77091,older adults,"Abstract Objectives To investigate early effects of the COVID-19 pandemic related to (a) levels of worry, risk perception, and social distancing; (b) longitudinal effects on well-being; and (c) effects of worry, risk perception, and social distancing on well-being. Methods We analyzed annual changes in four aspects of well-being over 5 years (2015–2020): life satisfaction, financial satisfaction, self-rated health, and loneliness in a subsample (n = 1,071, aged 65–71) from a larger survey of Swedish older adults. The 2020 wave, collected March 26–April 2, included measures of worry, risk perception, and social distancing in response to COVID-19. Results (a) In relation to COVID-19: 44.9% worried about health, 69.5% about societal consequences, 25.1% about financial consequences; 86.4% perceived a high societal risk, 42.3% a high risk of infection, and 71.2% reported high levels of social distancing. (b) Well-being remained stable (life satisfaction and loneliness) or even increased (self-rated health and financial satisfaction) in 2020 compared to previous years. (c) More worry about health and financial consequences was related to lower scores in all four well-being measures. Higher societal worry and more social distancing were related to higher well-being. Discussion In the early stage of the pandemic, Swedish older adults on average rated their well-being as high as, or even higher than, previous years. However, those who worried more reported lower well-being. Our findings speak to the resilience, but also heterogeneity, among older adults during the pandemic. Further research, on a broad range of health factors and long-term psychological consequences, is needed.",TRUE,noun phrase
R356,"Family, Life Course, and Society",R76554,The COVID-19 pandemic and subjective well-being: longitudinal evidence on satisfaction with work and family,S352214,R77087,Indicator for well-being,R77152,Work satisfaction,"ABSTRACT This paper provides a timely evaluation of whether the main COVID-19 lockdown policies – remote work, short-time work and closure of schools and childcare – have an immediate effect on the German population in terms of changes in satisfaction with work and family life. Relying on individual level panel data collected before and during the lockdown, we examine (1) how family satisfaction and work satisfaction of individuals have changed over the lockdown period, and (2) how lockdown-driven changes in the labour market situation (i.e. working remotely and being sent on short-time work) have affected satisfactions. We apply first-difference regressions for mothers, fathers, and persons without children. Our results show a general decrease in family satisfaction. We also find an overall decline in work satisfaction which is most pronounced for mothers and those without children who have to switch to short-time work. In contrast, fathers' well-being is less affected negatively and their family satisfaction even increased after changing to short-time work. We conclude that while the lockdown circumstances generally have a negative effect on the satisfaction with work and family of individuals in Germany, effects differ between childless persons, mothers, and fathers with the latter being least negatively affected.",TRUE,noun phrase
R83,Food Processing,R111114,Development of optimized substitution ratio for wheatcassava-african yam bean flour composite for Nigerian bread industries,S505926,R111116,non wheat flour,L365204,Cassava and African Yam Bean,"An optimization study of the mix ratio for substitution of Wheat flour with Cassava and African Yam Bean flours (AYB) was carried out and reported in this paper. The aim was to obtain a mix ratio that would optimise selected physical properties of the bread. Wheat flour was substituted with Cassava and African Yam Bean flours at different levels: 80% to 100% of wheat, 0% to 10% of cassava flour and 0% to 10% for AYB flour. The experiment was conducted in mixture design which was generated and analysed by Design-Expert Software 11 version. The Composite dough was prepared in different mix ratios according to the design matrix and subsequently baked under the same conditions and analysed for the following loaf quality attributes: Loaf Specific Volume, Bread Crumb Hardness and Crumb Colour Index as response variables. The objective functions were to maximize Loaf Specific Volume, minimize Wheat flour, Bread Crumb Hardness and Crumb Colour Index to obtain the most suitable substitution ratio acceptable to consumers. Predictive models for the response variables were developed with the coefficient of determination (R) of 0.991 for Loaf Specific Volume (LSV) while that of Bread Crumb Hardness (BCH) and Crumb Colour Index (CCI) were 0.834 and 0.895 respectively at 95% confidence interval (CI).The predicted optimal substitution ratio was obtained as follows: 88% Wheat flour, 10% Cassava flour, and 2% AYB flour. At this formulation, the predicted Loaf Specific Volume was 2.11cm/g, Bread Crumb Hardness was 25.12N, and Crumb Colour Index was 18.88.The study shows that addition of 2% of AYB flour in the formulation would help to optimise the LSV, BCH and the CCI of the Wheat-Cassava flour bread at the mix ratio of 88:10. Application of the results of this study in bread industries will reduce the cost of bread in Nigeria, which is influenced by the rising cost of imported wheat. This is a significant development because wheat flour was the sole baking flour in Nigeria before wheat substitution initiative.",TRUE,noun phrase
R83,Food Processing,R111050,"Optimisation of raw tooke flour, vital gluten and water absorption in tooke/wheat composite bread, using response surface methodology (Part II)",S505681,R111052,has methodology,R111054,response surface methodology,"The objective of this study was to optimise raw tooke flour-(RTF), vital gluten (VG) and water absorption (WA) with respect to bread-making quality and cost effectiveness of RTF/wheat composite flour. The hypothesis generated for this study was that optimal substitution of RTF and VG into wheat has no significant effect on baking quality of the resultant composite flour. A basic white wheat bread recipe was adopted and response surface methodology (RSM) procedures applied. A D-optimal design was employed with the following variables: RTF (x1) 0-33%, WA (x2) -2FWA to +2FWA and VG (x3) 0 - 3%. Seven responses were modelled. Baking worth number, volume yield and cost were simultaneously optimized using desirability function approach. Models developed adequately described the relationships and were confirmed by validation studies. RTF showed the greatest effect on all models, which effect impaired baking performance of composite flour. VG and Farinograph water absorption (FWA) as well as their interaction improved bread quality. Vitality of VG was enhanced by RTF. The optimal formulation for maximum baking quality was 0.56%(x1), 0.33%(x2) and -1.24(x3) while a formulation of 22%(x1), 3%(x2) and +1.13(x3) maximized RTF incorporation in the respective and composite bread quality at lowest cost. Thus, the set hypothesis was not rejected. Key words: Raw tooke flour, composite bread, baking quality, response surface methodology, Farinograph water absorption, vital gluten.",TRUE,noun phrase
R38,Genomics,R50397,The application of RNA sequencing for the diagnosis and genomic classification of pediatric acute lymphoblastic leukemia,S154531,R50404,Disease,R50549,acute lymphoblastic leukemia,"Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.",TRUE,noun phrase
R38,Genomics,R50397,The application of RNA sequencing for the diagnosis and genomic classification of pediatric acute lymphoblastic leukemia,S154268,R50404,Population size,R50452,b-cell ALL,"Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.",TRUE,noun phrase
R317,Geographic Information Sciences,R111061,Reversed urbanism: Inferring urban performance through behavioral patterns in temporal telecom data,S505796,R111064,Has method,R69557,Linear Regression,"Abstract A fundamental aspect of well performing cities is successful public spaces. For centuries, understanding these places has been limited to sporadic observations and laborious data collection. This study proposes a novel methodology to analyze citywide, discrete urban spaces using highly accurate anonymized telecom data and machine learning algorithms. Through superposition of human dynamics and urban features, this work aims to expose clear correlations between the design of the city and the behavioral patterns of its users. Geolocated telecom data, obtained for the state of Andorra, were initially analyzed to identify “stay-points”—events in which cellular devices remain within a certain roaming distance for a given length of time. These stay-points were then further analyzed to find clusters of activity characterized in terms of their size, persistence, and diversity. Multivariate linear regression models were used to identify associations between the formation of these clusters and various urban features such as urban morphology or land-use within a 25–50 meters resolution. Some of the urban features that were found to be highly related to the creation of large, diverse and long-lasting clusters were the presence of service and entertainment amenities, natural water features, and the betweenness centrality of the road network; others, such as educational and park amenities were shown to have a negative impact. Ultimately, this study suggests a “reversed urbanism” methodology: an evidence-based approach to urban design, planning, and decision making, in which human behavioral patterns are instilled as a foundational design tool for inferring the success rates of highly performative urban places.",TRUE,noun phrase
R317,Geographic Information Sciences,R111061,Reversed urbanism: Inferring urban performance through behavioral patterns in temporal telecom data,S505821,R111064,Has method,R111093,Multivariate linear regression,"Abstract A fundamental aspect of well performing cities is successful public spaces. For centuries, understanding these places has been limited to sporadic observations and laborious data collection. This study proposes a novel methodology to analyze citywide, discrete urban spaces using highly accurate anonymized telecom data and machine learning algorithms. Through superposition of human dynamics and urban features, this work aims to expose clear correlations between the design of the city and the behavioral patterns of its users. Geolocated telecom data, obtained for the state of Andorra, were initially analyzed to identify “stay-points”—events in which cellular devices remain within a certain roaming distance for a given length of time. These stay-points were then further analyzed to find clusters of activity characterized in terms of their size, persistence, and diversity. Multivariate linear regression models were used to identify associations between the formation of these clusters and various urban features such as urban morphology or land-use within a 25–50 meters resolution. Some of the urban features that were found to be highly related to the creation of large, diverse and long-lasting clusters were the presence of service and entertainment amenities, natural water features, and the betweenness centrality of the road network; others, such as educational and park amenities were shown to have a negative impact. Ultimately, this study suggests a “reversed urbanism” methodology: an evidence-based approach to urban design, planning, and decision making, in which human behavioral patterns are instilled as a foundational design tool for inferring the success rates of highly performative urban places.",TRUE,noun phrase
R317,Geographic Information Sciences,R110803,A new insight into land use classification based on aggregated mobile phone data,S504921,R110805,Data,L364680,Normalized hourly call volume,"Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.",TRUE,noun phrase
R317,Geographic Information Sciences,R110803,A new insight into land use classification based on aggregated mobile phone data,S504922,R110805,Data,L364681,Total call volume,"Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.",TRUE,noun phrase
R317,Geographic Information Sciences,R78263,"Evaluating the effect of visually represented geodata uncertainty on decision-making: systematic review, lessons learned, and recommendations",S354016,R78265,Reviews,R67737,Visualization Techniques,"ABSTRACT For many years, uncertainty visualization has been a topic of research in several disparate fields, particularly in geographical visualization (geovisualization), information visualization, and scientific visualization. Multiple techniques have been proposed and implemented to visually depict uncertainty, but their evaluation has received less attention by the research community. In order to understand how uncertainty visualization influences reasoning and decision-making using spatial information in visual displays, this paper presents a comprehensive review of uncertainty visualization assessments from geovisualization and related fields. We systematically analyze characteristics of the studies under review, i.e., number of participants, tasks, evaluation metrics, etc. An extensive summary of findings with respect to the effects measured or the impact of different visualization techniques helps to identify commonalities and differences in the outcome. Based on this summary, we derive “lessons learned” and provide recommendations for carrying out evaluation of uncertainty visualizations. As a basis for systematic evaluation, we present a categorization of research foci related to evaluating the effects of uncertainty visualization on decision-making. By assigning the studies to categories, we identify gaps in the literature and suggest key research questions for the future. This paper is the second of two reviews on uncertainty visualization. It follows the first that covers the communication of uncertainty, to investigate the effects of uncertainty visualization on reasoning and decision-making.",TRUE,noun phrase
R146,Geology,R109225,"Mapping mineralogical alteration using principal-component analysis and matched filter processing in the Takab area, north-west Iran, from ASTER data",S498404,R109226,Minerals Mapped/ Identified,L360744,Argillic alteration,"The Takab area, located in north‐west Iran, is an important gold mineralized region with a long history of gold mining. The gold is associated with toxic metals/metalloids. In this study, Advanced Space Borne Thermal Emission and Reflection Radiometer data are evaluated for mapping gold and base‐metal mineralization through alteration mapping. Two different methods are used for argillic and silicic alteration mapping: selective principal‐component analysis and matched filter processing (MF). Running a selective principal‐component analysis using the main spectral characteristics of key alteration minerals enhanced the altered areas in PC2. MF using spectral library and laboratory spectra of the study area samples gave similar results. However, MF, using the image reference spectra from principal component (PC) images, produced the best results and indicated the advantage of using image spectra rather than library spectra in spectral mapping techniques. It seems that argillic alteration is more effective than silicic alteration for exploration purposes. It is suggested that alteration mapping can also be used to delineate areas contaminated by potentially toxic metals.",TRUE,noun phrase
R146,Geology,R108144,Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques,S493171,R108145,Band Parameters,R108119,Band Ratio,"Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords—Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.",TRUE,noun phrase
R146,Geology,R109216,Principal Component Analysis for Alteration Mapping,S498393,R109217,Data Dimensionality Reduction Methods,R108113,Principal Component Analysis (PCA),"Reducing the number of image bands input for principal component analysis (PCA) ensures that certain materials will not be mapped and increases the likelihood that others will be unequivocally mapped into only one of the principal component images. In arid terrain, PCA of four TM bands will avoid iron-oxide and thus more reliably detect hydroxyl-bearing minerals if only one input band is from the visible spectrum. PCA for iron-oxide mapping will avoid hydroxyls if only one of the SWIR bands is used. A simple principal component color composite image can then be created in which anomalous concentrations of hydroxyl, hydroxyl plus iron-oxide, and iron-oxide are displayed brightly in red-green-blue (RGB) color space. This composite allows qualitative inferences on alteration type and intensity to be made which can be widely applied.",TRUE,noun phrase
R146,Geology,R109222,"Targeting key alteration minerals in epithermal deposits in Patagonia, Argentina, using ASTER imagery and principal component analysis",S498420,R109223,Data Dimensionality Reduction Methods,R108113,Principal Component Analysis (PCA),"Principal component analysis (PCA) is an image processing technique that has been commonly applied to Landsat Thematic Mapper (TM) data to locate hydrothermal alteration zones related to metallic deposits. With the advent of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a 14-band multispectral sensor operating onboard the Earth Observation System (EOS)-Terra satellite, the availability of spectral information in the shortwave infrared (SWIR) portion of the electromagnetic spectrum has been greatly increased. This allows detailed spectral characterization of surface targets, particularly of those belonging to the groups of minerals with diagnostic spectral features in this wavelength range, including phyllosilicates (‘clay’ minerals), sulphates and carbonates, among others. In this study, PCA was applied to ASTER bands covering the SWIR with the objective of mapping the occurrence of mineral endmembers related to an epithermal gold prospect in Patagonia, Argentina. The results illustrate ASTER's ability to provide information on alteration minerals which are valuable for mineral exploration activities and support the role of PCA as a very effective and robust image processing technique for that purpose.",TRUE,noun phrase
R146,Geology,R108144,Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques,S493186,R108145,Spectral Mapping Technique,R108184,Spectral Angle Mapper (SAM),"Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain. Digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of South East Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level1R) dataset have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members and classified mineral maps have been produced using Spectral Angle Mapper (SAM) method. Results were validated with the geological map of the area which shows positive agreement with the image processing outputs. Thus, this study concludes that the band ratios and image processing in combination play significant role in demarcation of alteration zones which may provide pathfinders for mineral prospecting studies. Keywords—Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.",TRUE,noun phrase
R148,Geophysics and Seismology,R110002,"Application of support vector regression analysis to estimate total organic carbon content of Cambay shale in Cambay basin, India – a case study",S501706,R110004,Area of study,R110014,"Cambay Basin, India","Abstract The objective of the present study is to estimate total organic carbon (TOC) content over the entire thickness of Cambay Shale, in the boreholes of Jambusar–Broach block of Cambay Basin, India. To achieve this objective, support vector regression (SVR), a supervised data mining technique, has been utilized using five basic wireline logs as input variables. Suitable SVR model has been developed by selecting epsilon-SVR algorithm and varying three different kernel functions and parameters like gamma and cost on a sample dataset. The best result is obtained when the radial-basis kernel function with gamma = 1 and cost = 1, are used. Finally, the performance of developed SVR model is compared with the ΔlogR method. The TOC computed by SVR method is found to be more precise than the ΔlogR method, as it has better agreement with the core-TOC. Thus, in the present study area, the SVR method is found to be a powerful tool for estimating TOC of Cambay Shale in a continuous and rapid manner.",TRUE,noun phrase
R148,Geophysics and Seismology,R110002,"Application of support vector regression analysis to estimate total organic carbon content of Cambay shale in Cambay basin, India – a case study",S501740,R110004,Machine learning algorithms,L362755,support vector regression,"Abstract The objective of the present study is to estimate total organic carbon (TOC) content over the entire thickness of Cambay Shale, in the boreholes of Jambusar–Broach block of Cambay Basin, India. To achieve this objective, support vector regression (SVR), a supervised data mining technique, has been utilized using five basic wireline logs as input variables. Suitable SVR model has been developed by selecting epsilon-SVR algorithm and varying three different kernel functions and parameters like gamma and cost on a sample dataset. The best result is obtained when the radial-basis kernel function with gamma = 1 and cost = 1, are used. Finally, the performance of developed SVR model is compared with the ΔlogR method. The TOC computed by SVR method is found to be more precise than the ΔlogR method, as it has better agreement with the core-TOC. Thus, in the present study area, the SVR method is found to be a powerful tool for estimating TOC of Cambay Shale in a continuous and rapid manner.",TRUE,noun phrase
R136,Graphics,R38461,Rich Representations of Visual Content for Screen Reader Users,S126214,R38463,User recommendation,R38465,Design space,"Alt text (short for ""alternative text"") is descriptive text associated with an image in HTML and other document formats. Screen reader technologies speak the alt text aloud to people who are visually impaired. Introduced with HTML 2.0 in 1995, the alt attribute has not evolved despite significant changes in technology over the past two decades. In light of the expanding volume, purpose, and importance of digital imagery, we reflect on how alt text could be supplemented to offer a richer experience of visual content to screen reader users. Our contributions include articulating the design space of representations of visual content for screen reader users, prototypes illustrating several points within this design space, and evaluations of several of these new image representations with people who are blind. We close by discussing the implications of our taxonomy, prototypes, and user study findings.",TRUE,noun phrase
R136,Graphics,R35052,Investigating Correlations of Automatically Extracted Multimodal Features and Lecture Video Quality,S122373,R35054,Has evaluation,R4856,User Study,"Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, for instance, independent of the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search as learning scenarios. In this paper, we investigate whether automatically extracted features are correlated to quality aspects of a video. A set of scholarly videos from a Mass Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed which are derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed.",TRUE,noun phrase
R136,Graphics,R6531,Using Semantics for Interactive Visual Analysis of Linked Open Data,S8112,R6532,implementation,R6533,Vis Wizard,"Providing easy to use methods for visual analysis of Linked Data is often hindered by the complexity of semantic technologies. On the other hand, semantic information inherent to Linked Data provides opportunities to support the user in interactively analysing the data. This paper provides a demonstration of an interactive, Web-based visualisation tool, the ""Vis Wizard"", which makes use of semantics to simplify the process of setting up visualisations, transforming the data and, most importantly, interactively analysing multiple datasets using brushing and linking methods.",TRUE,noun phrase
R93,Human and Clinical Nutrition,R182137,Understanding the Linkages between Crop Diversity and Household Dietary Diversity in the Semi-Arid Regions of India,S704520,R182139,statistical_methods,R182140,Multiple linear regression model,"Agriculture is fundamental to achieving nutrition goals; it provides the food, energy, and nutrients essential for human health and well-being. This paper has examined crop diversity and dietary diversity in six villages using the ICRISAT Village Level Studies (VLS) data from the Telangana and Maharashtra states of India. The study has used the data of cultivating households for constructing the crop diversity index while dietary diversity data is from the special purpose nutritional surveys conducted by ICRISAT in the six villages. The study has revealed that the cropping pattern is not uniform across the six study villages with dominance of mono cropping in Telangana villages and of mixed cropping in Maharashtra villages. The analysis has indicated a positive and significant correlation between crop diversity and household dietary diversity at the bivariate level. In multiple linear regression model, controlling for the other covariates, crop diversity has not shown a significant association with household dietary diversity. However, other covariates have shown strong association with dietary diversity. The regression results have revealed that households which cultivated minimum one food crop in a single cropping year have a significant and positive relationship with dietary diversity. From the study it can be inferred that crop diversity alone does not affect the household dietary diversity in the semi-arid tropics. Enhancing the evidence base and future research, especially in the fragile environment of semi-arid tropics, is highly recommended.",TRUE,noun phrase
R93,Human and Clinical Nutrition,R184009,"Market Access, Production Diversity, and Diet Diversity: Evidence From India",S707046,R184011,statistical_methods,R182404,Ordinary least squares regression,"Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using a Poisson regression specification and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women’s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households’ market integration (for purchase of non-staples) and increasing women’s awareness about nutrition. These are more impactful than raising production diversity.",TRUE,noun phrase
R40,Immunology and Infectious Disease,R185380,Estimating Uncertainty and Interpretability in Deep Learning for Coronavirus (COVID-19) Detection,S709980,R185381,Has implementation,L479444,Bayesian Convolutional Neural Network,"Deep Learning has achieved state of the art performance in medical imaging. However, these methods for disease detection focus exclusively on improving the accuracy of classification or predictions without quantifying uncertainty in a decision. Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians trust in the technology and therefore improve treatment. Today, the 2019 Coronavirus (SARS-CoV-2) infections are a major healthcare challenge around the world. Detecting COVID-19 in X-ray images is crucial for diagnosis, assessment and treatment. However, diagnostic uncertainty in the report is a challenging and yet inevitable task for radiologist. In this paper, we investigate how drop-weights based Bayesian Convolutional Neural Networks (BCNN) can estimate uncertainty in Deep Learning solution to improve the diagnostic performance of the human-machine team using publicly available COVID-19 chest X-ray dataset and show that the uncertainty in prediction is highly correlates with accuracy of prediction. We believe that the availability of uncertainty-aware deep learning solution will enable a wider adoption of Artificial Intelligence (AI) in a clinical setting.",TRUE,noun phrase
R40,Immunology and Infectious Disease,R142295,Phase 1 Assessment of the Safety and Immunogenicity of an mRNA-Lipid Nanoparticle Vaccine Candidate Against SARS-CoV-2 in Human Volunteers,S571857,R142296,Delivery Vehicle,R136238,Lipid nanoparticles,"There is an urgent need for vaccines to counter the COVID-19 pandemic due to infections with severe acute respiratory syndrome coronavirus (SARS-CoV-2). Evidence from convalescent sera and preclinical studies has identified the viral Spike (S) protein as a key antigenic target for protective immune responses. We have applied an mRNA-based technology platform, RNActive, to develop CVnCoV which contains sequence optimized mRNA coding for a stabilized form of S protein encapsulated in lipid nanoparticles (LNP). Following demonstration of protective immune responses against SARS-CoV-2 in animal models we performed a dose-escalation phase 1 study in healthy 18-60 year-old volunteers. This interim analysis shows that two doses of CVnCoV ranging from 2 µg to 12 µg per dose, administered 28 days apart were safe. No vaccine-related serious adverse events were reported. There were dose-dependent increases in frequency and severity of solicited systemic adverse events, and to a lesser extent of local reactions, but the majority were mild or moderate and transient in duration. Immune responses when measured as IgG antibodies against S protein or its receptor-binding domain (RBD) by ELISA, and SARS-CoV-2-virus neutralizing antibodies measured by micro-neutralization, displayed dose-dependent increases. Median titers measured in these assays two weeks after the second 12 µg dose were comparable to the median titers observed in convalescent sera from COVID-19 patients. Seroconversion (defined as a 4-fold increase over baseline titer) of virus neutralizing antibodies two weeks after the second vaccination occurred in all participants who received 12 µg doses. Preliminary results in the subset of subjects who were enrolled with known SARS-CoV-2 seropositivity at baseline show that CVnCoV is also safe and well tolerated in this population, and is able to boost the pre-existing immune response even at low dose levels. Based on these results, the 12 µg dose is selected for further clinical investigation, including a phase 2b/3 study that will investigate the efficacy, safety, and immunogenicity of the candidate vaccine CVnCoV.",TRUE,noun phrase
R42,Immunology of Infectious Disease,R110568,Longitudinal assessment of IFN-I activity and immune profile in critically ill COVID-19 patients with acute respiratory distress syndrome,S503664,R110570,Study population,R110571,Critically ill COVID-19 patients,"Abstract Background Since the onset of the pandemic, only few studies focused on longitudinal immune monitoring in critically ill COVID-19 patients with acute respiratory distress syndrome (ARDS) whereas their hospital stay may last for several weeks. Consequently, the question of whether immune parameters may drive or associate with delayed unfavorable outcome in these critically ill patients remains unsolved. Methods We present a dynamic description of immuno-inflammatory derangements in 64 critically ill COVID-19 patients including plasma IFNα2 levels and IFN-stimulated genes (ISG) score measurements. Results ARDS patients presented with persistently decreased lymphocyte count and mHLA-DR expression and increased cytokine levels. Type-I IFN response was initially induced with elevation of IFNα2 levels and ISG score followed by a rapid decrease over time. Survivors and non-survivors presented with apparent common immune responses over the first 3 weeks after ICU admission mixing gradual return to normal values of cellular markers and progressive decrease of cytokines levels including IFNα2. Only plasma TNF-α presented with a slow increase over time and higher values in non-survivors compared with survivors. This paralleled with an extremely high occurrence of secondary infections in COVID-19 patients with ARDS. Conclusions Occurrence of ARDS in response to SARS-CoV2 infection appears to be strongly associated with the intensity of immune alterations upon ICU admission of COVID-19 patients. In these critically ill patients, immune profile presents with similarities with the delayed step of immunosuppression described in bacterial sepsis.",TRUE,noun phrase
R351,Industrial and Organizational Psychology,R76559,Socioeconomic status and well-being during COVID-19: A resource-based examination.,S352203,R76566,Indicator for well-being,R77143,Life satisfaction,"The authors assess levels and within-person changes in psychological well-being (i.e., depressive symptoms and life satisfaction) from before to during the COVID-19 pandemic for individuals in the United States, in general and by socioeconomic status (SES). The data is from 2 surveys of 1,143 adults from RAND Corporation's nationally representative American Life Panel, the first administered between April-June, 2019 and the second during the initial peak of the pandemic in the United States in April, 2020. Depressive symptoms during the pandemic were higher than population norms before the pandemic. Depressive symptoms increased from before to during COVID-19 and life satisfaction decreased. Individuals with higher education experienced a greater increase in depressive symptoms and a greater decrease in life satisfaction from before to during COVID-19 in comparison to those with lower education. Supplemental analysis illustrates that income had a curvilinear relationship with changes in well-being, such that individuals at the highest levels of income experienced a greater decrease in life satisfaction from before to during COVID-19 than individuals with lower levels of income. We draw on conservation of resources theory and the theory of fundamental social causes to examine four key mechanisms (perceived financial resources, perceived control, interpersonal resources, and COVID-19-related knowledge/news consumption) underlying the relationship between SES and well-being during COVID-19. These resources explained changes in well-being for the sample as a whole but did not provide insight into why individuals of higher education experienced a greater decline in well-being from before to during COVID-19. (PsycInfo Database Record (c) 2020 APA, all rights reserved).",TRUE,noun phrase
R351,Industrial and Organizational Psychology,R76567,Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic.,S352204,R76571,Indicator for well-being,R77143,Life satisfaction,"The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved).",TRUE,noun phrase
R358,Inequality and Stratification,R75946,Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland,S352213,R77086,Indicator for well-being,R77143,Life satisfaction,"ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.",TRUE,noun phrase
R358,Inequality and Stratification,R75946,Who is most affected by the Corona crisis? An analysis of changes in stress and well-being in Switzerland,S352058,R77086,has data,R77116,Swiss Household Panel,"ABSTRACT This study analyses the consequences of the Covid-19 crisis on stress and well-being in Switzerland. In particular, we assess whether vulnerable groups in terms of social isolation, increased workload and limited socioeconomic resources are affected more than others. Using longitudinal data from the Swiss Household Panel, including a specific Covid-19 study, we estimate change score models to predict changes in perceived stress and life satisfaction at the end of the semi-lockdown in comparison to before the crisis. We find no general change in life satisfaction and a small decrease in stress. Yet, in line with our expectations, more vulnerable groups in terms of social isolation (young adults, Covid-19 risk group members, individuals without a partner), workload (women) and socioeconomic resources (unemployed and those who experienced a deteriorating financial situation) reported a decrease in life satisfaction. Stress levels decreased most strongly among high earners, workers on short-time work and the highly educated.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5533,R5027,Material,R5032,(elaborated) data products,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S494994,R5269,method,R5280,a graph database,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5535,R5027,Material,R5034,a Jupyter based prototype infrastructure,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5812,R5269,Material,R5276,a Linked Open Dataset,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R6758,Situational Knowledge Representation for Traffic Observed by a Pavement Vibration Sensor Network,S9243,R6824,utilizes,R6760,A sensor network,"Information systems that build on sensor networks often process data produced by measuring physical properties. These data can serve in the acquisition of knowledge for real-world situations that are of interest to information services and, ultimately, to people. Such systems face a common challenge, namely the considerable gap between the data produced by measurement and the abstract terminology used to describe real-world situations. We present and discuss the architecture of a software system that utilizes sensor data, digital signal processing, machine learning, and knowledge representation and reasoning to acquire, represent, and infer knowledge about real-world situations observable by a sensor network. We demonstrate the application of the system to vehicle detection and classification by measurement of road pavement vibration. Thus, real-world situations involve vehicles and information for their type, speed, and driving direction.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5539,R5027,Process,R5038,a studied natural phenomenon,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5806,R5269,Material,R5270,a substantial pool,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5709,R5171,Material,R5185,active SPARQL endpoints,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5706,R5171,Material,R5182,an integrated query engine,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R109860,Applying weighted PageRank to author citation networks,S501289,R109862,Scientific network(s),L362491,Author Citation,"This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956–2008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. © 2011 Wiley Periodicals, Inc.",TRUE,noun phrase
R278,Information Science,R74640,BioPortal as a dataset of linked biomedical ontologies and terminologies in RDF,S342920,R74642,Application field,R74648,Biomedical ontologies,"BioPortal is a repository of biomedical ontologies-the largest such repository, with more than 300 ontologies to date. This set includes ontologies that were developed in OWL, OBO and other formats, as well as a large number of medical terminologies that the US National Library of Medicine distributes in its own proprietary format. We have published the RDF version of all these ontologies at http://sparql.bioontology.org. This dataset contains 190M triples, representing both metadata and content for the 300 ontologies. We use the metadata that the ontology authors provide and simple RDFS reasoning in order to provide dataset users with uniform access to key properties of the ontologies, such as lexical properties for the class names and provenance data. The dataset also contains 9.8M cross-ontology mappings of different types, generated both manually and automatically, which come with their own metadata.",TRUE,noun phrase
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327339,R68933,Data,R68939,challenges of size and complexity,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5810,R5269,Material,R5274,collective biodiversity knowledge,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5807,R5269,Material,R5271,communal knowledge,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538548,R136021,keywords,R136025,content recommenders,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,noun phrase
R278,Information Science,R146490,Rapid implementation of mobile technology for real-time epidemiology of COVID-19,S586588,R146501,Epidemiological surveillance software,R146512,COVID Symptom Tracker,"The rapid pace of the coronavirus disease 2019 (COVID-19) pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) presents challenges to the robust collection of population-scale data to address this global health crisis. We established the COronavirus Pandemic Epidemiology (COPE) Consortium to unite scientists with expertise in big data research and epidemiology to develop the COVID Symptom Study, previously known as the COVID Symptom Tracker, mobile application. This application—which offers data on risk factors, predictive symptoms, clinical outcomes, and geographical hotspots—was launched in the United Kingdom on 24 March 2020 and the United States on 29 March 2020 and has garnered more than 2.8 million users as of 2 May 2020. Our initiative offers a proof of concept for the repurposing of existing approaches to enable rapidly scalable epidemiologic data collection and analysis, which is critical for a data-driven response to this public health challenge.",TRUE,noun phrase
R278,Information Science,R146600,Coronavirus disease 2019 (COVID-19) surveillance system: Development of COVID-19 minimum data set and interoperable reporting framework,S586872,R146602,Epidemiological surveillance system purpose,R146603,COVID-19 surveillance,"INTRODUCTION: The 2019 coronavirus disease (COVID-19) is a major global health concern. Joint efforts for effective surveillance of COVID-19 require immediate transmission of reliable data. In this regard, a standardized and interoperable reporting framework is essential in a consistent and timely manner. Thus, this research aimed at to determine data requirements towards interoperability. MATERIALS AND METHODS: In this cross-sectional and descriptive study, a combination of literature study and expert consensus approach was used to design COVID-19 Minimum Data Set (MDS). A MDS checklist was extracted and validated. The definitive data elements of the MDS were determined by applying the Delphi technique. Then, the existing messaging and data standard templates (Health Level Seven-Clinical Document Architecture [HL7-CDA] and SNOMED-CT) were used to design the surveillance interoperable framework. RESULTS: The proposed MDS was divided into administrative and clinical sections with three and eight data classes and 29 and 40 data fields, respectively. Then, for each data field, structured data values along with SNOMED-CT codes were defined and structured according HL7-CDA standard. DISCUSSION AND CONCLUSION: The absence of effective and integrated system for COVID-19 surveillance can delay critical public health measures, leading to increased disease prevalence and mortality. The heterogeneity of reporting templates and lack of uniform data sets hamper the optimal information exchange among multiple systems. Thus, developing a unified and interoperable reporting framework is more effective to prompt reaction to the COVID-19 outbreak.",TRUE,noun phrase
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711754,R186169,Material,R186177,data across nodes,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495081,R108654,Has result,R108683,data import workflow,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495049,R108654,Material,R108655,Data papers,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327343,R68933,Method,R68943,DBpedia Information Extraction Framework (DIEF),"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,noun phrase
R278,Information Science,R38544,Estimating relative depth in single images via rankboost,S126446,R38546,Has approach,R38156,Deep Learning,"In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.",TRUE,noun phrase
R278,Information Science,R38066,Ontology-based exchange of product data semantics,S125169,R38068,hasRepresentationMasterData,R38073,Description Logic,"An increasing trend toward product development in a collaborative environment has resulted in the use of various software tools to enhance the product design. This requires a meaningful representation and exchange of product data semantics across different application domains. This paper proposes an ontology-based framework to enable such semantic interoperability. A standards-based approach is used to develop a Product Semantic Representation Language (PSRL). Formal description logic (DAML+OIL) is used to encode the PSRL. Mathematical logic and corresponding reasoning is used to determine semantic equivalences between an application ontology and the PSRL. The semantic equivalence matrix enables resolution of ambiguities created due to differences in syntaxes and meanings associated with terminologies in different application domains. Successful semantic interoperability will form the basis of seamless communication and thereby enable better integration of product development systems. Note to Practitioners-Semantic interoperability of product information refers to automating the exchange of meaning associated with the data, among information resources throughout the product development. This research is motivated by the problems in enabling such semantic interoperability. First, product information is formalized into an explicit, extensible, and comprehensive product semantics representation language (PSRL). The PSRL is open and based on standard W3C constructs. Next, in order to enable semantic translation, the paper describes a procedure to semi-automatically determine mappings between exactly equivalent concepts across representations of the interacting applications. The paper demonstrates that this approach to translation is feasible, but it has not yet been implemented commercially. Current limitations and the directions for further research are discussed. Future research addresses the determination of semantic similarities (not exact equivalences) between the interacting information resources.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603075,R150377,Process,R150380,design problem,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5704,R5171,Material,R5180,distributed data,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R46590,Rule-based named entity recognition for drug-related crime news documents,S142494,R46591,Language/domain,L87614,Drug-related crime news documents,"Drug abuse pertains to the consumption of a substance that may induce adverse effects to a person. In international security studies, drug trafficking has become an important topic. In this regard, drug-related crimes are identified as an extremely significant challenge faced by any community. Several techniques for investigations in the crime domain have been implemented by many researchers. However, most of these researchers focus on extracting general crime entities. The number of studies that focus on the drug crime domain is relatively limited. This paper mainly aims to propose a rule-based named entity recognition model for drug-related crime news documents. In this work, a set of heuristic and grammatical rules is used to extract named entities, such as types of drugs, amount of drugs, price of drugs, drug hiding methods, and the nationality of the suspect. A set of grammatical and heuristic rules is established based on part-of-speech information, developed gazetteers, and indicator word lists. The combined approach of heuristic and grammatical rules achieves a good performance with an overall precision of 86%, a recall of 87%, and an F1-measure of 87%. Results indicate that the ensemble of both heuristic and grammatical rules improves the extraction effectiveness in terms of macro-F1 for all entities.",TRUE,noun phrase
R278,Information Science,R146244,Improvements in Timeliness Resulting from Implementation of Electronic Laboratory Reporting and an Electronic Disease Surveillance System,S585485,R146246,Epidemiological surveillance software,R146251,Electronic Laboratory Reporting,"Objectives. Electronic laboratory reporting (ELR) reduces the time between communicable disease diagnosis and case reporting to local health departments (LHDs). However, it also imposes burdens on public health agencies, such as increases in the number of unique and duplicate case reports. We assessed how ELR affects the timeliness and accuracy of case report processing within public health agencies. Methods. Using data from May–August 2010 and January–March 2012, we assessed timeliness by calculating the time between receiving a case at the LHD and reporting the case to the state (first stage of reporting) and between submitting the report to the state and submitting it to the Centers for Disease Control and Prevention (second stage of reporting). We assessed accuracy by calculating the proportion of cases returned to the LHD for changes or additional information. We compared timeliness and accuracy for ELR and non-ELR cases. Results. ELR was associated with decreases in case processing time (median = 40 days for ELR cases vs. 52 days for non-ELR cases in 2010; median = 20 days for ELR cases vs. 25 days for non-ELR cases in 2012; both p<0.001). ELR also allowed time to reduce the backlog of unreported cases. Finally, ELR was associated with higher case reporting accuracy (in 2010, 2% of ELR case reports vs. 8% of non-ELR case reports were returned; in 2012, 2% of ELR case reports vs. 6% of non-ELR case reports were returned; both p<0.001). Conclusion. The overall impact of increased ELR is more efficient case processing at both local and state levels.",TRUE,noun phrase
R278,Information Science,R38544,Estimating relative depth in single images via rankboost,S126417,R38546,Has metric,R6036,Error rate,"In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.",TRUE,noun phrase
R278,Information Science,R145085,"Developing open source, self-contained disease surveillance software applications for use in resource-limited settings",S580529,R145088,Epidemiological surveillance software,R145099,ESSENCE Desktop,"Abstract Background Emerging public health threats often originate in resource-limited countries. In recognition of this fact, the World Health Organization issued revised International Health Regulations in 2005, which call for significantly increased reporting and response capabilities for all signatory nations. Electronic biosurveillance systems can improve the timeliness of public health data collection, aid in the early detection of and response to disease outbreaks, and enhance situational awareness. Methods As components of its Suite for Automated Global bioSurveillance (SAGES) program, The Johns Hopkins University Applied Physics Laboratory developed two open-source, electronic biosurveillance systems for use in resource-limited settings. OpenESSENCE provides web-based data entry, analysis, and reporting. ESSENCE Desktop Edition provides similar capabilities for settings without internet access. Both systems may be configured to collect data using locally available cell phone technologies. Results ESSENCE Desktop Edition has been deployed for two years in the Republic of the Philippines. Local health clinics have rapidly adopted the new technology to provide daily reporting, thus eliminating the two-to-three week data lag of the previous paper-based system. Conclusions OpenESSENCE and ESSENCE Desktop Edition are two open-source software products with the capability of significantly improving disease surveillance in a wide range of resource-limited settings. These products, and other emerging surveillance technologies, can assist resource-limited countries' compliance with the revised International Health Regulations.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495053,R108654,Material,R108659,European Nucleotide Archive,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R108690,Open Science meets Food Modelling: Introducing the Food Modelling Journal (FMJ),S496099,R108916,Method,R108919,Food modelling,"This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.",TRUE,noun phrase
R278,Information Science,R108690,Open Science meets Food Modelling: Introducing the Food Modelling Journal (FMJ),S496097,R108916,Has implementation,R108691,Food Modelling Journal,"This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.",TRUE,noun phrase
R278,Information Science,R108690,Open Science meets Food Modelling: Introducing the Food Modelling Journal (FMJ),S496121,R108916,used in,R108691,Food Modelling Journal,"This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5538,R5027,Process,R5037,further processing,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603085,R150377,Material,R150390,generalizable reusable solution,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495079,R108654,Data,R108665,genomic and other omics datasets,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495059,R108654,Material,R108665,genomic and other omics datasets,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495078,R108654,Data,R108658,genomics metadata,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495052,R108654,Material,R108658,genomics metadata,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5707,R5171,Material,R5183,heterogeneous RDF data sources,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laundromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such a large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from a large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laundromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R68408,"A Systematic Review of Information Literacy Programs in Higher Education: Effects of Face-to-Face, Online, and Blended Formats on Student Skills and Views",S328216,R68411,Educational context,R27741,Higher Education,"Objective – Evidence from systematic reviews a decade ago suggested that face-to-face and online methods to provide information literacy training in universities were equally effective in terms of skills learnt, but there was a lack of robust comparative research. The objectives of this review were (1) to update these findings with the inclusion of more recent primary research; (2) to further enhance the summary of existing evidence by including studies of blended formats (with components of both online and face-to-face teaching) compared to single format education; and (3) to explore student views on the various formats employed. Methods – Authors searched seven databases along with a range of supplementary search methods to identify comparative research studies, dated January 1995 to October 2016, exploring skill outcomes for students enrolled in higher education programs. There were 33 studies included, of which 19 also contained comparative data on student views. Where feasible, meta-analyses were carried out to provide summary estimates of skills development and a thematic analysis was completed to identify student views across the different formats. Results – A large majority of studies (27 of 33; 82%) found no statistically significant difference between formats in skills outcomes for students. Of 13 studies that could be included in a meta-analysis, the standardized mean difference (SMD) between skill test results for face-to-face versus online formats was -0.01 (95% confidence interval -0.28 to 0.26). Of ten studies comparing blended to single delivery format, seven (70%) found no statistically significant difference between formats, and the remaining studies had mixed outcomes. 
From the limited evidence available across all studies, there is a potential dichotomy between outcomes measured via skill test and assignment (course work) which is worthy of further investigation. The thematic analysis of student views found no preference in relation to format on a range of measures in 14 of 19 studies (74%). The remainder identified that students perceived advantages and disadvantages for each format but had no overall preference. Conclusions – There is compelling evidence that information literacy training is effective and well received across a range of delivery formats. Further research looking at blended versus single format methods, and the time implications for each, as well as comparing assignment to skill test outcomes would be valuable. Future studies should adopt a methodologically robust design (such as the randomized controlled trial) with a large student population and validated outcome measures.",TRUE,noun phrase
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327340,R68933,Data,R68940,higher level of maintainability,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility, to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort.",TRUE,noun phrase
R278,Information Science,R46643,Active machine learning technique for named entity recognition,S142858,R46644,Language/domain,L87874,Hindi and Bengali,"One difficulty with machine learning for information extraction is the high cost of collecting labeled examples. Active Learning can make more efficient use of the learner's time by asking them to label only instances that are most useful for the trainer. In random sampling approach, unlabeled data is selected for annotation at random and thus can't yield the desired results. In contrast, active learning selects the useful data from a huge pool of unlabeled data for the classifier. The strategies used often classify the corpus tokens (or, data points) into wrong classes. The classifier is confused between two categories if the token is located near the margin. We propose a novel method for solving this problem and show that it favorably results in the increased performance. Our approach is based on the supervised machine learning algorithm, namely Support Vector Machine (SVM). The proposed approach is applied for solving the problem of named entity recognition (NER) in two Indian languages, namely Hindi and Bengali. Results show that proposed active learning based technique indeed improves the performance of the system.",TRUE,noun phrase
R278,Information Science,R70868,The Cooperation Databank,S353329,R70869,Domain,R78068,Human cooperation,"Publishing studies using standardized, machine-readable formats will enable machines to perform meta-analyses on-demand. To build a semantically-enhanced technology that embodies these functions, we developed the Cooperation Databank (CoDa) – a databank that contains 2,641 studies on human cooperation (1958-2017) conducted in 78 countries involving 356,680 participants. Experts annotated these studies for 312 variables, including the quantitative results (13,959 effect sizes). We designed an ontology that defines and relates concepts in cooperation research and that can represent the relationships between individual study results. We have created a research platform that, based on the dataset, enables users to retrieve studies that test the relation of variables with cooperation, visualize these study results, and perform (1) meta-analyses, (2) meta-regressions, (3) estimates of publication bias, and (4) statistical power analyses for future studies. We leveraged the dataset with visualization tools that allow users to explore the ontology of concepts in cooperation research and to plot a citation network of the history of studies. CoDa offers a vision of how publishing studies in a machine-readable format can establish institutions and tools that improve scientific practices and knowledge.",TRUE,noun phrase
R278,Information Science,R109146,Integrity constraints in OWL,S504246,R109148,Has result,L364250,IC validation can be reduced to query answering,"In many data-centric semantic web applications, it is desirable to use OWL to encode the Integrity Constraints (IC) that must be satisfied by instance data. However, challenges arise due to the Open World Assumption (OWA) and the lack of a Unique Name Assumption (UNA) in OWL's standard semantics. In particular, conditions that trigger constraint violations in systems using the Closed World Assumption (CWA), will generate new inferences in standard OWL-based reasoning applications. In this paper, we present an alternative IC semantics for OWL that allows applications to work with the CWA and the weak UNA. Ontology modelers can choose which OWL axioms to be interpreted with our IC semantics. Thus application developers are able to combine open world reasoning with closed world constraint validation in a flexible way. We also show that IC validation can be reduced to query answering under certain conditions. Finally, we describe our prototype implementation based on the OWL reasoner Pellet.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603079,R150377,Data,R150384,increased performance,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5531,R5027,Material,R5030,individual scientists,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327336,R68933,Material,R68936,"large, open data","Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,noun phrase
R278,Information Science,R135998,A Hybrid Knowlegde-Based Approach for Recommending Massive Learning Activities,S538493,R136000,keywords,R136003,learning activities,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,noun phrase
R278,Information Science,R135998,A Hybrid Knowlegde-Based Approach for Recommending Massive Learning Activities,S538496,R136000,Personalisation features,R136003,learning activities,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,noun phrase
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538545,R136021,keywords,R135499,learning objects,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5699,R5171,Material,R5175,"LOD Laudromat, SPARQL endpoints","Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R6758,Situational Knowledge Representation for Traffic Observed by a Pavement Vibration Sensor Network,S9387,R6824,employs,R6858,Machine Learning,"Information systems that build on sensor networks often process data produced by measuring physical properties. These data can serve in the acquisition of knowledge for real-world situations that are of interest to information services and, ultimately, to people. Such systems face a common challenge, namely the considerable gap between the data produced by measurement and the abstract terminology used to describe real-world situations. We present and discuss the architecture of a software system that utilizes sensor data, digital signal processing, machine learning, and knowledge representation and reasoning to acquire, represent, and infer knowledge about real-world situations observable by a sensor network. We demonstrate the application of the system to vehicle detection and classification by measurement of road pavement vibration. Thus, real-world situations involve vehicles and information for their type, speed, and driving direction.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5534,R5027,Material,R5033,machine readable data,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495066,R108654,Process,R108671,manuscript peer review,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711748,R186169,Material,R186171,many data sets,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5537,R5027,Data,R5036,meaningful secondary or derived data,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495068,R108654,Method,R108673,metadata import workflow,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S139155,R45056,applicable in,R45061,Modeling Complex Systems,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603077,R150377,Data,R150382,modeling patterns,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5715,R5171,Data,R5191,"My URI"" (WIMU)","Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundered of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R68408,"A Systematic Review of Information Literacy Programs in Higher Education: Effects of Face-to-Face, Online, and Blended Formats on Student Skills and Views",S329177,R68411,Has result,R69345,no preference in relation to format,"Objective – Evidence from systematic reviews a decade ago suggested that face-to-face and online methods to provide information literacy training in universities were equally effective in terms of skills learnt, but there was a lack of robust comparative research. The objectives of this review were (1) to update these findings with the inclusion of more recent primary research; (2) to further enhance the summary of existing evidence by including studies of blended formats (with components of both online and face-to-face teaching) compared to single format education; and (3) to explore student views on the various formats employed. Methods – Authors searched seven databases along with a range of supplementary search methods to identify comparative research studies, dated January 1995 to October 2016, exploring skill outcomes for students enrolled in higher education programs. There were 33 studies included, of which 19 also contained comparative data on student views. Where feasible, meta-analyses were carried out to provide summary estimates of skills development and a thematic analysis was completed to identify student views across the different formats. Results – A large majority of studies (27 of 33; 82%) found no statistically significant difference between formats in skills outcomes for students. Of 13 studies that could be included in a meta-analysis, the standardized mean difference (SMD) between skill test results for face-to-face versus online formats was -0.01 (95% confidence interval -0.28 to 0.26). Of ten studies comparing blended to single delivery format, seven (70%) found no statistically significant difference between formats, and the remaining studies had mixed outcomes. 
From the limited evidence available across all studies, there is a potential dichotomy between outcomes measured via skill test and assignment (course work) which is worthy of further investigation. The thematic analysis of student views found no preference in relation to format on a range of measures in 14 of 19 studies (74%). The remainder identified that students perceived advantages and disadvantages for each format but had no overall preference. Conclusions – There is compelling evidence that information literacy training is effective and well received across a range of delivery formats. Further research looking at blended versus single format methods, and the time implications for each, as well as comparing assignment to skill test outcomes would be valuable. Future studies should adopt a methodologically robust design (such as the randomized controlled trial) with a large student population and validated outcome measures.",TRUE,noun phrase
R278,Information Science,R182018,AUPress: A Comparison of an Open Access University Press with Traditional Presses,S704103,R182019,Result,L475078,no significant difference,"This study is a comparison of AUPress with three other traditional (non-open access) Canadian university presses. The analysis is based on the rankings that are correlated with book sales on Amazon.com and Amazon.ca. Statistical methods include the sampling of the sales ranking of randomly selected books from each press. The results of one-way ANOVA analyses show that there is no significant difference in the ranking of printed books sold by AUPress in comparison with traditional university presses. However, AUPress, can demonstrate a significantly larger readership for its books as evidenced by the number of downloads of the open electronic versions.",TRUE,noun phrase
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351829,R76425,Corpus genres,R77053,non-fiction books,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495069,R108654,Method,R108674,omics data paper structure,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495054,R108654,Material,R108660,omics data paper template,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603081,R150377,Data,R150386,ontology-driven conceptual model patterns,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495063,R108654,Process,R108668,open data publishing,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5822,R5269,Data,R5286,OpenBiodiv encompasses data,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R73156,ORCID: a system to uniquely identify researchers,S338720,R73158,uses identifier system,R73166,ORCID Identifiers,"The Open Researcher & Contributor ID (ORCID) registry presents a unique opportunity to solve the problem of author name ambiguity. At its core the value of the ORCID registry is that it crosses disciplines, organizations, and countries, linking ORCID with both existing identifier schemes as well as publications and other research activities. By supporting linkages across multiple datasets – clinical trials, publications, patents, datasets – such a registry becomes a switchboard for researchers and publishers alike in managing the dissemination of research findings. We describe use cases for embedding ORCID identifiers in manuscript submission workflows, prior work searches, manuscript citations, and repository deposition. We make recommendations for storing and displaying ORCID identifiers in publication metadata to include ORCID identifiers, with CrossRef integration as a specific example. Finally, we provide an overview of ORCID membership and integration tools and resources.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603082,R150377,Data,R150387,pattern properties,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538546,R136021,keywords,R136023,personalized learning environment,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,noun phrase
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351827,R76425,Corpus genres,R77051,popular magazines,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603076,R150377,Process,R150381,Positive effects,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603074,R150377,Process,R150379,process of creating,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603078,R150377,Data,R150383,properties of patterns,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R70878,The MIPS mammalian protein–protein interaction database,S337241,R70879,Domain,L243471,Protein-protein interaction,SUMMARY The MIPS mammalian protein-protein interaction database (MPPI) is a new resource of high-quality experimental protein interaction data in mammals. The content is based on published experimental evidence that has been processed by human expert curators. We provide the full dataset for download and a flexible and powerful web interface for users with various requirements.,TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S494997,R5269,Data,R108642,published articles or monographs,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5809,R5269,Material,R5273,published articles or monographs,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538547,R136021,keywords,R136024,pure cold-start problem,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,noun phrase
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711747,R186169,Material,R186170,RDF data,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun phrase
R278,Information Science,R135998,A Hybrid Knowledge-Based Approach for Recommending Massive Learning Activities,S538491,R136000,keywords,R136001,recommender system,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,noun phrase
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327354,R68933,Method,R68949,Release cycle,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5532,R5027,Material,R5031,Research infrastructures and research communities,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711752,R186169,Material,R186175,scalable RDF data management system,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun phrase
R278,Information Science,R172842,A capability maturity model for scientific data management,S689606,R172844,has research field,L463666,Scientific Data Management,"In this poster, we propose a capability-maturity model (CMM) for scientific data management that includes a set of process areas required for data management, grouped at three levels of organizational capability maturity. The goal is to provide a framework for comparing and improving project and organizational data management practices.",TRUE,noun phrase
R278,Information Science,R38066,Ontology-based exchange of product data semantics,S125164,R38068,hasMappingtoSource,R38070,Semantic Translation,"An increasing trend toward product development in a collaborative environment has resulted in the use of various software tools to enhance the product design. This requires a meaningful representation and exchange of product data semantics across different application domains. This paper proposes an ontology-based framework to enable such semantic interoperability. A standards-based approach is used to develop a Product Semantic Representation Language (PSRL). Formal description logic (DAML+OIL) is used to encode the PSRL. Mathematical logic and corresponding reasoning is used to determine semantic equivalences between an application ontology and the PSRL. The semantic equivalence matrix enables resolution of ambiguities created due to differences in syntaxes and meanings associated with terminologies in different application domains. Successful semantic interoperability will form the basis of seamless communication and thereby enable better integration of product development systems. Note to Practitioners-Semantic interoperability of product information refers to automating the exchange of meaning associated with the data, among information resources throughout the product development. This research is motivated by the problems in enabling such semantic interoperability. First, product information is formalized into an explicit, extensible, and comprehensive product semantics representation language (PSRL). The PSRL is open and based on standard W3C constructs. Next, in order to enable semantic translation, the paper describes a procedure to semi-automatically determine mappings between exactly equivalent concepts across representations of the interacting applications. The paper demonstrates that this approach to translation is feasible, but it has not yet been implemented commercially. Current limitations and the directions for further research are discussed. Future research addresses the determination of semantic similarities (not exact equivalences) between the interacting information resources.",TRUE,noun phrase
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711750,R186169,Material,R186173,Semantic Web community,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun phrase
R278,Information Science,R113025,SHACL4P: SHACL constraints validation within Protégé ontology editor,S512277,R113027,supported data modelling language,R113028,Shapes Constraint Language (SHACL),"Recently, Semantic Web Technologies (SWT) have been introduced and adopted to address the problem of enterprise data integration (e.g., to solve the problem of terms and concepts heterogeneity within large organizations). One of the challenges of adopting SWT for enterprise data integration is to provide the means to define and validate structural constraints over Resource Description Framework (RDF) graphs. This is difficult since RDF graph axioms behave like implications instead of structural constraints. SWT researchers and practitioners have proposed several solutions to address this challenge (e.g., SPIN and Shape Expression). However, to the best of our knowledge, none of them provide an integrated solution within open source ontology editors (e.g., Protégé). We identified this absence of the integrated solution and developed SHACL4P, a Protégé plugin for defining and validating Shapes Constraint Language (SHACL), the upcoming W3C standard for constraint validation within Protégé ontology editor.",TRUE,noun phrase
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711751,R186169,Material,R186174,single node,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,noun phrase
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327338,R68933,Data,R68938,size and data quality,"Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,noun phrase
R278,Information Science,R76438,"Tools for historical corpus research, and a corpus of Latin",S351878,R76440,Application,R77067,Sketch Engine,"We present LatinISE, a Latin corpus for the Sketch Engine. LatinISE consists of Latin works comprising a total of 13 million words, covering the time span from the 2 nd century B. C. to the 21 st century A. D. LatinISE is provided with rich metadata mark-up, including author, title, genre, era, date and century, as well as book, section, paragraph and line of verses. We have automatically annotated LatinISE with lemma and part-of-speech information. The annotation enables the users to search the corpus with a number of criteria, ranging from lemma, part-of-speech, context, to subcorpora defined chronologically or by genre. We also illustrate word sketches, one-page summaries of a word’s corpus-based collocational behaviour. Our future plan is to produce word sketches for Latin words by adding richer morphological and syntactic annotation to the corpus.",TRUE,noun phrase
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S139156,R45056,applicable in,R45062,Smart Grids,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,noun phrase
R278,Information Science,R175056,Attracting new users or business as usual? A case study of converting academic subscription-based journals to open access,S693350,R175064,research_field_investigated,R136187,Social Sciences,"Abstract This paper studies a selection of 11 Norwegian journals in the humanities and social sciences and their conversion from subscription to open access, a move heavily incentivized by governmental mandates and open access policies. By investigating the journals’ visiting logs in the period 2014–2019, the study finds that a conversion to open access induces higher visiting numbers; all journals in the study had a significant increase, which can be attributed to the conversion. Converting a journal had no spillover in terms of increased visits to previously published articles still behind the paywall in the same journals. Visits from previously subscribing Norwegian higher education institutions did not account for the increase in visits, indicating that the increase must be accounted for by visitors from other sectors. The results could be relevant for policymakers concerning the effects of strict policies targeting economically vulnerable national journals, and could further inform journal owners and editors on the effects of converting to open access.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5703,R5171,Material,R5179,SPARQL endpoints,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495056,R108654,Material,R108662,standard-compliant metadata,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5711,R5171,Material,R5187,state-of-the-art real-data benchmarks,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603073,R150377,Process,R150378,structured methods,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5536,R5027,Data,R5035,such primary data,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351831,R76425,Has preprocessing steps,R77055,tagged for part-of-speech,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,noun phrase
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S127907,R39116,Challenges,R39139,Task allocation,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S494996,R5269,Data,R5277,taxonomic treatments,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5813,R5269,Material,R5277,taxonomic treatments,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R145901,"Evaluating the electronic tuberculosis register surveillance system in Eden District, Western Cape, South Africa, 2015",S585083,R145915,Epidemiological surveillance users,R145927,TB nurses,"ABSTRACT Background: Tuberculosis (TB) surveillance data are crucial to the effectiveness of National TB Control Programs. In South Africa, few surveillance system evaluations have been undertaken to provide a rigorous assessment of the platform from which the national and district health systems draws data to inform programs and policies. Objective: Evaluate the attributes of Eden District’s TB surveillance system, Western Cape Province, South Africa. Methods: Data quality, sensitivity and positive predictive value were assessed using secondary data from 40,033 TB cases entered in Eden District’s ETR.Net from 2007 to 2013, and 79 purposively selected TB Blue Cards (TBCs), a medical patient file and source document for data entered into ETR.Net. Simplicity, flexibility, acceptability, stability and usefulness of the ETR.Net were assessed qualitatively through interviews with TB nurses, information health officers, sub-district and district coordinators involved in the TB surveillance. Results: TB surveillance system stakeholders report that Eden District’s ETR.Net system was simple, acceptable, flexible and stable, and achieves its objective of informing TB control program, policies and activities. Data were less complete in the ETR.Net (66–100%) than in the TBCs (76–100%), and concordant for most variables except pre-treatment smear results, antiretroviral therapy (ART) and treatment outcome. The sensitivity of recorded variables in ETR.Net was 98% for gender, 97% for patient category, 93% for ART, 92% for treatment outcome and 90% for pre-treatment smear grading. Conclusions: Our results reveal that the system provides useful information to guide TB control program activities in Eden District. 
However, urgent attention is needed to address gaps in clinical recording on the TBC and data capturing into the ETR.Net system. We recommend continuous training and support of TB personnel involved with TB care, management and surveillance on TB data recording into the TBCs and ETR.Net as well as the implementation of a well-structured quality control and assurance system.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5819,R5269,Material,R5283,the biodiversity domain,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5700,R5171,Material,R5176,the hundreds of thousands of RDF datasets,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5821,R5269,method,R5285,the OpenBiodiv-O ontology,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5529,R5027,Material,R5028,the sciences,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5705,R5171,Material,R5181,the SPARQL query language,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5698,R5171,Material,R5174,the Web of Data,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R5020,Curating Scientific Information in Knowledge Infrastructures,S5530,R5027,Material,R5029,their research groups,"Interpreting observational data is a fundamental task in the sciences, specifically in earth and environmental science where observational data are increasingly acquired, curated, and published systematically by environmental research infrastructures. Typically subject to substantial processing, observational data are used by research communities, their research groups and individual scientists, who interpret such primary data for their meaning in the context of research investigations. The result of interpretation is information – meaningful secondary or derived data – about the observed environment. Research infrastructures and research communities are thus essential to evolving uninterpreted observational data to information. In digital form, the classical bearer of information are the commonly known “(elaborated) data products,” for instance maps. In such form, meaning is generally implicit e.g., in map colour coding, and thus largely inaccessible to machines. The systematic acquisition, curation, possible publishing and further processing of information gained in observational data interpretation – as machine readable data and their machine-readable meaning – is not common practice among environmental research infrastructures. For a use case in aerosol science, we elucidate these problems and present a Jupyter based prototype infrastructure that exploits a machine learning approach to interpretation and could support a research community in interpreting observational data and, more importantly, in curating and further using resulting information about a studied natural phenomenon.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603087,R150377,Material,R150392,theoretical framework,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5710,R5171,Material,R5186,These data sources,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5701,R5171,Material,R5177,These datasets,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laudromat, SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laudromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,noun phrase
R278,Information Science,R145318,"Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE): Overview, Components, and Public Health Applications",S581709,R145327,Statistical analysis techniques,R70373,Time series,"Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. 
These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement–driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.",TRUE,noun phrase
R278,Information Science,R38841,A Survey of Scholarly Data Visualization,S331938,R69808,has elements,R69813,Visualization tools,"Scholarly information usually contains millions of raw data, such as authors, papers, citations, as well as scholarly networks. With the rapid growth of the digital publishing and harvesting, how to visually present the data efficiently becomes challenging. Nowadays, various visualization techniques can be easily applied on scholarly data visualization and visual analysis, which enables scientists to have a better way to represent the structure of scholarly data sets and reveal hidden patterns in the data. In this paper, we first introduce the basic concepts and the collection of scholarly data. Then, we provide a comprehensive overview of related data visualization tools, existing techniques, as well as systems for the analyzing volumes of diverse scholarly data. Finally, open issues are discussed to pursue new solutions for abundant and complicated scholarly data visualization, as well as techniques, that support a multitude of facets.",TRUE,noun phrase
R278,Information Science,R109860,Applying weighted PageRank to author citation networks,S501287,R109862,Bibliographic data source,L362490,Web of Science,"This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956–2008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. © 2011 Wiley Periodicals, Inc.",TRUE,noun phrase
R278,Information Science,R145318,"Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE): Overview, Components, and Public Health Applications",S581705,R145327,Epidemiological surveillance software,R145328,Web-based tool,"Background The Electronic Surveillance System for the Early Notification of Community-Based Epidemics (ESSENCE) is a secure web-based tool that enables health care practitioners to monitor health indicators of public health importance for the detection and tracking of disease outbreaks, consequences of severe weather, and other events of concern. The ESSENCE concept began in an internally funded project at the Johns Hopkins University Applied Physics Laboratory, advanced with funding from the State of Maryland, and broadened in 1999 as a collaboration with the Walter Reed Army Institute for Research. Versions of the system have been further developed by Johns Hopkins University Applied Physics Laboratory in multiple military and civilian programs for the timely detection and tracking of health threats. Objective This study aims to describe the components and development of a biosurveillance system increasingly coordinating all-hazards health surveillance and infectious disease monitoring among large and small health departments, to list the key features and lessons learned in the growth of this system, and to describe the range of initiatives and accomplishments of local epidemiologists using it. Methods The features of ESSENCE include spatial and temporal statistical alerting, custom querying, user-defined alert notifications, geographical mapping, remote data capture, and event communications. To expedite visualization, configurable and interactive modes of data stratification and filtering, graphical and tabular customization, user preference management, and sharing features allow users to query data and view geographic representations, time series and data details pages, and reports. 
These features allow ESSENCE users to gather and organize the resulting wealth of information into a coherent view of population health status and communicate findings among users. Results The resulting broad utility, applicability, and adaptability of this system led to the adoption of ESSENCE by the Centers for Disease Control and Prevention, numerous state and local health departments, and the Department of Defense, both nationally and globally. The open-source version of Suite for Automated Global Electronic bioSurveillance is available for global, resource-limited settings. Resourceful users of the US National Syndromic Surveillance Program ESSENCE have applied it to the surveillance of infectious diseases, severe weather and natural disaster events, mass gatherings, chronic diseases and mental health, and injury and substance abuse. Conclusions With emerging high-consequence communicable diseases and other health conditions, the continued user requirement–driven enhancements of ESSENCE demonstrate an adaptable disease surveillance capability focused on the everyday needs of public health. The challenge of a live system for widely distributed users with multiple different data sources and high throughput requirements has driven a novel, evolving architecture design.",TRUE,noun phrase
R278,Information Science,R109860,Applying weighted PageRank to author citation networks,S501290,R109862,Social network analysis,L362492,Weighted PageRank,"This article aims to identify whether different weighted PageRank algorithms can be applied to author citation networks to measure the popularity and prestige of a scholar from a citation perspective. Information retrieval (IR) was selected as a test field and data from 1956–2008 were collected from Web of Science. Weighted PageRank with citation and publication as weighted vectors were calculated on author citation networks. The results indicate that both popularity rank and prestige rank were highly correlated with the weighted PageRank. Principal component analysis was conducted to detect relationships among these different measures. For capturing prize winners within the IR field, prestige rank outperformed all the other measures. © 2011 Wiley Periodicals, Inc.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603083,R150377,Data,R150388,well-understood properties,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603086,R150377,Material,R150391,writing and reading conceptual models,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559075,R140072,has subject domain,R140078,“new” economy,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R161681,Link Prediction of Weighted Triples for Knowledge Graph Completion Within the Scholarly Domain,S645607,R161682,Material,R161683,AIDA knowledge graph,"Knowledge graphs (KGs) are widely used for modeling scholarly communication, performing scientometric analyses, and supporting a variety of intelligent services to explore the literature and predict research dynamics. However, they often suffer from incompleteness (e.g., missing affiliations, references, research topics), leading to a reduced scope and quality of the resulting analyses. This issue is usually tackled by computing knowledge graph embeddings (KGEs) and applying link prediction techniques. However, only a few KGE models are capable of taking weights of facts in the knowledge graph into account. Such weights can have different meanings, e.g. describe the degree of association or the degree of truth of a certain triple. In this paper, we propose the Weighted Triple Loss, a new loss function for KGE models that takes full advantage of the additional numerical weights on facts and it is even tolerant to incorrect weights. We also extend the Rule Loss, a loss function that is able to exploit a set of logical rules, in order to work with weighted triples. The evaluation of our solutions on several knowledge graphs indicates significant performance improvements with respect to the state of the art. Our main use case is the large-scale AIDA knowledge graph, which describes 21 million research articles. Our approach enables to complete information about affiliation types, countries, and research topics, greatly improving the scope of the resulting scientometrics analyses and providing better support to systems for monitoring and predicting research dynamics.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559070,R140072,Has finding,R140073,appeal of Silicon Valley,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140043,Unleashing innovation through internal hackathons,S559028,R140045,Data,R140049,benefits (both expected and unexpected),"Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R166504,bioNerDS: exploring bioinformatics’ database and software use through literature mining,S663230,R166505,Entity types,R166526,Biology-focused databases and software,"Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics’s emphasis on new tools and Genome Biology’s greater emphasis on data analysis. The data also illustrates some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. 
Conclusions We demonstrate the feasibility of automatically identifying resource names on a large-scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R166504,bioNerDS: exploring bioinformatics’ database and software use through literature mining,S663235,R166505,data source,R166529,BMC Bioinformatics,"Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics’s emphasis on new tools and Genome Biology’s greater emphasis on data analysis. The data also illustrates some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. 
Conclusions We demonstrate the feasibility of automatically identifying resource names on a large-scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559108,R140092,Has method,R140097,case study,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559115,R140092,has subject domain,R140104,challenges posed by the COVID-19 pandemic,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140043,Unleashing innovation through internal hackathons,S559025,R140045,Has participants,R140047,employees of the organization,"Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559071,R140072,Has method,R140074,Ethnographic observations,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140043,Unleashing innovation through internal hackathons,S559024,R140045,Has participants,R140046,external developers,"Hackathons have become an increasingly popular approach for organizations to both test their new products and services as well as to generate new ideas. Most events either focus on attracting external developers or requesting employees of the organization to focus on a specific problem. In this paper we describe extensions to this paradigm that open up the event to internal employees and preserve the open-ended nature of the hackathon itself. In this paper we describe our initial motivation and objectives for conducting an internal hackathon, our experience in pioneering an internal hackathon at AT&T including specific things we did to make the internal hackathon successful. We conclude with the benefits (both expected and unexpected) we achieved from the internal hackathon approach, and recommendations for continuing the use of this valuable tool within AT&T.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559111,R140092,Material,R140100,frontline clinicians and researchers,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R166504,bioNerDS: exploring bioinformatics’ database and software use through literature mining,S663236,R166505,data source,R166523,Genome Biology,"Abstract Background Biology-focused databases and software define bioinformatics and their use is central to computational biology. In such a complex and dynamic field, it is of interest to understand what resources are available, which are used, how much they are used, and for what they are used. While scholarly literature surveys can provide some insights, large-scale computer-based approaches to identify mentions of bioinformatics databases and software from primary literature would automate systematic cataloguing, facilitate the monitoring of usage, and provide the foundations for the recovery of computational methods for analysing biological data, with the long-term aim of identifying best/common practice in different areas of biology. Results We have developed bioNerDS, a named entity recogniser for the recovery of bioinformatics databases and software from primary literature. We identify such entities with an F-measure ranging from 63% to 91% at the mention level and 63-78% at the document level, depending on corpus. Not attaining a higher F-measure is mostly due to high ambiguity in resource naming, which is compounded by the on-going introduction of new resources. To demonstrate the software, we applied bioNerDS to full-text articles from BMC Bioinformatics and Genome Biology. General mention patterns reflect the remit of these journals, highlighting BMC Bioinformatics’s emphasis on new tools and Genome Biology’s greater emphasis on data analysis. The data also illustrates some shifts in resource usage: for example, the past decade has seen R and the Gene Ontology join BLAST and GenBank as the main components in bioinformatics processing. 
Conclusions We demonstrate the feasibility of automatically identifying resource names on a large-scale from the scientific literature and show that the generated data can be used for exploration of bioinformatics database and software usage. For example, our results help to investigate the rate of change in resource usage and corroborate the suspicion that a vast majority of resources are created, but rarely (if ever) used thereafter. bioNerDS is available at http://bionerds.sourceforge.net/.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559072,R140072,Has method,R140075,informal interviews,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R161681,Link Prediction of Weighted Triples for Knowledge Graph Completion Within the Scholarly Domain,S645608,R161682,Method,R161684,knowledge graph embeddings,"Knowledge graphs (KGs) are widely used for modeling scholarly communication, performing scientometric analyses, and supporting a variety of intelligent services to explore the literature and predict research dynamics. However, they often suffer from incompleteness (e.g., missing affiliations, references, research topics), leading to a reduced scope and quality of the resulting analyses. This issue is usually tackled by computing knowledge graph embeddings (KGEs) and applying link prediction techniques. However, only a few KGE models are capable of taking weights of facts in the knowledge graph into account. Such weights can have different meanings, e.g. describe the degree of association or the degree of truth of a certain triple. In this paper, we propose the Weighted Triple Loss, a new loss function for KGE models that takes full advantage of the additional numerical weights on facts and it is even tolerant to incorrect weights. We also extend the Rule Loss, a loss function that is able to exploit a set of logical rules, in order to work with weighted triples. The evaluation of our solutions on several knowledge graphs indicates significant performance improvements with respect to the state of the art. Our main use case is the large-scale AIDA knowledge graph, which describes 21 million research articles. Our approach enables to complete information about affiliation types, countries, and research topics, greatly improving the scope of the resulting scientometrics analyses and providing better support to systems for monitoring and predicting research dynamics.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559056,R140061,Material,R140065,new applications,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.
",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559110,R140092,Has method,R140099,online survey,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559055,R140061,Data,R140064,open data,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.
",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559109,R140092,Has method,R140098,remote online health hackathon,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. 
Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140070,Hackathons as Co-optation Ritual: Socializing Workers and Institutionalizing Innovation in the “New” Economy,S559073,R140072,has Data Source,R140076,seven hackathons,"Abstract Hackathons, time-bounded events where participants write computer code and build apps, have become a popular means of socializing tech students and workers to produce “innovation” despite little promise of material reward. Although they offer participants opportunities for learning new skills and face-to-face networking and set up interaction rituals that create an emotional “high,” potential advantage is even greater for the events’ corporate sponsors, who use them to outsource work, crowdsource innovation, and enhance their reputation. Ethnographic observations and informal interviews at seven hackathons held in New York during the course of a single school year show how the format of the event and sponsors’ discursive tropes, within a dominant cultural frame reflecting the appeal of Silicon Valley, reshape unpaid and precarious work as an extraordinary opportunity, a ritual of ecstatic labor, and a collective imaginary for fictional expectations of innovation that benefits all, a powerful strategy for manufacturing workers’ consent in the “new” economy.",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559058,R140061,Method,R140067,three-phased literature review methodology,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.
",TRUE,noun phrase
R137681,"Information Systems, Process and Knowledge Management",R140106,Smart Cities in Europe,S559127,R140108,has dataset,R140109,Urban Audit data set ,"Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the “smart city” has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the “smart city.” We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.",TRUE,noun phrase
R128,Inorganic Chemistry,R111023,Access to divalent lanthanide NHC complexes by redox-transmetallation from silver and CO2 insertion reactions,S505553,R111028,Metal used,L365013,"Eu, Yb","Through a redox-transmetallation procedure, divalent NHC-LnII (NHC = N-heterocyclic carbene; Ln = Eu, Yb) complexes were obtained from the corresponding NHC-AgI. The lability of the NHC-LnII bond was investigated and treatment with CO2 led to insertion reactions without oxidation of the metal centre. The EuII complex [EuI2(IMes)(THF)3] (IMes = 1,3-dimesitylimidazol-2-ylidene) exhibits photoluminescence with a quantum yield reaching 53%.",TRUE,noun phrase
R128,Inorganic Chemistry,R110913,Multinuclear Lanthanide-Implanted Tetrameric Dawson-Type Phosphotungstates with Switchable Luminescence Behaviors Induced by Fast Photochromism,S505147,R110916,Application ,L364782,Luminescence behavior,"A series of benzoate-decorated lanthanide (Ln)-containing tetrameric Dawson-type phosphotungstates [N(CH3)4]6H20[{(P2W17O61)Ln(H2O)3Ln(C6H5COO)(H2O)6]}{[(P2W17O61)Ln(H2O)3}]2Cl2·98H2O [Ln = Sm (1), Eu (2), and Gd (3)] were made using a facile one-step assembly strategy and characterized by several techniques. Notably, the Ln-containing tetrameric Dawson-type polyoxoanions [{(P2W17O61)Ln(H2O)3Ln(C6H5COO)(H2O)6]}{[(P2W17O61)Ln(H2O)3}]224- are all established by four monolacunary Dawson-type [P2W17O61]10- segments, encapsulating a Ln3+ ion with two benzoates coordinating to the Ln3+ ions. 1-3 exhibit reversible photochromism, which can change from intrinsic white to blue for 6 min upon UV irradiation, and their colors gradually recover for 30 h in the dark. The solid-state photoluminescence spectra of 1 and 2 display characteristic emissions of Ln components based on 4f-4f transitions. Time-resolved emission spectra of 1 and 2 were also measured to authenticate the energy transfer from the phosphotungstate and organic chromophores to Eu3+. In particular, 1 shows an effectively switchable luminescence behavior induced by its fast photochromism.",TRUE,noun phrase
R128,Inorganic Chemistry,R160686,Modified TMAH based etchant for improved etching characteristics on Si{1 0 0} wafer,S640846,R160688,Primary etching solution,R160680,tetramethylammonium hydroxide,"Wet bulk micromachining is a popular technique for the fabrication of microstructures in research labs as well as in industry. However, increasing the throughput still remains an active area of research, and can be done by increasing the etching rate. Moreover, the release time of a freestanding structure can be reduced if the undercutting rate at convex corners can be improved. In this paper, we investigate a non-conventional etchant in the form of NH2OH added in 5 wt% tetramethylammonium hydroxide (TMAH) to determine its etching characteristics. Our analysis is focused on a Si{1 0 0} wafer as this is the most widely used in the fabrication of planer devices (e.g. complementary metal oxide semiconductors) and microelectromechanical systems (e.g. inertial sensors). We perform a systematic and parametric analysis with concentrations of NH2OH varying from 5% to 20% in step of 5%, all in 5 wt% TMAH, to obtain the optimum concentration for achieving improved etching characteristics including higher etch rate, undercutting at convex corners, and smooth etched surface morphology. Average surface roughness (R a), etch depth, and undercutting length are measured using a 3D scanning laser microscope. Surface morphology of the etched Si{1 0 0} surface is examined using a scanning electron microscope. Our investigation has revealed a two-fold increment in the etch rate of a {1 0 0} surface with the addition of NH2OH in the TMAH solution. Additionally, the incorporation of NH2OH significantly improves the etched surface morphology and the undercutting at convex corners, which is highly desirable for the quick release of microstructures from the substrate. 
The results presented in this paper are extremely useful for engineering applications and will open a new direction of research for scientists in both academic and industrial laboratories.",TRUE,noun phrase
R310,Labor Economics,R77070,The Impact of the Coronavirus Lockdown on Mental Health: Evidence from the US,S352206,R77073,Indicator for well-being,R44677,Mental health,"The coronavirus outbreak has caused significant disruptions to people’s lives. We document the impact of state-wide stay-at-home orders on mental health using real time survey data in the US. The lockdown measures lowered mental health by 0.085 standard deviations. This large negative effect is entirely driven by women. As a result of the lockdown measures, the existing gender gap in mental health has increased by 66%. The negative effect on women’s mental health cannot be explained by an increase in financial worries or childcare responsibilities.",TRUE,noun phrase
R78413,Learner-Interface Interaction,R107663,Dimensions of transactional distance in the world wide web learning environment: a factor analysis,S489960,R107665,Has approach,R107666,Moore's Theory of Transactional Distance,"Moore's Theory of Transactional Distance hypothesizes that distance is a pedagogical, not geographic phenomenon. It is a distance of understandings and perceptions that might lead to a communication gap or a psychological space of potential misunderstandings between people. Moore also suggests that this distance has to be overcome if effective, deliberate, planned learning is to occur. However, the conceptualizations of transactional distance in a telecommunication era have not been systematically addressed. Investigating 71 learners' experiences with WorldWide Web, this study examined the postulate of Moore's theory and identified the dimensions (factors) constituting transactional distance in such learning environment. Exploratory factor analysis using a principal axis factor method was carried out. It was concluded that this concept represented multifaceted ideas. Transactional distance consisted of four dimensions-instructor-learner, learner-learner, learner-content, and learner-interface transactional distance. The results inform researchers and practitioners of Web-based instruction concerning the factors of transactional distance that need to be taken into account and overcome in WWW courses.",TRUE,noun phrase
R12,Life Sciences,R135895,A Semi-Automated Workflow for FAIR Maturity Indicators in the Life Sciences,S538028,R135909,method,L379147,FAIR balloon plot,"Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (API) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.",TRUE,noun phrase
R12,Life Sciences,R135895,A Semi-Automated Workflow for FAIR Maturity Indicators in the Life Sciences,S537888,R135909,Domain,L379028,Life Sciences,"Data sharing and reuse are crucial to enhance scientific progress and maximize return of investments in science. Although attitudes are increasingly favorable, data reuse remains difficult due to lack of infrastructures, standards, and policies. The FAIR (findable, accessible, interoperable, reusable) principles aim to provide recommendations to increase data reuse. Because of the broad interpretation of the FAIR principles, maturity indicators are necessary to determine the FAIRness of a dataset. In this work, we propose a reproducible computational workflow to assess data FAIRness in the life sciences. Our implementation follows principles and guidelines recommended by the maturity indicator authoring group and integrates concepts from the literature. In addition, we propose a FAIR balloon plot to summarize and compare dataset FAIRness. We evaluated the feasibility of our method on three real use cases where researchers looked for six datasets to answer their scientific questions. We retrieved information from repositories (ArrayExpress, Gene Expression Omnibus, eNanoMapper, caNanoLab, NanoCommons and ChEMBL), a registry of repositories, and a searchable resource (Google Dataset Search) via application program interfaces (API) wherever possible. With our analysis, we found that the six datasets met the majority of the criteria defined by the maturity indicators, and we showed areas where improvements can easily be reached. We suggest that use of standard schema for metadata and the presence of specific attributes in registries of repositories could increase FAIRness of datasets.",TRUE,noun phrase
R112125,Machine Learning,R144923,A Fast and Accurate Dependency Parser using Neural Networks,S580034,R144925,uses,R144926,a small number of dense features,"Almost all current dependency parsers classify based on millions of sparse indicator features. Not only do these features generalize poorly, but the cost of feature computation restricts parsing speed significantly. In this work, we propose a novel way of learning a neural network classifier for use in a greedy, transition-based dependency parser. Because this classifier learns and uses just a small number of dense features, it can work very fast, while achieving an about 2% improvement in unlabeled and labeled attachment scores on both English and Chinese datasets. Concretely, our parser is able to parse more than 1000 sentences per second at 92.2% unlabeled attachment score on the English Penn Treebank.",TRUE,noun phrase
R112125,Machine Learning,R140156,OWL2Vec*: Embedding of OWL Ontologies,S560005,R140158,Has evaluation task,R140310,Class subsumption prediction,"Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named OWL2Vec*, which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that OWL2Vec* benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, OWL2Vec* often significantly outperforms the state-of-the-art methods in our experiments.",TRUE,noun phrase
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635025,R159430,keywords,R159439,Differential Evolution,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,noun phrase
R112125,Machine Learning,R140245,Onto2vec: joint vector-based representation of biological entities and their ontology-based annotations,S559975,R140247,Uses dataset,R140298,Gene Ontology (GO),"Motivation Biological knowledge is widely represented in the form of ontology‐based annotations: ontologies describe the phenomena assumed to exist within a domain, and the annotations associate a (kind of) biological entity with a set of phenomena within the domain. The structure and information contained in ontologies and their annotations make them valuable for developing machine learning, data analysis and knowledge extraction algorithms; notably, semantic similarity is widely used to identify relations between biological entities, and ontology‐based annotations are frequently used as features in machine learning applications. Results We propose the Onto2Vec method, an approach to learn feature vectors for biological entities based on their annotations to biomedical ontologies. Our method can be applied to a wide range of bioinformatics research problems such as similarity‐based prediction of interactions between proteins, classification of interaction types using supervised learning, or clustering. To evaluate Onto2Vec, we use the gene ontology (GO) and jointly produce dense vector representations of proteins, the GO classes to which they are annotated, and the axioms in GO that constrain these classes. First, we demonstrate that Onto2Vec‐generated feature vectors can significantly improve prediction of protein‐protein interactions in human and yeast. We then illustrate how Onto2Vec representations provide the means for constructing data‐driven, trainable semantic similarity measures that can be used to identify particular relations between proteins. Finally, we use an unsupervised clustering approach to identify protein families based on their Enzyme Commission numbers. Our results demonstrate that Onto2Vec can generate high quality feature vectors from biological entities and ontologies. Onto2Vec has the potential to significantly outperform the state‐of‐the‐art in several predictive applications in which ontologies are involved. Availability and implementation https://github.com/bio‐ontology‐research‐group/onto2vec",TRUE,noun phrase
R112125,Machine Learning,R140171,On2vec: Embedding-based relation prediction for ontology population,S560072,R140173,Has method ,R140315,Hierarchy Model,"Populating ontology graphs represents a long-standing problem for the Semantic Web community. Recent advances in translation-based graph embedding methods for populating instance-level knowledge graphs lead to promising new approaching for the ontology population problem. However, unlike instance-level graphs, the majority of relation facts in ontology graphs come with comprehensive semantic relations, which often include the properties of transitivity and symmetry, as well as hierarchical relations. These comprehensive relations are often too complex for existing graph embedding methods, and direct application of such methods is not feasible. Hence, we propose On2Vec, a novel translation-based graph embedding method for ontology population. On2Vec integrates two model components that effectively characterize comprehensive relation facts in ontology graphs. The first is the Component-specific Model that encodes concepts and relations into low-dimensional embedding spaces without a loss of relational properties; the second is the Hierarchy Model that performs focused learning of hierarchical relation facts. Experiments on several well-known ontology graphs demonstrate the promising capabilities of On2Vec in predicting and verifying new relation facts. These promising results also make possible significant improvements in related methods.",TRUE,noun phrase
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635023,R159430,keywords,R159437,hyperparameter optimization,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,noun phrase
R112125,Machine Learning,R157417,Autoformer: Searching transformers for visual recognition,S631123,R157419,keywords,R157438,Image Classification,"Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.",TRUE,noun phrase
R112125,Machine Learning,R140135,node2vec: Scalable Feature Learning for Networks,S559566,R140137,Has evaluation task,R140222,Link prediction,"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",TRUE,noun phrase
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635018,R159430,keywords,R159432,Machine Learning,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,noun phrase
R112125,Machine Learning,R140135,node2vec: Scalable Feature Learning for Networks,S559565,R140137,Has evaluation task,R140220,multi-label classification,"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",TRUE,noun phrase
R112125,Machine Learning,R140183,Bio-joie: Joint representation learning of biological knowledge bases,S560046,R140185,Has evaluation task,R140294,PPI type prediction,"The widespread of Coronavirus has led to a worldwide pandemic with a high mortality rate. Currently, the knowledge accumulated from different studies about this virus is very limited. Leveraging a wide-range of biological knowledge, such as gene on-tology and protein-protein interaction (PPI) networks from other closely related species presents a vital approach to infer the molecular impact of a new species. In this paper, we propose the transferred multi-relational embedding model Bio-JOIE to capture the knowledge of gene ontology and PPI networks, which demonstrates superb capability in modeling the SARS-CoV-2-human protein interactions. Bio-JOIE jointly trains two model components. The knowledge model encodes the relational facts from the protein and GO domains into separated embedding spaces, using a hierarchy-aware encoding technique employed for the GO terms. On top of that, the transfer model learns a non-linear transformation to transfer the knowledge of PPIs and gene ontology annotations across their embedding spaces. By leveraging only structured knowledge, Bio-JOIE significantly outperforms existing state-of-the-art methods in PPI type prediction on multiple species. Furthermore, we also demonstrate the potential of leveraging the learned representations on clustering proteins with enzymatic function into enzyme commission families. Finally, we show that Bio-JOIE can accurately identify PPIs between the SARS-CoV-2 proteins and human proteins, providing valuable insights for advancing research on this new disease.",TRUE,noun phrase
R112125,Machine Learning,R144933,Generating Typed Dependency Parses from Phrase Structure Parses,S584992,R144935,Other resources,R144867,the Stanford parser,"This paper describes a system for extracting typed dependency parses of English sentences from phrase structure parses. In order to capture inherent relations occurring in corpus texts that can be critical in real-world applications, many NP relations are included in the set of grammatical relations used. We provide a comparison of our system with Minipar and the Link parser. The typed dependency extraction facility described here is integrated in the Stanford Parser, available for download.",TRUE,noun phrase
R112125,Machine Learning,R157417,Autoformer: Searching transformers for visual recognition,S631116,R157419,keywords,R157433,Vision Transformer,"Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.",TRUE,noun phrase
R112125,Machine Learning,R157417,Autoformer: Searching transformers for visual recognition,S631117,R157419,keywords,R157434,One-Shot,"Recently, pure transformer-based models have shown great potentials for vision tasks such as image classification and detection. However, the design of transformer networks is challenging. It has been observed that the depth, embedding dimension, and number of heads can largely affect the performance of vision transformers. Previous models configure these dimensions based upon manual crafting. In this work, we propose a new one-shot architecture search framework, namely AutoFormer, dedicated to vision transformer search. AutoFormer entangles the weights of different blocks in the same layers during supernet training. Benefiting from the strategy, the trained supernet allows thousands of subnets to be very well-trained. Specifically, the performance of these subnets with weights inherited from the supernet is comparable to those retrained from scratch. Besides, the searched models, which we refer to AutoFormers, surpass the recent state-of-the-arts such as ViT and DeiT. In particular, AutoFormer-tiny/small/base achieve 74.7%/81.7%/82.4% top-1 accuracy on ImageNet with 5.7M/22.9M/53.7M parameters, respectively. Lastly, we verify the transferability of AutoFormer by providing the performance on downstream benchmarks and distillation experiments. Code and models are available at https://github.com/microsoft/Cream.",TRUE,noun phrase
R112125,Machine Learning,R140132,DeepWalk: online learning of social representations,S559554,R140134,Has method ,R135534,Random walk,"We present DeepWalk, a novel approach for learning latent representations of vertices in a network. These latent representations encode social relations in a continuous vector space, which is easily exploited by statistical models. DeepWalk generalizes recent advancements in language modeling and unsupervised feature learning (or deep learning) from sequences of words to graphs. DeepWalk uses local information obtained from truncated random walks to learn latent representations by treating walks as the equivalent of sentences. We demonstrate DeepWalk's latent representations on several multi-label network classification tasks for social networks such as BlogCatalog, Flickr, and YouTube. Our results show that DeepWalk outperforms challenging baselines which are allowed a global view of the network, especially in the presence of missing information. DeepWalk's representations can provide F1 scores up to 10% higher than competing methods when labeled data is sparse. In some experiments, DeepWalk's representations are able to outperform all baseline methods while using 60% less training data. DeepWalk is also scalable. It is an online learning algorithm which builds useful incremental results, and is trivially parallelizable. These qualities make it suitable for a broad class of real world applications such as network classification, and anomaly detection.",TRUE,noun phrase
R126,Materials Chemistry,R142138,Mapping Intracellular Temperature Using Green Fluorescent Protein,S571058,R142140,Readout,R142142,Polarization anisotropy,"Heat is of fundamental importance in many cellular processes such as cell metabolism, cell division and gene expression. (1-3) Accurate and noninvasive monitoring of temperature changes in individual cells could thus help clarify intricate cellular processes and develop new applications in biology and medicine. Here we report the use of green fluorescent proteins (GFP) as thermal nanoprobes suited for intracellular temperature mapping. Temperature probing is achieved by monitoring the fluorescence polarization anisotropy of GFP. The method is tested on GFP-transfected HeLa and U-87 MG cancer cell lines where we monitored the heat delivery by photothermal heating of gold nanorods surrounding the cells. A spatial resolution of 300 nm and a temperature accuracy of about 0.4 °C are achieved. Benefiting from its full compatibility with widely used GFP-transfected cells, this approach provides a noninvasive tool for fundamental and applied research in areas ranging from molecular biology to therapeutic and diagnostic studies.",TRUE,noun phrase
R126,Materials Chemistry,R142153,CdSe Quantum Dots for Two-Photon Fluorescence Thermal Imaging,S571116,R142155,Material,R142156,Quantum dots,"The technological development of quantum dots has ushered in a new era in fluorescence bioimaging, which was propelled with the advent of novel multiphoton fluorescence microscopes. Here, the potential use of CdSe quantum dots has been evaluated as fluorescent nanothermometers for two-photon fluorescence microscopy. In addition to the enhancement in spatial resolution inherent to any multiphoton excitation processes, two-photon (near-infrared) excitation leads to a temperature sensitivity of the emission intensity much higher than that achieved under one-photon (visible) excitation. The peak emission wavelength is also temperature sensitive, providing an additional approach for thermal imaging, which is particularly interesting for systems where nanoparticles are not homogeneously dispersed. On the basis of these superior thermal sensitivity properties of the two-photon excited fluorescence, we have demonstrated the ability of CdSe quantum dots to image a temperature gradient artificially created in a biocompatible fluid (phosphate-buffered saline) and also their ability to measure an intracellular temperature increase externally induced in a single living cell.",TRUE,noun phrase
R136138,Medical Informatics and Medical Bioinformatics,R148112,"2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text",S593903,R148114,Data coverage,R148117,Clinical records,"The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552008,R138924,has cell line,R73070,A549 cells,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S505746,R110244,keywords,R111070,anti-cancer drugs,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138621,Targeted Delivery of Insoluble Cargo (Paclitaxel) by PEGylated Chitosan Nanoparticles Grafted with Arg-Gly-Asp (RGD),S550824,R138623,keywords,R138630,Arg-Gly-Asp (RGD),"Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138621,Targeted Delivery of Insoluble Cargo (Paclitaxel) by PEGylated Chitosan Nanoparticles Grafted with Arg-Gly-Asp (RGD),S550823,R138623,Polymer,R138630,Arg-Gly-Asp (RGD),"Poor delivery of insoluble anticancer drugs has so far precluded their clinical application. In this study, we developed a tumor-targeting delivery system for insoluble drug (paclitaxel, PTX) by PEGylated O-carboxymethyl-chitosan (CMC) nanoparticles grafted with cyclic Arg-Gly-Asp (RGD) peptide. To improve the loading efficiency (LE), we combined O/W/O double emulsion method with temperature-programmed solidification technique and controlled PTX within the matrix network as in situ nanocrystallite form. Furthermore, these CMC nanoparticles were PEGylated, which could reduce recognition by the reticuloendothelial system (RES) and prolong the circulation time in blood. In addition, further graft of cyclic RGD peptide at the terminal of PEG chain endowed these nanoparticles with higher affinity to in vitro Lewis lung carcinoma (LLC) cells and in vivo tumor tissue. These outstanding properties enabled as-designed nanodevice to exhibit a greater tumor growth inhibition effect and much lower side effects over the commercial formulation Taxol.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S505753,R110244,keywords,R111067,Breast cancer,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505743,R110815,keywords,R111067,Breast cancer,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544440,R137524,keywords,R137531,Cellular uptake,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel.Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R149143,Ferumoxytol for treatment of iron deficiency anemia in patients with chronic kidney disease,S597619,R149145,Indication,R149161,Chronic kidney disease,"Background: Iron deficiency anemia (IDA) is a common problem in patients with chronic kidney disease (CKD). Use of intravenous (i.v.) iron effectively treats the resultant anemia, but available iron products have side effects or dosing regimens that limit safety and convenience. Objective: Ferumoxytol (Feraheme™) is a new i.v. iron product recently approved for use in treatment of IDA in CKD patients. This article reviews the structure, pharmacokinetics, and clinical trial results on ferumoxytol. The author also offers his opinions on the role of this product in clinical practice. Methods: This review encompasses important information contained in clinical and preclinical studies of ferumoxytol and is supplemented with information from the US Food and Drug Administration. Results/conclusion: Ferumoxytol offers substantial safety and superior efficacy compared with oral iron therapy. As ferumoxytol can be administered as 510 mg in < 1 min, it is substantially more convenient than other iron products in nondialysis patients. Although further experience with this product is needed in patients at higher risk of drug reactions, ferumoxytol is likely to be highly useful in the hospital and outpatient settings for treatment of IDA.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R141417,"Multiplex Paper-Based Colorimetric DNA Sensor Using Pyrrolidinyl Peptide Nucleic Acid-Induced AgNPs Aggregation for Detecting MERS-CoV, MTB, and HPV Oligonucleotides",S566063,R141418,Mechanism of Antiviral Action,L397315,color change,"The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R141415,Development of Label-Free Colorimetric Assay for MERS-CoV Using Gold Nanoparticles,S566045,R141416,Mechanism of Antiviral Action,L397300,Color changes of AuNPs,"Worldwide outbreaks of infectious diseases necessitate the development of rapid and accurate diagnostic methods. Colorimetric assays are a representative tool to simply identify the target molecules in specimens through color changes of an indicator (e.g., nanosized metallic particle, and dye molecules). The detection method is used to confirm the presence of biomarkers visually and measure absorbance of the colored compounds at a specific wavelength. In this study, we propose a colorimetric assay based on an extended form of double-stranded DNA (dsDNA) self-assembly shielded gold nanoparticles (AuNPs) under positive electrolyte (e.g., 0.1 M MgCl2) for detection of Middle East respiratory syndrome coronavirus (MERS-CoV). This platform is able to verify the existence of viral molecules through a localized surface plasmon resonance (LSPR) shift and color changes of AuNPs in the UV–vis wavelength range. We designed a pair of thiol-modified probes at either the 5′ end or 3′ end to organize complementary base pairs with upstream of the E protein gene (upE) and open reading frames (ORF) 1a on MERS-CoV. The dsDNA of the target and probes forms a disulfide-induced long self-assembled complex, which protects AuNPs from salt-induced aggregation and transition of optical properties. This colorimetric assay could discriminate down to 1 pmol/μL of 30 bp MERS-CoV and further be adapted for convenient on-site detection of other infectious diseases, especially in resource-limited settings.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544435,R137524,keywords,R137527,Drug delivery,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R141102,Oral delivery of anti-TNF antibody shielded by natural polyphenol-mediated supramolecular assembly for inflammatory bowel disease therapy,S563779,R141104,Colitis model,R140256,DSS-induced colitis,"Rationale: Anti-tumor necrosis factor (TNF) therapy is a very effective way to treat inflammatory bowel disease. However, systemic exposure to anti-TNF-α antibodies through current clinical systemic administration can cause serious adverse effects in many patients. Here, we report a facile prepared self-assembled supramolecular nanoparticle based on natural polyphenol tannic acid and poly(ethylene glycol) containing polymer for oral antibody delivery. Method: This supramolecular nanoparticle was fabricated within minutes in aqueous solution and easily scaled up to gram level due to their pH-dependent reversible assembly. DSS-induced colitis model was prepared to evaluate the ability of inflammatory colon targeting ability and therapeutic efficacy of this antibody-loaded nanoparticles. Results: This polyphenol-based nanoparticle can be aqueous assembly without organic solvent and thus scaled up easily. The oral administration of antibody loaded nanoparticle achieved high accumulation in the inflamed colon and low systemic exposure. The novel formulation of anti-TNF-α antibodies administrated orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally. The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after the treatment of novel formulation of anti-TNF-α antibodies even reached the similar level to healthy controls. Conclusion: This polyphenol-based supramolecular nanoparticle is a promising platform for oral delivery of antibodies for the treatment of inflammatory bowel diseases, which may have promising clinical translation prospects.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R144353,Exosome-based nanocarriers as bio-inspired and versatile vehicles for drug delivery: recent advances and challenges,S578078,R144356,Type of nanocarrier,R144362,Extracellular vesicles,"Recent decades have witnessed the fast and impressive development of nanocarriers as a drug delivery system. Considering the safety, delivery efficiency and stability of nanocarriers, there are many obstacles in accomplishing successful clinical translation of these nanocarrier-based drug delivery systems. The gap has urged drug delivery scientists to develop innovative nanocarriers with high compatibility, stability and longer circulation time. Exosomes are nanometer-sized, lipid-bilayer-enclosed extracellular vesicles secreted by many types of cells. Exosomes serving as versatile drug vehicles have attracted increasing attention due to their inherent ability of shuttling proteins, lipids and genes among cells and their natural affinity to target cells. Attractive features of exosomes, such as nanoscopic size, low immunogenicity, high biocompatibility, encapsulation of various cargoes and the ability to overcome biological barriers, distinguish them from other nanocarriers. To date, exosome-based nanocarriers delivering small molecule drugs as well as bioactive macromolecules have been developed for the treatment of many prevalent and obstinate diseases including cancer, CNS disorders and some other degenerative diseases. Exosome-based nanocarriers have a huge prospect in overcoming many hindrances encountered in drug and gene delivery. This review highlights the advances as well as challenges of exosome-based nanocarriers as drug vehicles. Special focus has been placed on the advantages of exosomes in delivering various cargoes and in treating obstinate diseases, aiming to offer new insights for exploring exosomes in the field of drug delivery.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550719,R138609,Polymer,L387538,Glyceryl monooleate (GMO),"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544439,R137524,keywords,R137530,Human tumor cells,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel;Drug delivery;Nanoparticle;Radiotherapy;Hypoxia;Human tumor cells;cellular uptake",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138611,Paclitaxel/Chitosan Nanosupensions Provide Enhanced Intravesical Bladder Cancer Therapy with Sustained and Prolonged Delivery of Paclitaxel,S550763,R138615,keywords,L387565,intravesical instillation,"Bladder cancer (BC) is a very common cancer. Nonmuscle-invasive bladder cancer (NMIBC) is the most common type of bladder cancer. After postoperative tumor resection, chemotherapy intravesical instillation is recommended as a standard treatment to significantly reduce recurrences. Nanomedicine-mediated delivery of a chemotherapeutic agent targeting cancer could provide a solution to obtain longer residence time and high bioavailability of an anticancer drug. The approach described here provides a nanomedicine with sustained and prolonged delivery of paclitaxel and enhanced therapy of intravesical bladder cancer, which is paclitaxel/chitosan (PTX/CS) nanosupensions (NSs). The positively charged PTX/CS NSs exhibited a rod-shaped morphology with a mean diameter about 200 nm. They have good dispersivity in water without any protective agents, and the positively charged properties make them easy to be adsorbed on the inner mucosa of the bladder through electrostatic adsorption. PTX/CS NSs also had a high drug loading capacity and can maintain sustained release of paclitaxel which could be prolonged over 10 days. Cell experiments in vitro demonstrated that PTX/CS NSs had good biocompatibility and effective bladder cancer cell proliferation inhibition. The significant anticancer efficacy against intravesical bladder cancer was verified by an in situ bladder cancer model. The paclitaxel/chitosan nanosupensions could provide sustained delivery of chemotherapeutic agents with significant anticancer efficacy against intravesical bladder cancer.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R142718,Cellular Uptake Mechanism of Paclitaxel Nanocrystals Determined by Confocal Imaging and Kinetic Measurement,S573496,R142720,has cell line,R142736,KB cells,"Nanocrystal formulation has become a viable solution for delivering poorly soluble drugs including chemotherapeutic agents. The purpose of this study was to examine cellular uptake of paclitaxel nanocrystals by confocal imaging and concentration measurement. It was found that drug nanocrystals could be internalized by KB cells at much higher concentrations than a conventional, solubilized formulation. The imaging and quantitative results suggest that nanocrystals could be directly taken up by cells as solid particles, likely via endocytosis. Moreover, it was found that polymer treatment to drug nanocrystals, such as surface coating and lattice entrapment, significantly influenced the cellular uptake. While drug molecules are in the most stable physical state, nanocrystals of a poorly soluble drug are capable of achieving concentrated intracellular presence enabling needed therapeutic effects.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R144268,"PEG–lipid micelles as drug carriers: physiochemical attributes, formulation principles and biological implication",S577528,R144270,Type of nanocarrier,R144263,Lipid micelles,"Abstract PEG–lipid micelles, primarily conjugates of polyethylene glycol (PEG) and distearyl phosphatidylethanolamine (DSPE) or PEG–DSPE, have emerged as promising drug-delivery carriers to address the shortcomings associated with new molecular entities with suboptimal biopharmaceutical attributes. The flexibility in PEG–DSPE design coupled with the simplicity of physical drug entrapment have distinguished PEG–lipid micelles as versatile and effective drug carriers for cancer therapy. They were shown to overcome several limitations of poorly soluble drugs such as non-specific biodistribution and targeting, lack of water solubility and poor oral bioavailability. Therefore, considerable efforts have been made to exploit the full potential of these delivery systems; to entrap poorly soluble drugs and target pathological sites both passively through the enhanced permeability and retention (EPR) effect and actively by linking the terminal PEG groups with targeting ligands, which were shown to increase delivery efficiency and tissue specificity. This article reviews the current state of PEG–lipid micelles as delivery carriers for poorly soluble drugs, their biological implications and recent developments in exploring their active targeting potential. In addition, this review sheds light on the physical properties of PEG–lipid micelles and their relevance to the inherent advantages and applications of PEG–lipid micelles for drug delivery.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R144353,Exosome-based nanocarriers as bio-inspired and versatile vehicles for drug delivery: recent advances and challenges,S578070,R144356,Advantages,L404608,Low immunogenicity,"Recent decades have witnessed the fast and impressive development of nanocarriers as a drug delivery system. Considering the safety, delivery efficiency and stability of nanocarriers, there are many obstacles in accomplishing successful clinical translation of these nanocarrier-based drug delivery systems. The gap has urged drug delivery scientists to develop innovative nanocarriers with high compatibility, stability and longer circulation time. Exosomes are nanometer-sized, lipid-bilayer-enclosed extracellular vesicles secreted by many types of cells. Exosomes serving as versatile drug vehicles have attracted increasing attention due to their inherent ability of shuttling proteins, lipids and genes among cells and their natural affinity to target cells. Attractive features of exosomes, such as nanoscopic size, low immunogenicity, high biocompatibility, encapsulation of various cargoes and the ability to overcome biological barriers, distinguish them from other nanocarriers. To date, exosome-based nanocarriers delivering small molecule drugs as well as bioactive macromolecules have been developed for the treatment of many prevalent and obstinate diseases including cancer, CNS disorders and some other degenerative diseases. Exosome-based nanocarriers have a huge prospect in overcoming many hindrances encountered in drug and gene delivery. This review highlights the advances as well as challenges of exosome-based nanocarriers as drug vehicles. Special focus has been placed on the advantages of exosomes in delivering various cargoes and in treating obstinate diseases, aiming to offer new insights for exploring exosomes in the field of drug delivery.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R148267,Enhanced delivery of etoposide across the blood–brain barrier to restrain brain tumor growth using melanotransferrin antibody- and tamoxifen-conjugated solid lipid nanoparticles,S594419,R148269,Surface functionalized with,R148273,Melanotransferrin antibody (MA),"Abstract Melanotransferrin antibody (MA) and tamoxifen (TX) were conjugated on etoposide (ETP)-entrapped solid lipid nanoparticles (ETP-SLNs) to target the blood–brain barrier (BBB) and glioblastom multiforme (GBM). MA- and TX-conjugated ETP-SLNs (MA–TX–ETP–SLNs) were used to infiltrate the BBB comprising a monolayer of human astrocyte-regulated human brain-microvascular endothelial cells (HBMECs) and to restrain the proliferation of malignant U87MG cells. TX-grafted ETP-SLNs (TX–ETP–SLNs) significantly enhanced the BBB permeability coefficient for ETP and raised the fluorescent intensity of calcein-AM when compared with ETP-SLNs. In addition, surface MA could increase the BBB permeability coefficient for ETP about twofold. The viability of HBMECs was higher than 86%, suggesting a high biocompatibility of MA–TX–ETP-SLNs. Moreover, the efficiency in antiproliferation against U87MG cells was in the order of MA–TX–ETP-SLNs > TX–ETP-SLNs > ETP-SLNs > SLNs. The capability of MA–TX–ETP-SLNs to target HBMECs and U87MG cells during internalization was verified by immunochemical staining of expressed melanotransferrin. MA–TX–ETP-SLNs can be a potent pharmacotherapy to deliver ETP across the BBB to GBM.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505305,R110815,Cytotoxicity assay,R110257,MTT assay,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138043,"Paclitaxel-loaded PLGA nanoparticles surface modified with transferrin and Pluronic®P85, anin vitrocell line andin vivobiodistribution studies on rat model",S546349,R138045,keywords,L384152,multidrug resistance,"The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic®P85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluted for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague–Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S502564,R110244,Polymer charachterisation,R72128,nuclear magnetic resonance,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505809,R110815,Polymer,R111091,Pluronic F127,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R141395,Enhanced Ability of Oligomeric Nanobodies Targeting MERS Coronavirus Receptor-Binding Domain,S565831,R141396,Mechanism of Antiviral Action,L397116,RBD–receptor binding inhibition,"Middle East respiratory syndrome (MERS) coronavirus (MERS-CoV), an infectious coronavirus first reported in 2012, has a mortality rate greater than 35%. Therapeutic antibodies are key tools for preventing and treating MERS-CoV infection, but to date no such agents have been approved for treatment of this virus. Nanobodies (Nbs) are camelid heavy chain variable domains with properties distinct from those of conventional antibodies and antibody fragments. We generated two oligomeric Nbs by linking two or three monomeric Nbs (Mono-Nbs) targeting the MERS-CoV receptor-binding domain (RBD), and compared their RBD-binding affinity, RBD–receptor binding inhibition, stability, and neutralizing and cross-neutralizing activity against MERS-CoV. Relative to Mono-Nb, dimeric Nb (Di-Nb) and trimeric Nb (Tri-Nb) had significantly greater ability to bind MERS-CoV RBD proteins with or without mutations in the RBD, thereby potently blocking RBD–MERS-CoV receptor binding. The engineered oligomeric Nbs were very stable under extreme conditions, including low or high pH, protease (pepsin), chaotropic denaturant (urea), and high temperature. Importantly, Di-Nb and Tri-Nb exerted significantly elevated broad-spectrum neutralizing activity against at least 19 human and camel MERS-CoV strains isolated in different countries and years. Overall, the engineered Nbs could be developed into effective therapeutic agents for prevention and treatment of MERS-CoV infection.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R155603,Effect of dimethyl‐β‐cyclodextrin concentrations on the pulmonary delivery of recombinant human growth hormone dry powder in rats,S623661,R155605,Uses drug,R155606,Recombinant human growth hormone,"The aim of this article is to prepare and characterize inhalable dry powders of recombinant human growth hormone (rhGH), and assess their efficacy for systemic delivery of the protein in rats. The powders were prepared by spray drying using dimethyl-beta-cyclodextrin (DMbetaCD) at different molar ratios in the initial feeds. Size exclusive chromatography was performed in order to determine protecting effect of DMbetaCD on the rhGH aggregation during spray drying. By increasing the concentration of DMbetaCD, rhGH aggregation was decreased from 9.67 (in the absence of DMbetaCD) to 0.84% (using DMbetaCD at 1000 molar ratio in the spray solution). The aerosol performance of the spray dried (SD) powders was evaluated using Andersen cascade impactor. Fine particle fraction values of 53.49%, 33.40%, and 23.23% were obtained using DMbetaCD at 10, 100, and 1000 molar ratio, respectively. In vivo studies showed the absolute bioavailability of 25.38%, 76.52%, and 63.97% after intratracheal insufflation of the powders produced after spray drying of the solutions containing DMbetaCD at 10, 100, and 1000 molar ratio, respectively in rat. In conclusion, appropriate cyclodextrin concentration was achieved considering the protein aggregation and aerosol performance of the SD powders and the systemic absorption following administration through the rat lung.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R141407,A self-adjuvanted nanoparticle based vaccine against infectious bronchitis virus,S565963,R141408,Type of nanoparticles,L397230,Self-Assembling Protein Nanoparticle (SAPN),"Infectious bronchitis virus (IBV) affects poultry respiratory, renal and reproductive systems. Currently the efficacy of available live attenuated or killed vaccines against IBV has been challenged. We designed a novel IBV vaccine alternative using a highly innovative platform called Self-Assembling Protein Nanoparticle (SAPN). In this vaccine, B cell epitopes derived from the second heptad repeat (HR2) region of IBV spike proteins were repetitively presented in its native trimeric conformation. In addition, flagellin was co-displayed in the SAPN to achieve a self-adjuvanted effect. Three groups of chickens were immunized at four weeks of age with the vaccine prototype, IBV-Flagellin-SAPN, a negative-control construct Flagellin-SAPN or a buffer control. The immunized chickens were challenged with 5x104.7 EID50 IBV M41 strain. High antibody responses were detected in chickens immunized with IBV-Flagellin-SAPN. In ex vivo proliferation tests, peripheral mononuclear cells (PBMCs) derived from IBV-Flagellin-SAPN immunized chickens had a significantly higher stimulation index than that of PBMCs from chickens receiving Flagellin-SAPN. Chickens immunized with IBV-Flagellin-SAPN had a significant reduction of tracheal virus shedding and lesser tracheal lesion scores than did negative control chickens. The data demonstrated that the IBV-Flagellin-SAPN holds promise as a vaccine for IBV.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R148187,Dual-Peptide-Functionalized Albumin-Based Nanoparticles with pH-Dependent Self-Assembly Behavior for Drug Delivery,S594180,R148188,Nanoparticles preparation method,L413145,Self-assembly through electrostatic interaction,"Drug delivery has become an important strategy for improving the chemotherapy efficiency. Here we developed a multifunctionalized nanosized albumin-based drug-delivery system with tumor-targeting, cell-penetrating, and endolysosomal pH-responsive properties. cRGD-BSA/KALA/DOX nanoparticles were fabricated by self-assembly through electrostatic interaction between cell-penetrating peptide KALA and cRGD-BSA, with cRGD as a tumor-targeting ligand. Under endosomal/lysosomal acidic conditions, the changes in the electric charges of cRGD-BSA and KALA led to the disassembly of the nanoparticles to accelerate intracellular drug release. cRGD-BSA/KALA/DOX nanoparticles showed an enhanced inhibitory effect in the growth of αvβ3-integrin-overexpressed tumor cells, indicating promising application in cancer treatments.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552014,R138924,keywords,R138925,Silver nanoparticles,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552025,R138924,Type of inorganic nanoparticles,R138925,Silver nanoparticles,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R144256,"Nanoemulsions: formation, properties and applications",S577499,R144258,Advantages,L404245,Small size,"Nanoemulsions are kinetically stable liquid-in-liquid dispersions with droplet sizes on the order of 100 nm. Their small size leads to useful properties such as high surface area per unit volume, robust stability, optically transparent appearance, and tunable rheology. Nanoemulsions are finding application in diverse areas such as drug delivery, food, cosmetics, pharmaceuticals, and material synthesis. Additionally, they serve as model systems to understand nanoscale colloidal dispersions. High and low energy methods are used to prepare nanoemulsions, including high pressure homogenization, ultrasonication, phase inversion temperature and emulsion inversion point, as well as recently developed approaches such as bubble bursting method. In this review article, we summarize the major methods to prepare nanoemulsions, theories to predict droplet size, physical conditions and chemical additives which affect droplet stability, and recent applications.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R144336,Solid Lipid Nanoparticles: Emerging Colloidal Nano Drug Delivery Systems,S577962,R144338,Type of nanocarrier,R144265,Solid lipid nanoparticles,"Solid lipid nanoparticles (SLNs) are nanocarriers developed as substitute colloidal drug delivery systems parallel to liposomes, lipid emulsions, polymeric nanoparticles, and so forth. Owing to their unique size dependent properties and ability to incorporate drugs, SLNs present an opportunity to build up new therapeutic prototypes for drug delivery and targeting. SLNs hold great potential for attaining the goal of targeted and controlled drug delivery, which currently draws the interest of researchers worldwide. The present review sheds light on different aspects of SLNs including fabrication and characterization techniques, formulation variables, routes of administration, surface modifications, toxicity, and biomedical applications.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R148280,Lactoferrin bioconjugated solid lipid nanoparticles: a new drug delivery system for potential brain targeting,S594462,R148282,Type of nanocarrier,R148284,Solid lipid nanoparticles (SLN),"Abstract Background: Delivery of drugs to brain is a subtle task in the therapy of many severe neurological disorders. Solid lipid nanoparticles (SLN) easily diffuse the blood–brain barrier (BBB) due to their lipophilic nature. Furthermore, ligand conjugation on SLN surface enhances the targeting efficiency. Lactoferin (Lf) conjugated SLN system is first time attempted for effective brain targeting in this study. Purpose: Preparation of Lf-modified docetaxel (DTX)-loaded SLN for proficient delivery of DTX to brain. Methods: DTX-loaded SLN were prepared using emulsification and solvent evaporation method and conjugation of Lf on SLN surface (C-SLN) was attained through carbodiimide chemistry. These lipidic nanoparticles were evaluated by DLS, AFM, FTIR, XRD techniques and in vitro release studies. Colloidal stability study was performed in biologically simulated environment (normal saline and serum). These lipidic nanoparticles were further evaluated for its targeting mechanism for uptake in brain tumour cells and brain via receptor saturation studies and distribution studies in brain, respectively. Results: Particle size of lipidic nanoparticles was found to be optimum. Surface morphology (zeta potential, AFM) and surface chemistry (FTIR) confirmed conjugation of Lf on SLN surface. Cytotoxicity studies revealed augmented apoptotic activity of C-SLN than SLN and DTX. Enhanced cytotoxicity was demonstrated by receptor saturation and uptake studies. Brain concentration of DTX was elevated significantly with C-SLN than marketed formulation. 
Conclusions: It is evident from the cytotoxicity, uptake that SLN has potential to deliver drug to brain than marketed formulation but conjugating Lf on SLN surface (C-SLN) further increased the targeting potential for brain tumour. Moreover, brain distribution studies corroborated the use of C-SLN as a viable vehicle to target drug to brain. Hence, C-SLN was demonstrated to be a promising DTX delivery system to brain as it possessed remarkable biocompatibility, stability and efficacy than other reported delivery systems.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R144491,Curcumin Loaded-PLGA Nanoparticles Conjugated with Tet-1 Peptide for Potential Use in Alzheimer's Disease,S578732,R144492,Surface functionalized with,R144493,Tet-1 peptide,"Alzheimer's disease is a growing concern in the modern world. As the currently available medications are not very promising, there is an increased need for the fabrication of newer drugs. Curcumin is a plant derived compound which has potential activities beneficial for the treatment of Alzheimer's disease. Anti-amyloid activity and anti-oxidant activity of curcumin is highly beneficial for the treatment of Alzheimer's disease. The insolubility of curcumin in water restricts its use to a great extend, which can be overcome by the synthesis of curcumin nanoparticles. In our work, we have successfully synthesized water-soluble PLGA coated- curcumin nanoparticles and characterized it using different techniques. As drug targeting to diseases of cerebral origin are difficult due to the stringency of blood-brain barrier, we have coupled the nanoparticle with Tet-1 peptide, which has the affinity to neurons and possess retrograde transportation properties. Our results suggest that curcumin encapsulated-PLGA nanoparticles are able to destroy amyloid aggregates, exhibit anti-oxidative property and are non-cytotoxic. The encapsulation of the curcumin in PLGA does not destroy its inherent properties and so, the PLGA-curcumin nanoparticles can be used as a drug with multiple functions in treating Alzheimer's disease proving it to be a potential therapeutic tool against this dreaded disease.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R138043,"Paclitaxel-loaded PLGA nanoparticles surface modified with transferrin and Pluronic®P85, anin vitrocell line andin vivobiodistribution studies on rat model",S546285,R138045,Polymer,R138049,Transferrin (Tf),"The development of multidrug resistance (due to drug efflux by P-glycoproteins) is a major drawback with the use of paclitaxel (PTX) in the treatment of cancer. The rationale behind this study is to prepare PTX nanoparticles (NPs) for the reversal of multidrug resistance based on the fact that PTX loaded into NPs is not recognized by P-glycoproteins and hence is not effluxed out of the cell. Also, the intracellular penetration of the NPs could be enhanced by anchoring transferrin (Tf) on the PTX-PLGA-NPs. PTX-loaded PLGA NPs (PTX-PLGA-NPs), Pluronic®P85-coated PLGA NPs (P85-PTX-PLGA-NPs), and Tf-anchored PLGA NPs (Tf-PTX-PLGA-NPs) were prepared and evaluted for cytotoxicity and intracellular uptake using C6 rat glioma cell line. A significant increase in cytotoxicity was observed in the order of Tf-PTX-PLGA-NPs > P85-PTX-PLGA-NPs > PTX-PLGA-NPs in comparison to drug solution. In vivo biodistribution on male Sprague–Dawley rats bearing C6 glioma (subcutaneous) showed higher tumor PTX concentrations in animals administered with PTX-NPs compared to drug solution.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R148330,Enhanced Intracellular Delivery and Chemotherapy for Glioma Rats by Transferrin-Conjugated Biodegradable Polymersomes Loaded with Doxorubicin,S594650,R148332,Surface functionalized with,R138049,Transferrin (Tf),"A brain drug delivery system for glioma chemotherapy based on transferrin-conjugated biodegradable polymersomes, Tf-PO-DOX, was made and evaluated with doxorubicin (DOX) as a model drug. Biodegradable polymersomes (PO) loaded with doxorubicin (DOX) were prepared by the nanoprecipitation method (PO-DOX) and then conjugated with transferrin (Tf) to yield Tf-PO-DOX with an average diameter of 107 nm and surface Tf molecule number per polymersome of approximately 35. Compared with PO-DOX and free DOX, Tf-PO-DOX demonstrated the strongest cytotoxicity against C6 glioma cells and the greatest intracellular delivery. It was shown in pharmacokinetic and brain distribution experiments that Tf-PO significantly enhanced brain delivery of DOX, especially the delivery of DOX into brain tumor cells. Pharmacodynamics results revealed a significant reduction of tumor volume and a significant increase of median survival time in the group of Tf-PO-DOX compared with those in saline control animals, animals treated with PO-DOX, and free DOX solution. By terminal deoxynucleotidyl transferase-mediated dUTP nick-end-labeling, Tf-PO-DOX could extensively make tumor cell apoptosis. These results indicated that Tf-PO-DOX could significantly enhance the intracellular delivery of DOX in glioma and the chemotherapeutic effect of DOX for glioma rats.",TRUE,noun phrase
R67,Medicinal Chemistry and Pharmaceutics,R141401,Application of camelid heavy-chain variable domains (VHHs) in prevention and treatment of bacterial and viral infections,S565902,R141402,Virus,L397178,viral infections,"ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity – characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host–pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. 
The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.",TRUE,noun phrase
R55,Microbial Physiology,R49446,The Impact of Pyroglutamate: Sulfolobus acidocaldarius Has a Growth Advantage over Saccharolobus solfataricus in Glutamate-Containing Media,S147540,R49467,Organism,L90709,Saccharolobus solfataricus,"Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.",TRUE,noun phrase
R55,Microbial Physiology,R49446,The Impact of Pyroglutamate: Sulfolobus acidocaldarius Has a Growth Advantage over Saccharolobus solfataricus in Glutamate-Containing Media,S147534,R49452,Organism,L90703,Sulfolobus acidocaldarius,"Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatical analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. Concludingly, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.",TRUE,noun phrase
R112127,Multiagent Systems,R138297,A Multi-Agent System for the management of E-Government Services,S548015,R138299,has Recommended items,R138239,Government services,"This paper aims at studying the exploitation of intelligent agents for supporting citizens to access e-government services. To this purpose, it proposes a multi-agent system capable of suggesting to the users the most interesting services for them; specifically, these suggestions are computed by taking into account both their exigencies/preferences and the capabilities of the devices they are currently exploiting. The paper first describes the proposed system and, then, reports various experimental results. Finally, it presents a comparison between our system and other related ones already presented in the literature.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S594905,R148380,keywords,L413546, cellulose nanocrystals,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S594908,R148380,keywords,L413549, chemical sensor,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S594907,R148380,keywords,L413548, WO3 nanotube,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R137470,Fabrication and characterization of Ga-doped ZnO / Si heterojunction nanodiodes,S544109,R137471,ZnO film deposition method,L383160,chemical bath deposition ,"In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 ±3 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R137470,Fabrication and characterization of Ga-doped ZnO / Si heterojunction nanodiodes,S544125,R137471,keywords,L383175,Ga-doped ZnO films,"In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 ±3 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R143734,Ni and NiO Nanoparticles Decorated Metal–Organic Framework Nanosheets: Facile Synthesis and High-Performance Nonenzymatic Glucose Detection in Human Serum,S575796,R143738,keywords,L403352,Glucose sensor,"Ni-MOF (metal-organic framework)/Ni/NiO/carbon frame nanocomposite was formed by combing Ni and NiO nanoparticles and a C frame with Ni-MOF using an efficient one-step calcination method. The morphology and structure of Ni-MOF/Ni/NiO/C nanocomposite were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and energy disperse spectroscopy (EDS) mapping. Ni-MOF/Ni/NiO/C nanocomposites were immobilized onto glassy carbon electrodes (GCEs) with Nafion film to construct high-performance nonenzymatic glucose and H2O2 electrochemical sensors. Cyclic voltammetric (CV) study showed Ni-MOF/Ni/NiO/C nanocomposite displayed better electrocatalytic activity toward glucose oxidation as compared to Ni-MOF. Amperometric study indicated the glucose sensor displayed high performance, offering a low detection limit (0.8 μM), a high sensitivity of 367.45 mA M-1 cm-2, and a wide linear range (from 4 to 5664 μM). Importantly, good reproducibility, long-time stability, and excellent selectivity were obtained within the as-fabricated glucose sensor. Furthermore, the constructed high-performance sensor was utilized to monitor the glucose levels in human serum, and satisfactory results were obtained. It demonstrated the Ni-MOF/Ni/NiO/C nanocomposite can be used as a good electrochemical sensing material in practical biological applications.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R143734,Ni and NiO Nanoparticles Decorated Metal–Organic Framework Nanosheets: Facile Synthesis and High-Performance Nonenzymatic Glucose Detection in Human Serum,S575797,R143738,keywords,L403353,Human serum,"Ni-MOF (metal-organic framework)/Ni/NiO/carbon frame nanocomposite was formed by combining Ni and NiO nanoparticles and a C frame with Ni-MOF using an efficient one-step calcination method. The morphology and structure of Ni-MOF/Ni/NiO/C nanocomposite were characterized by transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), and energy dispersive spectroscopy (EDS) mapping. Ni-MOF/Ni/NiO/C nanocomposites were immobilized onto glassy carbon electrodes (GCEs) with Nafion film to construct high-performance nonenzymatic glucose and H2O2 electrochemical sensors. Cyclic voltammetric (CV) study showed Ni-MOF/Ni/NiO/C nanocomposite displayed better electrocatalytic activity toward glucose oxidation as compared to Ni-MOF. Amperometric study indicated the glucose sensor displayed high performance, offering a low detection limit (0.8 μM), a high sensitivity of 367.45 mA M-1 cm-2, and a wide linear range (from 4 to 5664 μM). Importantly, good reproducibility, long-time stability, and excellent selectivity were obtained within the as-fabricated glucose sensor. Furthermore, the constructed high-performance sensor was utilized to monitor the glucose levels in human serum, and satisfactory results were obtained. It demonstrated the Ni-MOF/Ni/NiO/C nanocomposite can be used as a good electrochemical sensing material in practical biological applications.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R135569,A Highly Sensitive and Flexible Capacitive Pressure Sensor Based on a Porous Three-Dimensional PDMS/Microsphere Composite,S536342,R135573,keywords,R135586,Pressure sensor,"In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa−1 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam’s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R137470,Fabrication and characterization of Ga-doped ZnO / Si heterojunction nanodiodes,S544124,R137471,keywords,L383174,Si nanowires,"In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements showed that diodes had a well-defined rectifying behavior with a good rectification ratio of 10^3 at ±3 V at room temperature. The ideality factor (n) changed from 2.2 to 1.2 with increasing temperature.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R110312,Atomic Layer Deposition of Titanium Oxide on Single-Layer Graphene: An Atomic-Scale Study toward Understanding Nucleation and Growth,S502744,R110315,substrate,R110319,Single-Layer Graphene,"Controlled synthesis of a hybrid nanomaterial based on titanium oxide and single-layer graphene (SLG) using atomic layer deposition (ALD) is reported here. The morphology and crystallinity of the oxide layer on SLG can be tuned mainly with the deposition temperature, achieving either a uniform amorphous layer at 60 °C or ∼2 nm individual nanocrystals on the SLG at 200 °C after only 20 ALD cycles. A continuous and uniform amorphous layer formed on the SLG after 180 cycles at 60 °C can be converted to a polycrystalline layer containing domains of anatase TiO2 after a postdeposition annealing at 400 °C under vacuum. Using aberration-corrected transmission electron microscopy (AC-TEM), characterization of the structure and chemistry was performed on an atomic scale and provided insight into understanding the nucleation and growth. AC-TEM imaging and electron energy loss spectroscopy revealed that rocksalt TiO nanocrystals were occasionally formed at the early stage of nucleation after only 20 ALD cycles. Understanding and controlling nucleation and growth of the hybrid nanomaterial are crucial to achieving novel properties and enhanced performance for a wide range of applications that exploit the synergetic functionalities of the ensemble.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R161508,Fabrication of a SnO2 Nanowire Gas Sensor and Sensor Performance for Hydrogen,S644975,R161510,Sensing element nanomaterial,L440641,SnO2 nanowires,SnO2 nanowire gas sensors have been fabricated on Cd−Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.,TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R143705,A highly stretchable and sensitive strain sensor based on graphene–elastomer composites with a novel double-interconnected network,S575098,R143707,keywords,L402841,Strain Sensor,"The construction of a continuous conductive network with a low percolation threshold plays a key role in fabricating a high performance strain sensor. Herein, a highly stretchable and sensitive strain sensor based on binary rubber blend/graphene was fabricated by a simple and effective assembly approach. A novel double-interconnected network composed of compactly continuous graphene conductive networks was designed and constructed using the composites, thereby resulting in an ultralow percolation threshold of 0.3 vol%, approximately 12-fold lower than that of the conventional graphene-based composites with a homogeneously dispersed morphology (4.0 vol%). Near the percolation threshold, the sensors could be stretched in excess of 100% applied strain, and exhibited a high stretchability, sensitivity (gauge factor ∼82.5) and good reproducibility (∼300 cycles) of up to 100% strain under cyclic tensile tests. The proposed strategy provides a novel effective approach for constructing a double-interconnected conductive network using polymer composites, and is very competitive for developing and designing high performance strain sensors.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R143695,Electrically conductive thermoplastic elastomer nanocomposites at ultralow graphene loading levels for strain sensor applications,S575023,R143697,keywords,L402779,Strain Sensors,"An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min−1 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min−1) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.",TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R110342,Structure and optical properties of TiO2 thin films deposited by ALD method,S502835,R110344,Material,R110301,Titanium dioxide,Abstract: This paper presents the results of a study on titanium dioxide thin films prepared by atomic layer deposition method on a silicon substrate. The changes of surface morphology have been observed in topographic images performed with the atomic force microscope (AFM) and scanning electron microscope (SEM). Obtained roughness parameters have been calculated with XEI Park Systems software. Qualitative studies of chemical composition were also performed using the energy dispersive spectrometer (EDS). The structure of titanium dioxide was investigated by X-ray crystallography. A variety of crystalline TiO2 was also confirmed by using the Raman spectrometer. The optical reflection spectra have been measured with UV-Vis spectrophotometry.,TRUE,noun phrase
R279,Nanoscience and Nanotechnology,R151352,Enzymatic glucose biosensor based on ZnO nanorod array grown by hydrothermal decomposition,S607157,R151354,ZnO form,L419827,ZnO nanorod array,"We report herein a glucose biosensor based on glucose oxidase (GOx) immobilized on ZnO nanorod array grown by hydrothermal decomposition. In a phosphate buffer solution with a pH value of 7.4, negatively charged GOx was immobilized on positively charged ZnO nanorods through electrostatic interaction. At an applied potential of +0.8 V versus Ag∕AgCl reference electrode, ZnO nanorods based biosensor presented a high and reproducible sensitivity of 23.1 μA cm−2 mM−1 with a response time of less than 5 s. The biosensor shows a linear range from 0.01 to 3.45 mM and an experiment limit of detection of 0.01 mM. An apparent Michaelis-Menten constant of 2.9 mM shows a high affinity between glucose and GOx immobilized on ZnO nanorods.",TRUE,noun phrase
R145261,Natural Language Processing,R163666,An Overview of the Active Gene Annotation Corpus and the BioNLP OST 2019 AGAC Track Tasks,S653559,R163668,Dataset name,R163670,Active Gene Annotation Corpus (AGAC),"The active gene annotation corpus (AGAC) was developed to support knowledge discovery for drug repurposing. Based on the corpus, the AGAC track of the BioNLP Open Shared Tasks 2019 was organized, to facilitate cross-disciplinary collaboration across BioNLP and Pharmacoinformatics communities, for drug repurposing. The AGAC track consists of three subtasks: 1) named entity recognition, 2) thematic relation extraction, and 3) loss of function (LOF) / gain of function (GOF) topic classification. Five teams participated in the AGAC track, and their performance is compared and analyzed. The results revealed substantial room for improvement in the design of the task, which we analyzed in terms of “imbalanced data”, “selective annotation” and “latent topic annotation”.",TRUE,noun phrase
R145261,Natural Language Processing,R163224,An empirical evaluation of resources for the identification of diseases and adverse effects in biomedical literature,S650999,R163226,Concept types,R163260,Adverse Effects,"The mentions of human health perturbations such as the diseases and adverse effects denote a special entity class in the biomedical literature. They help in understanding the underlying risk factors and develop a preventive rationale. The recognition of these named entities in texts through dictionary-based approaches relies on the availability of appropriate terminological resources. Although few resources are publicly available, not all are suitable for the text mining needs. Therefore, this work provides an overview of the well known resources with respect to human diseases and adverse effects such as the MeSH, MedDRA, ICD-10, SNOMED CT, and UMLS. Individual dictionaries are generated from these resources and their performance in recognizing the named entities is evaluated over a manually annotated corpus. In addition, the steps for curating the dictionaries, rule-based acronym disambiguation and their impact on the dictionary performance is discussed. The results show that the MedDRA and UMLS achieve the best recall. Besides this, MedDRA provides an additional benefit of achieving a higher precision. The combination of search results of all the dictionaries achieve a considerably high recall. The corpus is available on http://www.scai.fraunhofer.de/disease-ae-corpus.html",TRUE,noun phrase
R145261,Natural Language Processing,R165975,Named Entity Recognition for Astronomy Literature,S661484,R165977,Data domains,R165978,Astronomy journal articles,"We present a system for named entity recognition (ner) in astronomy journal articles. We have developed this system on a ne corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with ∼40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.",TRUE,noun phrase
R145261,Natural Language Processing,R164317,Named Entity Recognition for Bacterial Type IV Secretion Systems,S655931,R164318,Entity types,R164326,Bacteria names,"Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.",TRUE,noun phrase
R145261,Natural Language Processing,R162482,BioCreative V track 4: a shared task for the extraction of causal network information using the Biological Expression Language,S648253,R162484,Other resources,R162486,Biological Expression Language (BEL),"Automatic extraction of biological network information is one of the most desired and most complex tasks in biological and medical text mining. Track 4 at BioCreative V attempts to approach this complexity using fragments of large-scale manually curated biological networks, represented in Biological Expression Language (BEL), as training and test data. BEL is an advanced knowledge representation format which has been designed to be both human readable and machine processable. The specific goal of track 4 was to evaluate text mining systems capable of automatically constructing BEL statements from given evidence text, and of retrieving evidence text for given BEL statements. Given the complexity of the task, we designed an evaluation methodology which gives credit to partially correct statements. We identified various levels of information expressed by BEL statements, such as entities, functions, relations, and introduced an evaluation framework which rewards systems capable of delivering useful BEL fragments at each of these levels. The aim of this evaluation method is to help identify the characteristics of the systems which, if combined, would be most useful for achieving the overall goal of automatically constructing causal biological networks from text.",TRUE,noun phrase
R145261,Natural Language Processing,R164317,Named Entity Recognition for Bacterial Type IV Secretion Systems,S655933,R164318,Entity types,R164327,Biological processes,"Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.",TRUE,noun phrase
R145261,Natural Language Processing,R164317,Named Entity Recognition for Bacterial Type IV Secretion Systems,S655937,R164318,Entity types,R164329,Cellular components,"Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.",TRUE,noun phrase
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S648159,R162459,Concept types,R150604,Chemical compounds,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthews correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.",TRUE,noun phrase
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S648641,R162563,subtasks,R162567,Chemical Indexing prediction task,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions in total from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,noun phrase
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686694,R172006,Relation types,R172007,Chemical-disease relation,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,noun phrase
R145261,Natural Language Processing,R182418,SPECTER: Document-level Representation Learning using Citation-informed Transformers,S705864,R182420,Has evaluation,L476026,Citation Prediction,"Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.",TRUE,noun phrase
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S583782,R69292,Data domains,R322,Computational Linguistics,"This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,noun phrase
R145261,Natural Language Processing,R172672,Named Entity Recognition with Bidirectional LSTM-CNNs,S689040,R172674,Material,R172675,CoNLL-2003 dataset,"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",TRUE,noun phrase
R145261,Natural Language Processing,R162557,Overview of DrugProt BioCreative VII track: quality evaluation and large scale text mining of drug-gene/protein relations,S686943,R172093,Relation types,R172095,Direct Regulator,"Considering recent progress in NLP, deep learning techniques and biomedical language models, there is a pressing need to generate annotated resources and comparable evaluation scenarios that enable the development of advanced biomedical relation extraction systems that extract interactions between drugs/chemical entities and genes, proteins or miRNAs. Building on the results and experience of the CHEMDNER, CHEMDNER patents and ChemProt tracks, we have posed the DrugProt track at BioCreative VII. The DrugProt track focused on the evaluation of automatic systems able to extract 13 different types of drug-gene/protein relations of importance to understand gene regulatory and pharmacological mechanisms. The DrugProt track addressed regulatory associations (direct/indirect, activator/inhibitor relations), certain types of binding associations (antagonist and agonist relations) as well as metabolic associations (substrate or product relations). To promote development of novel tools and offer a comparative evaluation scenario we have released 61,775 manually annotated gene mentions, 65,561 chemical and drug mentions and a total of 24,526 relationships manually labeled by domain experts. A total of 30 teams submitted results for the DrugProt main track, while 9 teams submitted results for the large-scale text mining subtrack that required processing of over 2.3 million records. Teams obtained very competitive results, with predictions reaching F-measures of over 0.92 for some relation types (antagonist) and F-measures across all relation types close to 0.8. INTRODUCTION Among the most relevant biological and pharmacological relation types are those that involve (a) chemical compounds and drugs as well as (b) gene products including genes, proteins, miRNAs. 
A variety of associations between chemicals and genes/proteins are described in the biomedical literature, and there is a growing interest in facilitating a more systematic extraction of these relations from the literature, either for manual database curation initiatives or to generate large knowledge graphs of importance for drug discovery, drug repurposing, building regulatory or interaction networks or to characterize off-target interactions of drugs that might be of importance to better understand adverse drug reactions. At BioCreative VI, the ChemProt track tried to promote the development of novel systems between chemicals and genes for groups of biologically related association types (ChemProt track relation groups or CPRs). Although the obtained results did have a considerable impact in the development and evaluation of new biomedical relation extraction systems, a limitation of grouping more specific relation types into broader groups was the difficulty to directly exploit the results for database curation efforts and biomedical knowledge graph mining application scenarios. The considerable interest in the integration of chemical and biomedical data for drug-discovery purposes, together with the ongoing curation of relationships between biological and chemical entities from scientific publications and patents due to the recent COVID-19 pandemic, motivated the DrugProt track of BioCreative VII, which proposed using more granular relation types. In order to facilitate the development of more granular relation extraction systems, large manually annotated corpora are needed. Those corpora should include high-quality manually labelled entity mentions together with exhaustive relation annotations generated by domain experts. 
TRACK AND CORPUS DESCRIPTION Corpus description To carry out the DrugProt track at BioCreative VII, we have released a large manually labelled corpus including annotations of mentions of chemical compounds and drugs as well as genes, proteins and miRNAs. Domain experts with experience in biomedical literature annotation and database curation annotated by hand all abstracts using the BRAT annotation interface. The manual labeling of chemicals and genes was done in separate steps and by different experts to avoid introducing biases during the text annotation process. The manual tagging of entity mentions of chemicals and drugs as well as genes, proteins and miRNAs was done following a carefully designed annotation process and in line with publicly released annotation guidelines. Gene/protein entity mentions were manually mapped to their corresponding biological database identifiers whenever possible and classified as either normalizable to databases (tag: GENE-Y) or non-normalizable mentions (GENE-N). Teams that participated in the DrugProt track were only provided with this classification of gene mentions and not the actual database identifier to avoid usage of external knowledge bases for producing their predictions. The corpus construction process required first annotating exhaustively all chemical and gene mentions (phase 1). Afterwards the relation annotation phase followed (phase 2), where relationships between these two types of entities had to be labeled according to publicly available annotation guidelines. Thus, to facilitate the annotation of chemical-protein interactions, the DrugProt track organizers constructed very granular relation annotation rules described in a 33-page annotation guidelines document. These guidelines were refined during an iterative process based on the annotation of sample documents. 
The guidelines provided the basic details of the chemical-protein interaction annotation task and the conventions that had to be followed during the corpus construction process. They incorporated suggestions made by curators as well as observations of annotation inconsistencies encountered when comparing results from different human curators. In brief, DrugProt interactions covered direct interactions (when a physical contact existed between a chemical/drug and a gene/protein) as well as indirect regulatory interactions that alter either the function or the quantity of the gene/gene product. The aim of the iterative manual annotation cycle was to improve the quality and consistency of the guidelines. During the planning of the guidelines some rules had to be reformulated to make them more explicit and clear, and additional rules were added wherever necessary to better cover the practical annotation scenario and to be more complete. The manual annotation task basically consisted of labeling or marking manually, through a customized BRAT web interface, the interactions given the article abstracts as content. Figure 1 summarizes the DrugProt relation types included in the annotation guidelines. Fig. 1. Overview of the DrugProt relation type hierarchy. The corpus annotation carried out for the DrugProt track was exhaustive for all the types of interactions previously specified. This implied that mentions of other kinds of relationships between chemicals and genes (e.g. phenotypic and biological responses) were not manually labelled. Moreover, the DrugProt relations are directed in the sense that only relations of “what a chemical does to a gene/protein” (chemical → gene/protein direction) were annotated, and not vice versa. To establish an easy-to-understand relation nomenclature and avoid redundant class definitions, we reviewed several chemical repositories that included chemical-biology information. 
We reviewed DrugBank, the Therapeutic Targets Database (TTD) and ChEMBL, assay normalization ontologies (BAO) and previously existing formalizations for the annotation of relationships: the Biological Expression Language (BEL), curation guidelines for transcription regulation interactions (DNA-binding transcription factor – target gene interaction) and SIGNOR, a database of causal relationships between biological entities. Each of these resources inspired the definition of the subclasses DIRECT REGULATOR (e.g. DrugBank, ChEMBL, BAO and SIGNOR) and INDIRECT REGULATOR (e.g. BEL, curation guidelines for transcription regulation interactions and SIGNOR). For example, DrugBank relationships for drugs included a total of 22 definitions, some of them overlapping with CHEMPROT subclasses (e.g. “Inhibitor”, “Antagonist”, “Agonist”,...), some of them being regarded as highly specific for the purpose of this task (e.g. “intercalation”, “cross-linking/alkylation”) or referring to biological roles (e.g. “Antibody”, “Incorporation into and Destabilization”) and others, partially overlapping between them (e.g. “Binder” and “Ligand”), that were merged into a single class. Concerning indirect regulatory aspects, the five classes of causal relationships between a subject and an object term defined by BEL (“decreases”, “directlyDecreases”, “increases”, “directlyIncreases” and “causesNoChange”) were highly inspiring. Subclass definitions of pharmacological modes of action were defined according to the IUPHAR/BPS Guide to Pharmacology in 2016. For the DrugProt track a very granular chemical-protein relation annotation was carried out, with the aim to cover most of the relations that are of importance from a biochemical and pharmacological/biomedical perspective. Nevertheless, for the DrugProt track only a total of 13 relation types were used, keeping those that had enough training instances/examples and sufficient manual annotation consistency. 
The final list of relation types used for this shared task was: INDIRECT-DOWNREGULATOR, INDIRECT-UPREGULATOR, DIRECT-REGULATOR, ACTIVATOR, INHIBITOR, AGONIST, ANTAGONIST, AGONIST-ACTIVATOR, AGONIST-INHIBITOR, PRODUCT-OF, SUBSTRATE, SUBSTRATE_PRODUCT-OF and PART-OF. The DrugProt corpus was split randomly into training, development and test sets. We also included a background and a large-scale background collection of records that were automatically annotated with drugs/chemicals and genes/proteins/miRNAs using an entity tagger trained on the manual DrugProt entity mentions. The background collections were merged with the test set to be able to get team predictions also for these records. Table 1 shows a su",TRUE,noun phrase
R145261,Natural Language Processing,R162546,Overview of the BioCreative VI Precision Medicine Track: mining protein interactions and mutations for precision medicine,S648577,R162553,subtasks,R162554,document triage task,"Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein–protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. 
Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.",TRUE,noun phrase
R145261,Natural Language Processing,R145803,End-to-end Neural Coreference Resolution,S629296,R145805,model,R156991,end-to-end coreference resolution model,"We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and by 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.",TRUE,noun phrase
R145261,Natural Language Processing,R172653,Character-aware neural language models,S688993,R172655,Material,R172657,English Penn Treebank,"We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.",TRUE,noun phrase
R145261,Natural Language Processing,R147125,WWW'18 Open Challenge: Financial Opinion Mining and Question Answering,S589363,R147127,Domain,R147128,financial domain,"The growing maturity of Natural Language Processing (NLP) techniques and resources is dramatically changing the landscape of many application domains which are dependent on the analysis of unstructured data at scale. The finance domain, with its reliance on the interpretation of multiple unstructured and structured data sources and its demand for fast and comprehensive decision making is already emerging as a primary ground for the experimentation of NLP, Web Mining and Information Retrieval (IR) techniques for the automatic analysis of financial news and opinions online. This challenge focuses on advancing the state-of-the-art of aspect-based sentiment analysis and opinion-based Question Answering for the financial domain.",TRUE,noun phrase
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S647564,R162353,Other resources,R148554,Gene Ontology,"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature, the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools usable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community-wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two, which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Bioinformatics Institute (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. 
In addition to the annotation itself, the curators evaluated whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered in addressing the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun phrase
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S662811,R166394,Ontology used,R140298,Gene Ontology (GO),"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature, the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools usable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community-wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two, which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Bioinformatics Institute (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. 
In addition to the annotation itself, the curators evaluated whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered in addressing the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun phrase
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S659888,R165604,Entity types,R164585,Geographical places,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun phrase
R145261,Natural Language Processing,R187500,Incorporating non-local information into information extraction systems by Gibbs sampling,S717539,R187502,Method,R187508,Gibbs sampling,"Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. We use this technique to augment an existing CRF-based information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. This technique results in an error reduction of up to 9% over state-of-the-art systems on two established information extraction tasks.",TRUE,noun phrase
R145261,Natural Language Processing,R171842,BC4GO: a full-text corpus for the BioCreative IV GO task,S686269,R171844,Coarse-grained Entity type,R171862,GO Term,"Gene function curation via Gene Ontology (GO) annotation is a common task among Model Organism Database groups. Owing to its manual nature, this task is considered one of the bottlenecks in literature curation. There have been many previous attempts at automatic identification of GO terms and supporting information from full text. However, few systems have delivered an accuracy that is comparable with humans. One recognized challenge in developing such systems is the lack of marked sentence-level evidence text that provides the basis for making GO annotations. We aim to create a corpus that includes the GO evidence text along with the three core elements of GO annotations: (i) a gene or gene product, (ii) a GO term and (iii) a GO evidence code. To ensure our results are consistent with real-life GO data, we recruited eight professional GO curators and asked them to follow their routine GO annotation protocols. Our annotators marked up more than 5000 text passages in 200 articles for 1356 distinct GO terms. For evidence sentence selection, the inter-annotator agreement (IAA) results are 9.3% (strict) and 42.7% (relaxed) in F1-measures. For GO term selection, the IAAs are 47% (strict) and 62.9% (hierarchical). Our corpus analysis further shows that abstracts contain ∼10% of relevant evidence sentences and 30% distinct GO terms, while the Results/Experiment section has nearly 60% relevant sentences and >70% GO terms. Further, of those evidence sentences found in abstracts, less than one-third contain enough experimental detail to fulfill the three core criteria of a GO annotation. This result demonstrates the need of using full-text articles for text mining GO annotations. 
Through its use at the BioCreative IV GO (BC4GO) task, we expect our corpus to become a valuable resource for the BioNLP research community. Database URL: http://www.biocreative.org/resources/corpora/bc-iv-go-task-corpus/.",TRUE,noun phrase
R145261,Natural Language Processing,R172653,Character-aware neural language models,S688999,R172655,Method,R172659,Highway network,"We describe a simple neural language model that relies only on character-level inputs. Predictions are still made at the word-level. Our model employs a convolutional neural network (CNN) and a highway network over characters, whose output is given to a long short-term memory (LSTM) recurrent neural network language model (RNN-LM). On the English Penn Treebank the model is on par with the existing state-of-the-art despite having 60% fewer parameters. On languages with rich morphology (Arabic, Czech, French, German, Spanish, Russian), the model outperforms word-level/morpheme-level LSTM baselines, again with fewer parameters. The results suggest that on many languages, character inputs are sufficient for language modeling. Analysis of word representations obtained from the character composition part of the model reveals that the model is able to encode, from characters only, both semantic and orthographic information.",TRUE,noun phrase
R145261,Natural Language Processing,R162526,Overview of the BioCreative VI text-mining services for Kinome Curation Track,S648481,R162528,Concept types,R162531,human protein kinase,"Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants’ systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.",TRUE,noun phrase
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S587211,R69292,Concept types,R146665,Language Resource,"This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,noun phrase
R145261,Natural Language Processing,R161742,Relationship extraction for knowledge graph creation from biomedical literature,S646060,R161744,Method,R161771,machine learning method,"Biomedical research is growing at such an exponential pace that scientists, researchers, and practitioners are no longer able to cope with the amount of published literature in the domain. The knowledge presented in the literature needs to be systematized in such a way that claims and hypotheses can be easily found, accessed, and validated. Knowledge graphs can provide such a framework for semantic knowledge representation from literature. However, in order to build a knowledge graph, it is necessary to extract knowledge as relationships between biomedical entities and normalize both entities and relationship types. In this paper, we present and compare a few rule-based and machine learning-based (Naive Bayes, Random Forests as examples of traditional machine learning methods and DistilBERT and T5-based models as examples of modern deep learning transformers) methods for scalable relationship extraction from biomedical literature, and for the integration into the knowledge graphs. We examine how resilient these various methods are to unbalanced and fairly small datasets, showing that transformer-based models handle both small datasets (due to pre-training on the large C4 dataset) and unbalanced data well. The best performing model was the DistilBERT-based model fine-tuned on balanced data, with a reported F1-score of 0.89.",TRUE,noun phrase
R145261,Natural Language Processing,R162568,Overview of the BioCreative VII LitCovid Track: multi-label topic classification for COVID-19 literature annotation,S648669,R162570,Evaluation metrics,R162572,Macro F1,"The BioCreative LitCovid track calls for a community effort to tackle automated topic annotation for COVID-19 literature. The number of COVID-19-related articles in the literature is growing by about 10,000 articles per month, significantly challenging curation efforts and downstream interpretation. LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 180,000 articles with millions of accesses each month by users worldwide. The rapid literature growth significantly increases the burden of LitCovid curation, especially for topic annotations. Topic annotation in LitCovid assigns one or more (up to eight) labels to articles. The annotated topics have been widely used both directly in LitCovid (e.g., accounting for ~20% of total uses) and downstream studies such as knowledge network generation and citation analysis. It is, therefore, important to develop innovative text mining methods to tackle the challenge. We organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. This article summarizes the BioCreative LitCovid track in terms of data collection and team participation. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/. It consists of over 30K PubMed articles, one of the largest multi-label classification datasets on biomedical literature. There were 80 submissions in total from 19 teams worldwide. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro F1-score, micro F1-score, and instance-based F1-score, respectively. 
We look forward to further participation in developing biomedical text mining methods in response to the rapid growth of the COVID-19 literature. Keywords—biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; multi-label classification; COVID-19; LitCovid;",TRUE,noun phrase
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S587220,R69292,Concept types,R146668,Measures and Measurements,"This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,noun phrase
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S686646,R171968,Data domains,R150691,MEDICINAL CHEMISTRY,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts as to whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew's correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.",TRUE,noun phrase
R145261,Natural Language Processing,R162568,Overview of the BioCreative VII LitCovid Track: multi-label topic classification for COVID-19 literature annotation,S648670,R162570,Evaluation metrics,R162573,Micro F1,"The BioCreative LitCovid track calls for a community effort to tackle automated topic annotation for COVID-19 literature. The number of COVID-19-related articles in the literature is growing by about 10,000 articles per month, significantly challenging curation efforts and downstream interpretation. LitCovid is a literature database of COVID-19-related articles in PubMed, which has accumulated more than 180,000 articles with millions of accesses each month by users worldwide. The rapid literature growth significantly increases the burden of LitCovid curation, especially for topic annotations. Topic annotation in LitCovid assigns one or more (up to eight) labels to articles. The annotated topics have been widely used both directly in LitCovid (e.g., accounting for ~20% of total uses) and downstream studies such as knowledge network generation and citation analysis. It is, therefore, important to develop innovative text mining methods to tackle the challenge. We organized the BioCreative LitCovid track to call for a community effort to tackle automated topic annotation for COVID-19 literature. This article summarizes the BioCreative LitCovid track in terms of data collection and team participation. The dataset is publicly available via https://ftp.ncbi.nlm.nih.gov/pub/lu/LitCovid/biocreative/. It consists of over 30K PubMed articles, one of the largest multilabel classification datasets on biomedical literature. There were 80 submissions in total from 19 teams worldwide. The highest-performing submissions achieved 0.8875, 0.9181, and 0.9394 for macro F1-score, micro F1-score, and instance-based F1-score, respectively. We look forward to further participation in developing biomedical text mining methods in response to the rapid growth of the COVID-19 literature. Keywords—biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; multi-label classification; COVID-19; LitCovid;",TRUE,noun phrase
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S662715,R166394,Data Domain,R17,Molecular Biology,"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated, in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun phrase
R145261,Natural Language Processing,R162352,Evaluation of BioCreAtIvE assessment of task 2,S662552,R166359,Data domains,R17,Molecular Biology,"Abstract Background Molecular Biology accumulated substantial amounts of data concerning functions of genes and proteins. Information relating to functional descriptions is generally extracted manually from textual data and stored in biological databases to build up annotations for large collections of gene products. Those annotation databases are crucial for the interpretation of large scale analysis approaches using bioinformatics or experimental techniques. Due to the growing accumulation of functional descriptions in biomedical literature the need for text mining tools to facilitate the extraction of such annotations is urgent. In order to make text mining tools useable in real world scenarios, for instance to assist database curators during annotation of protein function, comparisons and evaluations of different approaches on full text articles are needed. Results The Critical Assessment for Information Extraction in Biology (BioCreAtIvE) contest consists of a community wide competition aiming to evaluate different strategies for text mining tools, as applied to biomedical literature. We report on task two which addressed the automatic extraction and assignment of Gene Ontology (GO) annotations of human proteins, using full text articles. The predictions of task 2 are based on triplets of protein – GO term – article passage. The annotation-relevant text passages were returned by the participants and evaluated by expert curators of the GO annotation (GOA) team at the European Institute of Bioinformatics (EBI). Each participant could submit up to three results for each sub-task comprising task 2. In total more than 15,000 individual results were provided by the participants. The curators evaluated, in addition to the annotation itself, whether the protein and the GO term were correctly predicted and traceable through the submitted text fragment. Conclusion Concepts provided by GO are currently the most extended set of terms used for annotating gene products, thus they were explored to assess how effectively text mining tools are able to extract those annotations automatically. Although the obtained results are promising, they are still far from reaching the required performance demanded by real world applications. Among the principal difficulties encountered to address the proposed task were the complex nature of the GO terms and protein names (the large range of variants which are used to express proteins and especially GO terms in free text), and the lack of a standard training set. A range of very different strategies were used to tackle this task. The dataset generated in line with the BioCreative challenge is publicly available and will allow new possibilities for training information extraction methods in the domain of molecular biology.",TRUE,noun phrase
R145261,Natural Language Processing,R164317,Named Entity Recognition for Bacterial Type IV Secretion Systems,S655935,R164318,Entity types,R164328,Molecular functions,"Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.",TRUE,noun phrase
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S661187,R165871,Ontologies used,R165872,NCBI Taxonomy,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun phrase
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S661227,R165886,Ontologies used,R165872,NCBI Taxonomy,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun phrase
R145261,Natural Language Processing,R161707,Just Add Functions: A Neural-Symbolic Language Model,S686455,R171928,description,L462532,Neural network language models (NNLMs),"Neural network language models (NNLMs) have achieved ever-improving accuracy due to more sophisticated architectures and increasing amounts of training data. However, the inductive bias of these models (formed by the distributional hypothesis of language), while ideally suited to modeling most running text, results in key limitations for today's models. In particular, the models often struggle to learn certain spatial, temporal, or quantitative relationships, which are commonplace in text and are second-nature for human readers. Yet, in many cases, these relationships can be encoded with simple mathematical or logical expressions. How can we augment today's neural models with such encodings?In this paper, we propose a general methodology to enhance the inductive bias of NNLMs by incorporating simple functions into a neural architecture to form a hierarchical neural-symbolic language model (NSLM). These functions explicitly encode symbolic deterministic relationships to form probability distributions over words. We explore the effectiveness of this approach on numbers and geographic locations, and show that NSLMs significantly reduce perplexity in small-corpus language modeling, and that the performance improvement persists for rare tokens even on much larger corpora. The approach is simple and general, and we discuss how it can be applied to other word classes beyond numbers and geography.",TRUE,noun phrase
R145261,Natural Language Processing,R162526,Overview of the BioCreative VI text-mining services for Kinome Curation Track,S686784,R172039,data source,R162532,neXtProt database,"Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants’ systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.",TRUE,noun phrase
R145261,Natural Language Processing,R163595,Overview of the Bacteria Biotope Task at BioNLP Shared Task 2016,S661188,R165871,Ontologies used,R165873,OntoBiotope ontology,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2016, which follows the previous 2013 and 2011 editions. The task focuses on the extraction of the locations (biotopes and geographical places) of bacteria from PubMed abstracts and the characterization of bacteria and their associated habitats with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on bacteria habitats for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, the challenge organization, and the evaluation metrics. We also provide an analysis of the results obtained by participants.",TRUE,noun phrase
R145261,Natural Language Processing,R163702,Bacteria Biotope at BioNLP Open Shared Tasks 2019,S661229,R165886,Ontologies used,R165873,OntoBiotope ontology,"This paper presents the fourth edition of the Bacteria Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on the extraction of the locations and phenotypes of microorganisms from PubMed abstracts and full-text excerpts, and the characterization of these entities with respect to reference knowledge sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by the importance of the knowledge on biodiversity for fundamental research and applications in microbiology. The paper describes the different proposed subtasks, the corpus characteristics, and the challenge organization. We also provide an analysis of the results obtained by participants, and inspect the evolution of the results since the last edition in 2016.",TRUE,noun phrase
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S686654,R171968,Data coverage,R171989,Patent titles,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts as to whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). The top run in terms of Matthew's correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.",TRUE,noun phrase
R145261,Natural Language Processing,R172664,End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF,S689027,R172666,Result,R172670,POS tagging,"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both data sets --- 97.55% accuracy for POS tagging and 91.21% F1 for NER.",TRUE,noun phrase
R145261,Natural Language Processing,R162546,Overview of the BioCreative VI Precision Medicine Track: mining protein interactions and mutations for precision medicine,S648579,R162553,subtasks,R162555,relation extraction task,"Abstract The Precision Medicine Initiative is a multicenter effort aiming at formulating personalized treatments leveraging on individual patient data (clinical, genome sequence and functional genomic data) together with the information in large knowledge bases (KBs) that integrate genome annotation, disease association studies, electronic health records and other data types. The biomedical literature provides a rich foundation for populating these KBs, reporting genetic and molecular interactions that provide the scaffold for the cellular regulatory systems and detailing the influence of genetic variants in these interactions. The goal of BioCreative VI Precision Medicine Track was to extract this particular type of information and was organized in two tasks: (i) document triage task, focused on identifying scientific literature containing experimentally verified protein–protein interactions (PPIs) affected by genetic mutations and (ii) relation extraction task, focused on extracting the affected interactions (protein pairs). To assist system developers and task participants, a large-scale corpus of PubMed documents was manually annotated for this task. Ten teams worldwide contributed 22 distinct text-mining models for the document triage task, and six teams worldwide contributed 14 different text-mining systems for the relation extraction task. When comparing the text-mining system predictions with human annotations, for the triage task, the best F-score was 69.06%, the best precision was 62.89%, the best recall was 98.0% and the best average precision was 72.5%. For the relation extraction task, when taking homologous genes into account, the best F-score was 37.73%, the best precision was 46.5% and the best recall was 54.1%. Submitted systems explored a wide range of methods, from traditional rule-based, statistical and machine learning systems to state-of-the-art deep learning methods. Given the level of participation and the individual team results we find the precision medicine track to be successful in engaging the text-mining research community. In the meantime, the track produced a manually annotated corpus of 5509 PubMed documents developed by BioGRID curators and relevant for precision medicine. The data set is freely available to the community, and the specific interactions have been integrated into the BioGRID data set. In addition, this challenge provided the first results of automatically identifying PubMed articles that describe PPI affected by mutations, as well as extracting the affected relations from those articles. Still, much progress is needed for computer-assisted precision medicine text mining to become mainstream. Future work should focus on addressing the remaining technical challenges and incorporating the practical benefits of text-mining tools into real-world precision medicine information-related curation.",TRUE,noun phrase
R145261,Natural Language Processing,R163221,Extraction of Information from the Text of Chemical Patents. 1. Identification of Specific Chemical Names,S650943,R163223,Data formats,R163242,SGML (Standard Generalized Markup Language),"Much attention has been paid to translating isolated chemical names into forms such as connection tables, but less effort has been expended in identifying substance names in running text to make them available for processing. The requirement for automatic name identification becomes a more urgent priority today, not the least in light of the inherent importance of patents and the increasing complexity of newly synthesized substances and, with these, the need for error-free processing of information from patent and other documents. The elaboration of a methodology for isolating substance names in the text of English-language patents is described here, using, in part, the SGML (Standard Generalized Markup Language) of the patent text as an aid to this process. Evaluation of the procedures, which are still at an early stage of development, demonstrates that even simple methods can achieve very high degrees of success.",TRUE,noun phrase
R145261,Natural Language Processing,R172139,BioCreative VII–Task 3: Automatic Extraction of Medication Names in Tweets,S687089,R172140,Data domains,R172135,Social media,"We present the BioCreative VII Task 3 which focuses on drug names extraction from tweets. Recognized to provide unique insights into population health, detecting health related tweets is notoriously challenging for natural language processing tools. Tweets are written about any and all topics, most of them not related to health. Additionally, they are written with little regard for proper grammar, are inherently colloquial, and are almost never proof-read. Given a tweet, task 3 consists of detecting if the tweet has a mention of a drug name and, if so, extracting the span of the drug mention. We made available 182,049 tweets publicly posted by 212 Twitter users with all drug mentions manually annotated. This corpus exhibits the natural and strongly imbalanced distribution of positive tweets, with only 442 tweets (0.2%) mentioning a drug. This task was an opportunity for participants to evaluate methods robust to class-imbalance beyond the simple lexical match. A total of 65 teams registered, and 16 teams submitted a system run. We summarize the corpus and the tools created for the challenge, which is freely available at https://biocreative.bioinformatics.udel.edu/tasks/biocreativevii/track-3/. We analyze the methods and the results of the competing systems with a focus on learning from class-imbalanced data. Keywords—social media; pharmacovigilance; named entity recognition; drug name extraction; class-imbalance.",TRUE,noun phrase
R145261,Natural Language Processing,R146741,The Stanford CoreNLP Natural Language Processing Toolkit,S629917,R146743,Tool,R157082,Stanford CoreNLP,"We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.",TRUE,noun phrase
R145261,Natural Language Processing,R146741,The Stanford CoreNLP Natural Language Processing Toolkit,S649598,R162883,Tool name,R162884,Stanford CoreNLP toolkit,"We describe the design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis. This toolkit is quite widely used, both in the research NLP community and also among commercial and government users of open source NLP technology. We suggest that this follows from a simple, approachable design, straightforward interfaces, the inclusion of robust and good quality analysis components, and not requiring use of a large amount of associated baggage.",TRUE,noun phrase
R145261,Natural Language Processing,R146357,The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources,S586054,R146379,Dataset name,R146368,STEM-ECR v1.0 dataset,"We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.",TRUE,noun phrase
R145261,Natural Language Processing,R164551,BioNLP shared Task 2013 – An Overview of the Bacteria Biotope Task,S659857,R165593,Data coverage,R164461,Web pages,"This paper presents the Bacteria Biotope task of the BioNLP Shared Task 2013, which follows BioNLP-ST-11. The Bacteria Biotope task aims to extract the location of bacteria from scientific web pages and to characterize these locations with respect to the OntoBiotope ontology. Bacteria locations are crucial knowledge in biology for phenotype studies. The paper details the corpus specifications, the evaluation metrics, and it summarizes and discusses the participant results.",TRUE,noun phrase
R145261,Natural Language Processing,R172672,Named Entity Recognition with Bidirectional LSTM-CNNs,S689042,R172674,Result,R172677,F1 score,"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",TRUE,noun phrase
R137,Numerical Analysis/Scientific Computing,R38573,ACM-BCB '17 Tutorial: Robotics-inspired Algorithms for Modeling Protein Structures and Motions,S126471,R38575,Material,R38577,Computational Structural Biology,"With biomolecular structure recognized as central to understanding mechanisms in the cell, computational chemists and biophysicists have spent significant efforts on modeling structure and dynamics. While significant advances have been made, particularly in the design of sophisticated energetic models and molecular representations, such efforts are experiencing diminishing returns. One of the culprits is low exploration capability. The impasse has attracted AI researchers to offer adaptations of robot motion planning algorithms for modeling biomolecular structures and motions. This tutorial introduces students and researchers to robotics-inspired treatments and methodologies for understanding and elucidating the role of structure and dynamics in the function of biomolecules. The presentation is enhanced via an open-source software developed in the Shehu Computational Biology laboratory. The software allows researchers to integrate themselves in a new research domain and drive further research via plug-and-play capabilities. The hands-on approach in the tutorial benefits both students and senior researchers keen to make contributions in computational structural biology.",TRUE,noun phrase
R137,Numerical Analysis/Scientific Computing,R35067,"Understanding, Categorizing and Predicting Semantic Image-Text Relations",S122401,R35069,Has metric,R35071,Cross-Modal Mutual Information,"Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has a great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., ""illustration"" or ""anchorage"") and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach.",TRUE,noun phrase
R137,Numerical Analysis/Scientific Computing,R35067,"Understanding, Categorizing and Predicting Semantic Image-Text Relations",S122404,R35069,Has implementation,R35049,Multimodal Embedding,"Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has a great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., ""illustration"" or ""anchorage"") and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach.",TRUE,noun phrase
R137,Numerical Analysis/Scientific Computing,R35067,"Understanding, Categorizing and Predicting Semantic Image-Text Relations",S122402,R35069,Has metric,R35072,Semantic Correlation,"Two modalities are often used to convey information in a complementary and beneficial manner, e.g., in online news, videos, educational resources, or scientific publications. The automatic understanding of semantic correlations between text and associated images as well as their interplay has a great potential for enhanced multimodal web search and recommender systems. However, automatic understanding of multimodal information is still an unsolved research problem. Recent approaches such as image captioning focus on precisely describing visual content and translating it to text, but typically address neither semantic interpretations nor the specific role or purpose of an image-text constellation. In this paper, we go beyond previous work and investigate, inspired by research in visual communication, useful semantic image-text relations for multimodal information retrieval. We derive a categorization of eight semantic image-text classes (e.g., ""illustration"" or ""anchorage"") and show how they can systematically be characterized by a set of three metrics: cross-modal mutual information, semantic correlation, and the status relation of image and text. Furthermore, we present a deep learning system to predict these classes by utilizing multimodal embeddings. To obtain a sufficiently large amount of training data, we have automatically collected and augmented data from a variety of datasets and web resources, which enables future research on this topic. Experimental results on a demanding test set demonstrate the feasibility of the approach.",TRUE,noun phrase
R137,Numerical Analysis/Scientific Computing,R38573,ACM-BCB '17 Tutorial: Robotics-inspired Algorithms for Modeling Protein Structures and Motions,S126470,R38575,Material,R38576,Shehu Computational Biology laboratory,"With biomolecular structure recognized as central to understanding mechanisms in the cell, computational chemists and biophysicists have spent significant efforts on modeling structure and dynamics. While significant advances have been made, particularly in the design of sophisticated energetic models and molecular representations, such efforts are experiencing diminishing returns. One of the culprits is low exploration capability. The impasse has attracted AI researchers to offer adaptations of robot motion planning algorithms for modeling biomolecular structures and motions. This tutorial introduces students and researchers to robotics-inspired treatments and methodologies for understanding and elucidating the role of structure and dynamics in the function of biomolecules. The presentation is enhanced via an open-source software developed in the Shehu Computational Biology laboratory. The software allows researchers to integrate themselves in a new research domain and drive further research via plug-and-play capabilities. The hands-on approach in the tutorial benefits both students and senior researchers keen to make contributions in computational structural biology.",TRUE,noun phrase
R172,Oceanography,R160144,Nitrous oxide emissions from the Arabian Sea,S638064,R160172,Region of data collection,L437076,Arabian Sea,"Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65°E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99–103% during the intermonsoon and 103–230% during the SW monsoon. Computed annual emissions of 0.8–1.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.",TRUE,noun phrase
R172,Oceanography,R160146,Variabilities in the fluxes and annual emissions of nitrous oxide from the Arabian Sea,S638084,R160173,Region of data collection,L437095,Arabian Sea,"Extensive measurements of nitrous oxide (N2O) have been made during April–May 1994 (intermonsoon), February–March 1995 (northeast monsoon), July–August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind‐driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean‐to‐atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm−2 s−1 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56–0.76 Tg N2O per year which constitutes 13–17% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.",TRUE,noun phrase
R172,Oceanography,R160152,A revised nitrogen budget for the Arabian Sea,S637944,R160164,Region of data collection,L436978,Arabian Sea,"Despite its importance for the global oceanic nitrogen (N) cycle, considerable uncertainties exist about the N fluxes of the Arabian Sea. On the basis of our recent measurements during the German Arabian Sea Process Study as part of the Joint Global Ocean Flux Study (JGOFS) in 1995 and 1997, we present estimates of various N sources and sinks such as atmospheric dry and wet depositions of N aerosols, pelagic denitrification, nitrous oxide (N2O) emissions, and advective N input from the south. Additionally, we estimated the N burial in the deep sea and the sedimentary shelf denitrification. On the basis of our measurements and literature data, the N budget for the Arabian Sea was reassessed. It is dominated by the N loss due to denitrification, which is balanced by the advective input of N from the south. The role of N fixation in the Arabian Sea is still difficult to assess owing to the small database available; however, there are hints that it might be more important than previously thought. Atmospheric N depositions are important on a regional scale during the intermonsoon in the central Arabian Sea; however, they play only a minor role for the overall N cycling. Emissions of N2O and ammonia, deep‐sea N burial, and N inputs by rivers and marginal seas (i.e., Persian Gulf and Red Sea) are of minor importance. We found that the magnitude of the sedimentary denitrification at the shelf might be ∼17% of the total denitrification in the Arabian Sea, indicating that the shelf sediments might be of considerably greater importance for the N cycling in the Arabian Sea than previously thought. Sedimentary and pelagic denitrification together demand ∼6% of the estimated particulate organic nitrogen export flux from the photic zone. 
The main northward transport of N into the Arabian Sea occurs in the intermediate layers, indicating that the N cycle of the Arabian Sea might be sensitive to variations of the intermediate water circulation of the Indian Ocean.",TRUE,noun phrase
R172,Oceanography,R160155,Nitrous oxide cycling in the Arabian Sea,S637962,R160165,Region of data collection,L436993,Arabian Sea,"Depth profiles of dissolved nitrous oxide (N2O) were measured in the central and western Arabian Sea during four cruises in May and July–August 1995 and May–July 1997 as part of the German contribution to the Arabian Sea Process Study of the Joint Global Ocean Flux Study. The vertical distribution of N2O in the water column on a transect along 65°E showed a characteristic double-peak structure, indicating production of N2O associated with steep oxygen gradients at the top and bottom of the oxygen minimum zone. We propose a general scheme consisting of four ocean compartments to explain the N2O cycling as a result of nitrification and denitrification processes in the water column of the Arabian Sea. We observed a seasonal N2O accumulation at 600–800 m near the shelf break in the western Arabian Sea. We propose that, in the western Arabian Sea, N2O might also be formed during bacterial oxidation of organic matter by the reduction of IO3 − to I−, indicating that the biogeochemical cycling of N2O in the Arabian Sea during the SW monsoon might be more complex than previously thought. A compilation of sources and sinks of N2O in the Arabian Sea suggested that the N2O budget is reasonably balanced.",TRUE,noun phrase
R172,Oceanography,R160158,Nitrous oxide emissions from the Arabian Sea: A synthesis,S637972,R160166,Region of data collection,L437002,Arabian Sea,"Abstract. We computed high-resolution (1° latitude × 1° longitude) seasonal and annual nitrous oxide (N2O) concentration fields for the Arabian Sea surface layer using a database containing more than 2400 values measured between December 1977 and July 1997. N2O concentrations are highest during the southwest (SW) monsoon along the southern Indian continental shelf. Annual emissions range from 0.33 to 0.70 Tg N2O and are dominated by fluxes from coastal regions during the SW and northeast monsoons. Our revised estimate for the annual N2O flux from the Arabian Sea is much more tightly constrained than the previous consensus derived using averaged in-situ data from a smaller number of studies. However, the tendency to focus on measurements in locally restricted features in combination with insufficient seasonal data coverage leads to considerable uncertainties of the concentration fields and thus in the flux estimates, especially in the coastal zones of the northern and eastern Arabian Sea. The overall mean relative error of the annual N2O emissions from the Arabian Sea was estimated to be at least 65%.",TRUE,noun phrase
R172,Oceanography,R160735,Strong CO2emissions from the Arabian Sea during south-west monsoon,S641231,R160736,Region of data collection,L438902,Arabian Sea,"The partial pressure of CO2 (pCO2) was measured during the 1995 South‐West Monsoon in the Arabian Sea. The Arabian Sea was characterized throughout by a moderate supersaturation of 12–30 µatm. The stable atmospheric pCO2 level was around 345 µatm. An extreme supersaturation was found in areas of coastal upwelling off the Omani coast with pCO2 peak values in surface waters of 750 µatm. Such two‐fold saturation (218%) is rarely found elsewhere in open ocean environments. We also encountered cold upwelled water 300 nm off the Omani coast in the region of Ekman pumping, which was also characterized by a strongly elevated seawater pCO2 of up to 525 µatm. Due to the strong monsoonal wind forcing the Arabian Sea as a whole and the areas of upwelling in particular represent a significant source of atmospheric CO2 with flux densities from around 2 mmol m−2 d−1 in the open ocean to 119 mmol m−2 d−1 in coastal upwelling. Local air masses passing the area of coastal upwelling showed increasing CO2 concentrations, which are consistent with such strong emissions.",TRUE,noun phrase
R172,Oceanography,R155530,Nitrogen budgets following a Lagrangian strategy in the Western Tropical South Pacific Ocean: the prominent role of N<sub>2</sub> fixation (OUTPACE cruise),S623019,R155531,Season,L428922,Austral summer,"Abstract. We performed N budgets at three stations in the western tropical South Pacific (WTSP) Ocean during austral summer conditions (Feb. Mar. 2015) and quantified all major N fluxes both entering the system (N2 fixation, nitrate eddy diffusion, atmospheric deposition) and leaving the system (PN export). Thanks to a Lagrangian strategy, we sampled the same water mass for the entire duration of each long duration (5 days) station, allowing to consider only vertical exchanges. Two stations located at the western end of the transect (Melanesian archipelago (MA) waters, LD A and LD B) were oligotrophic and characterized by a deep chlorophyll maximum (DCM) located at 51 ± 18 m and 81 ± 9 m at LD A and LD B. Station LD C was characterized by a DCM located at 132 ± 7 m, representative of the ultra-oligotrophic waters of the South Pacific gyre (SPG water). N2 fixation rates were extremely high at both LD A (593 ± 51 µmol N m−2 d−1) and LD B (706 ± 302 µmol N m−2 d−1), and the diazotroph community was dominated by Trichodesmium. N2 fixation rates were lower (59 ± 16 µmol N m−2 d−1) at LD C and the diazotroph community was dominated by unicellular N2-fixing cyanobacteria (UCYN). At all stations, N2 fixation was the major source of new N (> 90 %) before atmospheric deposition and upward nitrate fluxes induced by turbulence. N2 fixation contributed circa 8–12 % of primary production in the MA region and 3 % in the SPG water and sustained nearly all new primary production at all stations. The e-ratio (e-ratio = PC export/PP) was maximum at LD A (9.7 %) and was higher than the e-ratio in most studied oligotrophic regions (~ 1 %), indicating a high efficiency of the WTSP to export carbon relative to primary production. 
The direct export of diazotrophs assessed by qPCR of the nifH gene in sediment traps represented up to 30.6 % of the PC export at LD A, while their contribution was 5 and
",TRUE,noun phrase
R172,Oceanography,R160757,Chemoautotrophy in the redox transition zone of the Cariaco Basin: A significant midwater source of organic carbon production,S641477,R160759,Region of data collection,L439100,Cariaco Basin,"During the CARIACO time series program, microbial standing stocks, bacterial production, and acetate turnover were consistently elevated in the redox transition zone (RTZ) of the Cariaco Basin, the depth interval (~240–450 m) of steepest gradient in oxidation‐reduction potential. Anomalously high fluxes of particulate carbon were captured in sediment traps below this zone (455 m) in 16 of 71 observations. Here we present new evidence that bacterial chemoautotrophy, fueled by reduced sulfur species, supports an active secondary microbial food web in the RTZ and is potentially a large midwater source of labile, chemically unique, sedimenting biogenic debris to the basin's interior. Dissolved inorganic carbon assimilation (27–159 mmol C m−2 d−1) in this zone was equivalent to 10%–333% of contemporaneous primary production, depending on the season. However, vertical diffusion rates to the RTZ of electron donors and electron acceptors were inadequate to support this production. Therefore, significant lateral intrusions of oxic waters, mixing processes, or intensive cycling of C, S, N, Mn, and Fe across the RTZ are necessary to balance electron equivalents. Chemoautotrophic production appears to be decoupled temporally from short‐term surface processes, such as seasonal upwelling and blooms, and potentially is more responsive to long‐term changes in surface productivity and deep‐water ventilation on interannual to decadal timescales. Findings suggest that midwater production of organic carbon may contribute a unique signature to the basin's sediment record, thereby altering its paleoclimatological interpretation.",TRUE,noun phrase
R172,Oceanography,R109573,N2 Fixation in the Eastern Arabian Sea: Probable Role of Heterotrophic Diazotrophs,S500245,R109591,Region of data collection,L361950,Eastern Arabian Sea,"Biogeochemical implications of global imbalance between the rates of marine dinitrogen (N2) fixation and denitrification have spurred us to understand the former process in the Arabian Sea, which contributes considerably to the global nitrogen budget. Heterotrophic bacteria have gained recent appreciation for their major role in marine N budget by fixing a significant amount of N2. Accordingly, we hypothesize a probable role of heterotrophic diazotrophs from the 15N2 enriched isotope labelling dark incubations that witnessed rates comparable to the light incubations in the eastern Arabian Sea during spring 2010. Maximum areal rates (8 mmol N m-2 d-1) were the highest ever observed anywhere in world oceans. Our results suggest that the eastern Arabian Sea gains ~92% of its new nitrogen through N2 fixation. Our results are consistent with the observations made in the same region in preceding year, i.e., during the spring of 2009.",TRUE,noun phrase
R172,Oceanography,R155573,Dynamic responses of picophytoplankton to physicochemical variation in the eastern Indian Ocean,S623486,R155575,Region of data collection,L429317,Eastern Indian Ocean,"Abstract Picophytoplankton were investigated during spring 2015 and 2016 extending from near‐shore coastal waters to oligotrophic open waters in the eastern Indian Ocean (EIO). They were typically composed of Prochlorococcus (Pro), Synechococcus (Syn), and picoeukaryotes (PEuks). Pro dominated most regions of the entire EIO and were approximately 1–2 orders of magnitude more abundant than Syn and PEuks. Under the influence of physicochemical conditions induced by annual variations of circulations and water masses, no coherent abundance and horizontal distributions of picophytoplankton were observed between spring 2015 and 2016. Although previous studies reported the limited effects of nutrients and heavy metals around coastal waters or upwelling zones could constrain Pro growth, Pro abundance showed strong positive correlation with nutrients, indicating the increase in nutrient availability particularly in the oligotrophic EIO could appreciably elevate their abundance. The exceptional appearance of picophytoplankton with high abundance along the equator appeared to be associated with the advection processes supported by the Wyrtki jets. For vertical patterns of picophytoplankton, a simple conceptual model was built based upon physicochemical parameters. However, Pro and PEuks simultaneously formed a subsurface maximum, while Syn generally restricted to the upper waters, significantly correlating with the combined effects of temperature, light, and nutrient availability. The average chlorophyll a concentrations (Chl a) of picophytoplankton accounted for above 49.6% and 44.9% of the total Chl a during both years, respectively, suggesting that picophytoplankton contributed a significant proportion of the phytoplankton community in the whole EIO.",TRUE,noun phrase
R172,Oceanography,R155540,Dinitrogen Fixation Across Physico‐Chemical Gradients of the Eastern Tropical North Pacific Oxygen Deficient Zone,S623149,R155542,Region of data collection,L429036,Eastern tropical north Pacific Ocean,"The Eastern Tropical North Pacific Ocean hosts one of the world's largest oceanic oxygen deficient zones (ODZs). Hot spots for reactive nitrogen (Nr) removal processes, ODZs generate conditions proposed to promote Nr inputs via dinitrogen (N2) fixation. In this study, we quantified N2 fixation rates by 15N tracer bioassay across oxygen, nutrient, and light gradients within and adjacent to the ODZ. Within subeuphotic oxygen‐deplete waters, N2 fixation was largely undetectable; however, addition of dissolved organic carbon stimulated N2 fixation in suboxic (<20 μmol/kg O2) waters, suggesting that diazotroph communities are likely energy limited or carbon limited and able to fix N2 despite high ambient concentrations of dissolved inorganic nitrogen. Elevated rates (>9 nmol N·L−1·day−1) were also observed in suboxic waters near volcanic islands where N2 fixation was quantifiable to 3,000 m. Within the overlying euphotic waters, N2 fixation rates were highest near the continent, exceeding 500 μmol N·m−2·day−1 at one third of inshore stations. These findings support the expansion of the known range of diazotrophs to deep, cold, and dissolved inorganic nitrogen‐replete waters. Additionally, this work bolsters calls for the reconsideration of ocean margins as important sources of Nr. Despite high rates at some inshore stations, regional N2 fixation appears insufficient to compensate for Nr loss locally as observed previously in the Eastern Tropical South Pacific ODZ.",TRUE,noun phrase
R172,Oceanography,R141328,High new production in the Bay of Bengal: Possible causes and implications,S565216,R141329,Sampling depth covered (m),R108808,Euphotic zone,"We report the first measurements of new production (15N tracer technique), the component of primary production that sustains on extraneous nutrient inputs to the euphotic zone, in the Bay of Bengal. Experiments done in two different seasons consistently show high new production (averaging around 4 mmol N m−2 d−1 during post monsoon and 5.4 mmol N m−2 d−1 during pre monsoon), validating the earlier conjecture of high new production, based on pCO2 measurements, in the Bay. Averaged over annual time scales, higher new production could cause higher rate of removal of organic carbon. This could also be one of the reasons for comparable organic carbon fluxes observed in the sediment traps of the Bay of Bengal and the eastern Arabian Sea. Thus, oceanic regions like Bay of Bengal may play a more significant role in removing the excess CO2 from the atmosphere than hitherto believed.",TRUE,noun phrase
R172,Oceanography,R141337,Nitrogen Uptake in the Northeastern Arabian Sea during Winter Cooling,S565394,R141339,Sampling depth covered (m),R108808,Euphotic zone,"The uptake of dissolved inorganic nitrogen by phytoplankton is an important aspect of the nitrogen cycle of oceans. Here, we present nitrate () and ammonium () uptake rates in the northeastern Arabian Sea using tracer technique. In this relatively underexplored region, productivity is high during winter due to supply of nutrients by convective mixing caused by the cooling of the surface by the northeast monsoon winds. Studies done during different months (January and late February-early March) of the northeast monsoon 2003 revealed a fivefold increase in the average euphotic zone integrated uptake from January (2.3 mmolN ) to late February-early March (12.7 mmolN ). The -ratio during January appeared to be affected by the winter cooling effect and increased by more than 50% from the southernmost station to the northern open ocean stations, indicating hydrographic and meteorological control. Estimates of residence time suggested that entrained in the water column during January contributed to the development of blooms during late February-early March.",TRUE,noun phrase
R172,Oceanography,R109394,Heterotrophic bacteria as major nitrogen fixers in the euphotic zone of the Indian Ocean,S499235,R109395,Water column zone,L361288,Euphotic zone,"Diazotrophy in the Indian Ocean is poorly understood compared to that in the Atlantic and Pacific Oceans. We first examined the basin‐scale community structure of diazotrophs and their nitrogen fixation activity within the euphotic zone during the northeast monsoon period along about 69°E from 17°N to 20°S in the oligotrophic Indian Ocean, where a shallow nitracline (49–59 m) prevailed widely and the sea surface temperature (SST) was above 25°C. Phosphate was detectable at the surface throughout the study area. The dissolved iron concentration and the ratio of iron to nitrate + nitrite at the surface were significantly higher in the Arabian Sea than in the equatorial and southern Indian Ocean. Nitrogen fixation in the Arabian Sea (24.6–47.1 μmolN m−2 d−1) was also significantly greater than that in the equatorial and southern Indian Ocean (6.27–16.6 μmolN m−2 d−1), indicating that iron could control diazotrophy in the Indian Ocean. Phylogenetic analysis of nifH showed that most diazotrophs belonged to the Proteobacteria and that cyanobacterial diazotrophs were absent in the study area except in the Arabian Sea. Furthermore, nitrogen fixation was not associated with light intensity throughout the study area. These results are consistent with nitrogen fixation in the Indian Ocean, being largely performed by heterotrophic bacteria and not by cyanobacteria. The low cyanobacterial diazotrophy was attributed to the shallow nitracline, which is rarely observed in the Pacific and Atlantic oligotrophic oceans. Because the shallower nitracline favored enhanced upward nitrate flux, the competitive advantage of cyanobacterial diazotrophs over nondiazotrophic phytoplankton was not as significant as it is in other oligotrophic oceans.",TRUE,noun phrase
R172,Oceanography,R147181,Contribution of picoplankton to the total particulate organic carbon (POC) concentration in the eastern South Pacific,S589900,R147182,Material/Method,L410655,Flow cytometry,"Abstract. Prochlorococcus, Synechococcus, picophytoeukaryotes and bacterioplankton abundances and contributions to the total particulate organic carbon concentration, derived from the total particle beam attenuation coefficient (cp), were determined across the eastern South Pacific between the Marquesas Islands and the coast of Chile. All flow cytometrically derived abundances decreased towards the hyper-oligotrophic centre of the gyre and were highest at the coast, except for Prochlorococcus, which was not detected under eutrophic conditions. Temperature and nutrient availability appeared important in modulating picophytoplankton abundance, according to the prevailing trophic conditions. Although the non-vegetal particles tended to dominate the cp signal everywhere along the transect (50 to 83%), this dominance seemed to weaken from oligo- to eutrophic conditions, the contributions by vegetal and non-vegetal particles being about equal under mature upwelling conditions. Spatial variability in the vegetal compartment was more important than the non-vegetal one in shaping the water column particle beam attenuation coefficient. Spatial variability in picophytoplankton biomass could be traced by changes in both total chlorophyll a (i.e. mono + divinyl chlorophyll a) concentration and cp. Finally, picophytoeukaryotes contributed ~38% on average to the total integrated phytoplankton carbon biomass or vegetal attenuation signal along the transect, as determined by size measurements (i.e. equivalent spherical diameter) on cells sorted by flow cytometry and optical theory. Although there are some uncertainties associated with these estimates, the new approach used in this work further supports the idea that picophytoeukaryotes play a dominant role in carbon cycling in the upper open ocean, even under hyper-oligotrophic conditions.",TRUE,noun phrase
R172,Oceanography,R160725,"Inverse estimates of anthropogenic CO2 uptake, transport, and storage by the ocean: AIR-SEA EXCHANGE OF ANTHROPOGENIC CARBON",S641110,R160726,Method,L438798,Green's function inversion method,"Regional air‐sea fluxes of anthropogenic CO2 are estimated using a Green's function inversion method that combines data‐based estimates of anthropogenic CO2 in the ocean with information about ocean transport and mixing from a suite of Ocean General Circulation Models (OGCMs). In order to quantify the uncertainty associated with the estimated fluxes owing to modeled transport and errors in the data, we employ 10 OGCMs and three scenarios representing biases in the data‐based anthropogenic CO2 estimates. On the basis of the prescribed anthropogenic CO2 storage, we find a global uptake of 2.2 ± 0.25 Pg C yr−1, scaled to 1995. This error estimate represents the standard deviation of the models weighted by a CFC‐based model skill score, which reduces the error range and emphasizes those models that have been shown to reproduce observed tracer concentrations most accurately. The greatest anthropogenic CO2 uptake occurs in the Southern Ocean and in the tropics. The flux estimates imply vigorous northward transport in the Southern Hemisphere, northward cross‐equatorial transport, and equatorward transport at high northern latitudes. Compared with forward simulations, we find substantially more uptake in the Southern Ocean, less uptake in the Pacific Ocean, and less global uptake. The large‐scale spatial pattern of the estimated flux is generally insensitive to possible biases in the data and the models employed. However, the global uptake scales approximately linearly with changes in the global anthropogenic CO2 inventory. Considerable uncertainties remain in some regions, particularly the Southern Ocean.",TRUE,noun phrase
R172,Oceanography,R160712,Sea–air CO2 fluxes in the Indian Ocean between 1990 and 2009,S640988,R160714,Region of data collection,L438696,Indian Ocean,"Abstract. The Indian Ocean (44° S–30° N) plays an important role in the global carbon cycle, yet it remains one of the most poorly sampled ocean regions. Several approaches have been used to estimate net sea–air CO2 fluxes in this region: interpolated observations, ocean biogeochemical models, atmospheric and ocean inversions. As part of the RECCAP (REgional Carbon Cycle Assessment and Processes) project, we combine these different approaches to quantify and assess the magnitude and variability in Indian Ocean sea–air CO2 fluxes between 1990 and 2009. Using all of the models and inversions, the median annual mean sea–air CO2 uptake of −0.37 ± 0.06 PgC yr−1 is consistent with the −0.24 ± 0.12 PgC yr−1 calculated from observations. The fluxes from the southern Indian Ocean (18–44° S; −0.43 ± 0.07 PgC yr−1) are similar in magnitude to the annual uptake for the entire Indian Ocean. All models capture the observed pattern of fluxes in the Indian Ocean with the following exceptions: underestimation of upwelling fluxes in the northwestern region (off Oman and Somalia), overestimation in the northeastern region (Bay of Bengal) and underestimation of the CO2 sink in the subtropical convergence zone. These differences were mainly driven by lack of atmospheric CO2 data in atmospheric inversions, and poor simulation of monsoonal currents and freshwater discharge in ocean biogeochemical models. Overall, the models and inversions do capture the phase of the observed seasonality for the entire Indian Ocean but overestimate the magnitude. The predicted sea–air CO2 fluxes by ocean biogeochemical models (OBGMs) respond to seasonal variability with strong phase lags with reference to climatological CO2 flux, whereas the atmospheric inversions predicted an order of magnitude higher seasonal flux than OBGMs. The simulated interannual variability by the OBGMs is weaker than that found by atmospheric inversions. Prediction of such weak interannual variability in CO2 fluxes by atmospheric inversions was mainly caused by a lack of atmospheric data in the Indian Ocean. The OBGM models suggest a small strengthening of the sink over the period 1990–2009 of −0.01 PgC decade−1. This is inconsistent with the observations in the southwestern Indian Ocean that shows the growth rate of oceanic pCO2 was faster than the observed atmospheric CO2 growth, a finding attributed to the trend of the Southern Annular Mode (SAM) during the 1990s.",TRUE,noun phrase
R172,Oceanography,R160723,"Ocean carbon cycling in the Indian Ocean: 1. Spatiotemporal variability of inorganic carbon and air-sea CO2 gas exchange: INDIAN OCEAN CARBON CYCLE, 1",S641085,R160724,Region of data collection,L438777,Indian Ocean,"The spatiotemporal variability of upper ocean inorganic carbon parameters and air‐sea CO2 exchange in the Indian Ocean was examined using inorganic carbon data collected as part of the World Ocean Circulation Experiment (WOCE) cruises in 1995. Multiple linear regression methods were used to interpolate and extrapolate the temporally and geographically limited inorganic carbon data set to the entire Indian Ocean basin using other climatological hydrographic and biogeochemical data. The spatiotemporal distributions of total carbon dioxide (TCO2), alkalinity, and seawater pCO2 were evaluated for the Indian Ocean and regions of interest including the Arabian Sea, Bay of Bengal, and 10°N–35°S zones. The Indian Ocean was a net source of CO2 to the atmosphere, and a net sea‐to‐air CO2 flux of +237 ± 132 Tg C yr−1 (+0.24 Pg C yr−1) was estimated. Regionally, the Arabian Sea, Bay of Bengal, and 10°N–10°S zones were perennial sources of CO2 to the atmosphere. In the 10°S–35°S zone, the CO2 sink or source status of the surface ocean shifts seasonally, although the region is a net oceanic sink of atmospheric CO2.",TRUE,noun phrase
R172,Oceanography,R147169,Relative influence of nitrogen and phosphorous availability on phytoplankton physiology and productivity in the oligotrophic sub-tropical North Atlantic Ocean,S589922,R147185,Region of data collection,L410668,North Atlantic Ocean,"Nutrient addition bioassay experiments were performed in the low‐nutrient, low‐chlorophyll oligotrophic subtropical North Atlantic Ocean to investigate the influence of nitrogen (N), phosphorus (P), and/or iron (Fe) on phytoplankton physiology and the limitation of primary productivity or picophytoplankton biomass. Additions of N alone resulted in 1.5‐2 fold increases in primary productivity and chlorophyll after 48 h, with larger (~threefold) increases observed for the addition of P in combination with N (NP). Measurements of cellular chlorophyll contents permitted evaluation of the physiological response of the photosynthetic apparatus to N and P additions in three picophytoplankton groups. In both Prochlorococcus and the picoeukaryotes, cellular chlorophyll increased by similar amounts in N and NP treatments relative to all other treatments, suggesting that pigment synthesis was N limited. In contrast, the increase of cellular chlorophyll was greater in NP than in N treatments in Synechococcus, suggestive of NP co‐limitation. Relative increases in cellular nucleic acid were also only observed in Synechococcus for NP treatments, indicating co‐limitation of net nucleic acid synthesis. A lack of response to relief of nutrient stress for the efficiency of photosystem II photochemistry, Fv :Fm, suggests that the low nutrient supply to this region resulted in a condition of balanced nutrient limited growth, rather than starvation. N thus appears to be the proximal (i.e. direct physiological) limiting nutrient in the oligotrophic sub‐tropical North Atlantic. In addition, some major picophytoplankton groups, as well as overall autotrophic community biomass, appears to be co‐limited by N and P.",TRUE,noun phrase
R172,Oceanography,R160733,Environmental controls on the seasonal carbon dioxide fluxes in the northeastern Indian Ocean,S641208,R160734,Region of data collection,L438882,Northeastern Indian Ocean,"Total carbon dioxide (TCO 2) and computations of partial pressure of carbon dioxide (pCO 2) had been examined in Northerneastern region of Indian Ocean. It exhibit seasonal and spatial variability. North-south gradients in the pCO 2 levels were closely related to gradients in salinity caused by fresh water discharge received from rivers. Eddies observed in this region helped to elevate the nutrients availability and the biological controls by increasing the productivity. These phenomena elevated the carbon dioxide draw down during the fair seasons. Seasonal fluxes estimated from local wind speed and air-sea carbon dioxide difference indicate that during southwest monsoon, the northeastern Indian Ocean acts as a strong sink of carbon dioxide (-20.04 mmol m –2 d -1 ). Also during fall intermonsoon the area acts as a weak sink of carbon dioxide (-4.69 mmol m –2 d -1 ). During winter monsoon, this region behaves as a weak carbon dioxide source with an average sea to air flux of 4.77 mmol m -2 d -1 . In the northern region, salinity levels in the surface level are high during winter compared to the other two seasons. Northeastern Indian Ocean shows significant intraseasonal variability in carbon dioxide fluxes that are mediated by eddies which provide carbon dioxide and nutrients from the subsurface waters to the mixed layer.",TRUE,noun phrase
R172,Oceanography,R160712,Sea–air CO2 fluxes in the Indian Ocean between 1990 and 2009,S640986,R160714,Method,L438694,Ocean biogeochemical models,"Abstract. The Indian Ocean (44° S–30° N) plays an important role in the global carbon cycle, yet it remains one of the most poorly sampled ocean regions. Several approaches have been used to estimate net sea–air CO2 fluxes in this region: interpolated observations, ocean biogeochemical models, atmospheric and ocean inversions. As part of the RECCAP (REgional Carbon Cycle Assessment and Processes) project, we combine these different approaches to quantify and assess the magnitude and variability in Indian Ocean sea–air CO2 fluxes between 1990 and 2009. Using all of the models and inversions, the median annual mean sea–air CO2 uptake of −0.37 ± 0.06 PgC yr−1 is consistent with the −0.24 ± 0.12 PgC yr−1 calculated from observations. The fluxes from the southern Indian Ocean (18–44° S; −0.43 ± 0.07 PgC yr−1) are similar in magnitude to the annual uptake for the entire Indian Ocean. All models capture the observed pattern of fluxes in the Indian Ocean with the following exceptions: underestimation of upwelling fluxes in the northwestern region (off Oman and Somalia), overestimation in the northeastern region (Bay of Bengal) and underestimation of the CO2 sink in the subtropical convergence zone. These differences were mainly driven by lack of atmospheric CO2 data in atmospheric inversions, and poor simulation of monsoonal currents and freshwater discharge in ocean biogeochemical models. Overall, the models and inversions do capture the phase of the observed seasonality for the entire Indian Ocean but overestimate the magnitude. The predicted sea–air CO2 fluxes by ocean biogeochemical models (OBGMs) respond to seasonal variability with strong phase lags with reference to climatological CO2 flux, whereas the atmospheric inversions predicted an order of magnitude higher seasonal flux than OBGMs. The simulated interannual variability by the OBGMs is weaker than that found by atmospheric inversions. Prediction of such weak interannual variability in CO2 fluxes by atmospheric inversions was mainly caused by a lack of atmospheric data in the Indian Ocean. The OBGM models suggest a small strengthening of the sink over the period 1990–2009 of −0.01 PgC decade−1. This is inconsistent with the observations in the southwestern Indian Ocean that shows the growth rate of oceanic pCO2 was faster than the observed atmospheric CO2 growth, a finding attributed to the trend of the Southern Annular Mode (SAM) during the 1990s.",TRUE,noun phrase
R172,Oceanography,R160712,Sea–air CO2 fluxes in the Indian Ocean between 1990 and 2009,S640987,R160714,Method,L438695,Ocean inversions,"Abstract. The Indian Ocean (44° S–30° N) plays an important role in the global carbon cycle, yet it remains one of the most poorly sampled ocean regions. Several approaches have been used to estimate net sea–air CO2 fluxes in this region: interpolated observations, ocean biogeochemical models, atmospheric and ocean inversions. As part of the RECCAP (REgional Carbon Cycle Assessment and Processes) project, we combine these different approaches to quantify and assess the magnitude and variability in Indian Ocean sea–air CO2 fluxes between 1990 and 2009. Using all of the models and inversions, the median annual mean sea–air CO2 uptake of −0.37 ± 0.06 PgC yr−1 is consistent with the −0.24 ± 0.12 PgC yr−1 calculated from observations. The fluxes from the southern Indian Ocean (18–44° S; −0.43 ± 0.07 PgC yr−1) are similar in magnitude to the annual uptake for the entire Indian Ocean. All models capture the observed pattern of fluxes in the Indian Ocean with the following exceptions: underestimation of upwelling fluxes in the northwestern region (off Oman and Somalia), overestimation in the northeastern region (Bay of Bengal) and underestimation of the CO2 sink in the subtropical convergence zone. These differences were mainly driven by lack of atmospheric CO2 data in atmospheric inversions, and poor simulation of monsoonal currents and freshwater discharge in ocean biogeochemical models. Overall, the models and inversions do capture the phase of the observed seasonality for the entire Indian Ocean but overestimate the magnitude. The predicted sea–air CO2 fluxes by ocean biogeochemical models (OBGMs) respond to seasonal variability with strong phase lags with reference to climatological CO2 flux, whereas the atmospheric inversions predicted an order of magnitude higher seasonal flux than OBGMs. The simulated interannual variability by the OBGMs is weaker than that found by atmospheric inversions. Prediction of such weak interannual variability in CO2 fluxes by atmospheric inversions was mainly caused by a lack of atmospheric data in the Indian Ocean. The OBGM models suggest a small strengthening of the sink over the period 1990–2009 of −0.01 PgC decade−1. This is inconsistent with the observations in the southwestern Indian Ocean that shows the growth rate of oceanic pCO2 was faster than the observed atmospheric CO2 growth, a finding attributed to the trend of the Southern Annular Mode (SAM) during the 1990s.",TRUE,noun phrase
R172,Oceanography,R147149,Evidence for efficient regenerated production and dinitrogen fixation in nitrogen-deficient waters of the South Pacific Ocean: impact on new and export production estimates,S589553,R147151,Region of data collection,L410362,South Pacific Ocean,"Abstract. One of the major objectives of the BIOSOPE cruise, carried out on the R/V Atalante from October-November 2004 in the South Pacific Ocean, was to establish productivity rates along a zonal section traversing the oligotrophic South Pacific Gyre (SPG). These results were then compared to measurements obtained from the nutrient – replete waters in the Chilean upwelling and around the Marquesas Islands. A dual 13C/15N isotope technique was used to estimate the carbon fixation rates, inorganic nitrogen uptake (including dinitrogen fixation), ammonium (NH4) and nitrate (NO3) regeneration and release of dissolved organic nitrogen (DON). The SPG exhibited the lowest primary production rates (0.15 g C m−2 d−1), while rates were 7 to 20 times higher around the Marquesas Islands and in the Chilean upwelling, respectively. In the very low productive area of the SPG, most of the primary production was sustained by active regeneration processes that fuelled up to 95% of the biological nitrogen demand. Nitrification was active in the surface layer and often balanced the biological demand for nitrate, especially in the SPG. The percentage of nitrogen released as DON represented a large proportion of the inorganic nitrogen uptake (13–15% in average), reaching 26–41% in the SPG, where DON production played a major role in nitrogen cycling. Dinitrogen fixation was detectable over the whole study area; even in the Chilean upwelling, where rates as high as 3 nmoles l−1 d−1 were measured. In these nutrient-replete waters new production was very high (0.69±0.49 g C m−2 d−1) and essentially sustained by nitrate levels. In the SPG, dinitrogen fixation, although occurring at much lower daily rates (≈1–2 nmoles l−1 d−1), sustained up to 100% of the new production (0.008±0.007 g C m−2 d−1) which was two orders of magnitude lower than that measured in the upwelling. The annual N2-fixation of the South Pacific is estimated to 21×1012g, of which 1.34×1012g is for the SPG only. Even if our ""snapshot"" estimates of N2-fixation rates were lower than that expected from a recent ocean circulation model, these data confirm that the N-deficiency South Pacific Ocean would provide an ideal ecological niche for the proliferation of N2-fixers which are not yet identified.",TRUE,noun phrase
R172,Oceanography,R147175,Niche partitioning by photosynthetic plankton as a driver of CO2-fixation across the oligotrophic South Pacific Subtropical Ocean,S589845,R147177,Region of data collection,L410608,South Pacific Ocean,"Abstract Oligotrophic ocean gyre ecosystems may be expanding due to rising global temperatures [1–5]. Models predicting carbon flow through these changing ecosystems require accurate descriptions of phytoplankton communities and their metabolic activities [6]. We therefore measured distributions and activities of cyanobacteria and small photosynthetic eukaryotes throughout the euphotic zone on a zonal transect through the South Pacific Ocean, focusing on the ultraoligotrophic waters of the South Pacific Gyre (SPG). Bulk rates of CO 2 fixation were low (0.1 µmol C l −1 d −1 ) but pervasive throughout both the surface mixed-layer (upper 150 m), as well as the deep chlorophyll a maximum of the core SPG. Chloroplast 16S rRNA metabarcoding, and single-cell 13 CO 2 uptake experiments demonstrated niche differentiation among the small eukaryotes and picocyanobacteria. Prochlorococcus abundances, activity, and growth were more closely associated with the rims of the gyre. Small, fast-growing, photosynthetic eukaryotes, likely related to the Pelagophyceae, characterized the deep chlorophyll a maximum. In contrast, a slower growing population of photosynthetic eukaryotes, likely comprised of Dictyochophyceae and Chrysophyceae, dominated the mixed layer that contributed 65–88% of the areal CO 2 fixation within the core SPG. Small photosynthetic eukaryotes may thus play an underappreciated role in CO 2 fixation in the surface mixed-layer waters of ultraoligotrophic ecosystems.",TRUE,noun phrase
R172,Oceanography,R155508,N2 Fixation and New Insights Into Nitrification From the Ice-Edge to the Equator in the South Pacific Ocean,S622809,R155510,Region of data collection,L428748,South Pacific Ocean,"Nitrogen (N) is an essential element for life and controls the magnitude of primary productivity in the ocean. In order to describe the microorganisms that catalyze N transformations in surface waters in the South Pacific Ocean, we collected high-resolution biotic and abiotic data along a 7000 km transect, from the Antarctic ice edge to the equator. The transect, conducted between late Austral autumn and early winter 2016, covered major oceanographic features such as the polar front (PF), the subtropical front (STF) and the Pacific equatorial divergence (PED). We measured N2 fixation and nitrification rates and quantified the relative abundances of diazotrophs and nitrifiers in a region where few to no rate measurements are available. Even though N2 fixation rates are usually below detection limits in cold environments, we were able to measure this N pathway at 7/10 stations in the cold and nutrient rich waters near the PF. This result highlights that N2 fixation rates continue to be measured outside the well-known subtropical regions. The majority of the mid to high N2 fixation rates (>∼20 nmol L–1 d–1), however, still occurred in the expected tropical and subtropical regions. High throughput sequence analyses of the dinitrogenase reductase gene (nifH) revealed that the nifH Cluster I dominated the diazotroph diversity throughout the transect. nifH gene richness did not show a latitudinal trend, nor was it significantly correlated with N2 fixation rates. Nitrification rates above the mixed layer in the Southern Ocean ranged between 56 and 1440 nmol L–1 d–1. Our data showed a decoupling between carbon and N assimilation (NO3– and NH4+ assimilation rates) in winter in the South Pacific Ocean. Phytoplankton community structure showed clear changes across the PF, the STF and the PED, defining clear biomes. Overall, these findings provide a better understanding of the ecosystem functionality in the South Pacific Ocean across key oceanographic biomes.",TRUE,noun phrase
R172,Oceanography,R155532,Linkage Between Dinitrogen Fixation and Primary Production in the Oligotrophic South Pacific Ocean,S623061,R155534,Region of data collection,L428960,South Pacific Ocean,"The import of nitrogen via dinitrogen fixation supports primary production, particularly in the oligotrophic ocean; however, to what extent dinitrogen fixation influences primary production, and the role of specific types of diazotrophs, remains poorly understood. We examined the relationship between primary production and dinitrogen fixation together with diazotroph community structure in the oligotrophic western and eastern South Pacific Ocean and found that dinitrogen fixation was higher than nitrate‐based new production. Primary production increased in the middle of the western subtropical region, where the cyanobacterium Trichodesmium dominated the diazotroph community and accounted for up to 7.8% of the phytoplankton community, and the abundance of other phytoplankton taxa (especially Prochlorococcus) was high. These results suggest that regenerated production was enhanced by nitrogen released from Trichodesmium and that carbon fixation by Trichodesmium also contributed significantly to total primary production. Although volumetric dinitrogen fixation was comparable between the western and eastern subtropical regions, primary production in the western waters was more than twice as high as that in the eastern waters, where UCYN‐A1 (photoheterotroph) and heterotrophic bacteria were the dominant diazotrophs. This suggests that dinitrogen fixed by these diazotrophs contributed relatively little to primary production of the wider community, and there was limited carbon fixation by these diazotrophs. Hence, we document how the community composition of diazotrophs in the field can be reflected in how much nitrogen becomes available to the wider phytoplankton community and in how much autotrophic diazotrophs themselves fix carbon and thereby influences the magnitude of local primary production.",TRUE,noun phrase
R172,Oceanography,R155520,Measurements of nitrogen fixation in the oligotrophic North Pacific Subtropical Gyre using a free-drifting submersible incubation device,S622920,R155522,Method,L428844,Submersible incubation device,"One challenge in field-based marine microbial ecology is to achieve sufficient spatial resolution to obtain representative information about microbial distributions and biogeochemical processes. The challenges are exacerbated when conducting rate measurements of biological processes due to potential perturbations during sampling and incubation. Here we present the first application of a robotic microlaboratory, the 4 L-submersible incubation device (SID), for conducting in situ measurements of the rates of biological nitrogen (N2) fixation (BNF). The free-drifting autonomous instrument obtains samples from the water column that are incubated in situ after the addition of 15N2 tracer. After each of up to four consecutive incubation experiments, the 4-L sample is filtered and chemically preserved. Measured BNF rates from two deployments of the SID in the oligotrophic North Pacific ranged from 0.8 to 2.8 nmol N L?1 day?1, values comparable with simultaneous rate measurements obtained using traditional conductivity–temperature–depth (CTD)–rosette sampling followed by on-deck or in situ incubation. Future deployments of the SID will help to better resolve spatial variability of oceanic BNF, particularly in areas where recovery of seawater samples by CTD compromises their integrity, e.g. anoxic habitats.",TRUE,noun phrase
R172,Oceanography,R155514,Biogeographic drivers of diazotrophs in the western Pacific Ocean,S622865,R155516,Region of data collection,L428796,Western Pacific Ocean,"The global budget of marine nitrogen (N) is not balanced, with N removal largely exceeding N fixation. One of the major causes of this imbalance is our inadequate understanding of the diversity and distribution of marine N2 fixers (diazotrophs) as well as their contribution to N2 fixation. Here, we performed a large‐scale cross‐system study spanning the South China Sea, Luzon Strait, Philippine Sea, and western tropical Pacific Ocean to compare the biogeography of seven major diazotrophic groups and N2 fixation rates in these ecosystems. Distinct spatial niche differentiation was observed. Trichodesmium was dominant in the South China Sea and western equatorial Pacific, whereas the unicellular cyanobacterium UCYN‐B dominated in the Philippine Sea. Furthermore, contrasting diel patterns of Trichodesmium nifH genes and UCYN‐B nifH gene transcript activity were observed. The heterotrophic diazotroph Gamma A phylotype was widespread throughout the western Pacific Ocean and occupied an ecological niche that overlapped with that of UCYN‐B. Moreover, Gamma A (or other possible unknown/undetected diazotrophs) rather than Trichodesmium and UCYN‐B may have been responsible for the high N2 fixation rates in some samples. Regional biogeochemistry analyses revealed cross‐system variations in N2‐fixing community composition and activity constrained by sea surface temperature, aerosol optical thickness, current velocity, mixed‐layer depth, and chlorophyll a concentration. These factors except for temperature essentially control/reflected iron supply/bioavailability and thus drive diazotroph biogeography. This study highlights biogeographical controls on marine N2 fixers and increases our understanding of global diazotroph biogeography.",TRUE,noun phrase
R272,"Operations Research, Systems Engineering and Industrial Engineering",R139463,"Energy management and optimization: case study of a textile plant in Istanbul, Turkey",S556466,R139465,Type of industry,L391249,Textile industry,"Purpose This paper aims to present the results of energy management and optimization studies in one Turkish textile factory. In a case study of a print and dye factory in Istanbul, the authors identified energy-sensitive processes and proposed energy management applications. Design/methodology/approach Appropriate energy management methods have been implemented in the factory, and the results were examined in terms of energy efficiency and cost reduction. Findings By applying the methods for fuel distribution optimization, the authors demonstrated that energy costs could be decreased by approximately. Originality/value Energy management is a vital issue for industries particularly in developing countries such as Turkey. Turkey is an energy poor country and imports more than half of its energy to satisfy its increasing domestic demands. An important share of these demands stems from the presence of a strong textile industry that operates throughout the country.",TRUE,noun phrase
R272,"Operations Research, Systems Engineering and Industrial Engineering",R139487,Scheduling with multi-attribute set-up times on unrelated parallel machines,S556517,R139489,Type of industry,L391300,Textile industry,"This paper studies a problem in the knitting process of the textile industry. In such a production system, each job has a number of attributes and each attribute has one or more levels. Because there is at least one different attribute level between two adjacent jobs, it is necessary to make a set-up adjustment whenever there is a switch to a different job. The problem can be formulated as a scheduling problem with multi-attribute set-up times on unrelated parallel machines. The objective of the problem is to assign jobs to different machines to minimise the makespan. A constructive heuristic is developed to obtain a qualified solution. To improve the solution further, a meta-heuristic that uses a genetic algorithm with a new crossover operator and three local searches are proposed. The computational experiments show that the proposed constructive heuristic outperforms two existed heuristics and the current scheduling method used by the case textile plant.",TRUE,noun phrase
R137635,"Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas",R148626,Dynamics of localized dissipative structures in a generalized Lugiato–Lefever model with negative quartic group-velocity dispersion,S595828,R148629,Mathematical model,L414035,Generalized Lugiato-Lefever equation,"We study localized dissipative structures in a generalized Lugiato-Lefever equation, exhibiting normal group-velocity dispersion and anomalous quartic group-velocity dispersion. In the conservative system, this parameter-regime has proven to enable generalized dispersion Kerr solitons. Here, we demonstrate via numerical simulations that our dissipative system also exhibits equivalent localized states, including special molecule-like two-color bound states recently reported. We investigate their generation, characterize the observed steady-state solution, and analyze their propagation dynamics under perturbations.",TRUE,noun phrase
R137635,"Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas",R148626,Dynamics of localized dissipative structures in a generalized Lugiato–Lefever model with negative quartic group-velocity dispersion,S595825,R148629,keywords,L414032,Localized dissipative structures,"We study localized dissipative structures in a generalized Lugiato-Lefever equation, exhibiting normal group-velocity dispersion and anomalous quartic group-velocity dispersion. In the conservative system, this parameter-regime has proven to enable generalized dispersion Kerr solitons. Here, we demonstrate via numerical simulations that our dissipative system also exhibits equivalent localized states, including special molecule-like two-color bound states recently reported. We investigate their generation, characterize the observed steady-state solution, and analyze their propagation dynamics under perturbations.",TRUE,noun phrase
R137635,"Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas",R148626,Dynamics of localized dissipative structures in a generalized Lugiato–Lefever model with negative quartic group-velocity dispersion,S595827,R148629,keywords,L414034,Lugiato-Lefever equation,"We study localized dissipative structures in a generalized Lugiato-Lefever equation, exhibiting normal group-velocity dispersion and anomalous quartic group-velocity dispersion. In the conservative system, this parameter-regime has proven to enable generalized dispersion Kerr solitons. Here, we demonstrate via numerical simulations that our dissipative system also exhibits equivalent localized states, including special molecule-like two-color bound states recently reported. We investigate their generation, characterize the observed steady-state solution, and analyze their propagation dynamics under perturbations.",TRUE,noun phrase
R137635,"Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas",R149004,Multi-frequency radiation of dissipative solitons in optical fiber cavities,S597031,R149005,keywords,L415022,Microring resonators,Abstract New resonant emission of dispersive waves by oscillating solitary structures in optical fiber cavities is considered analytically and numerically. The pulse propagation is described in the framework of the Lugiato-Lefever equation when a Hopf-bifurcation can result in the formation of oscillating dissipative solitons. The resonance condition for the radiation of the dissipative oscillating solitons is derived and it is demonstrated that the predicted resonances match the spectral lines observed in numerical simulations perfectly. The complex recoil of the radiation on the soliton dynamics is discussed. The reported effect can have importance for the generation of frequency combs in nonlinear microring resonators.,TRUE,noun phrase
R137635,"Optics, Quantum Optics and Physics of Atoms, Molecules and Plasmas",R149004,Multi-frequency radiation of dissipative solitons in optical fiber cavities,S597029,R149005,keywords,L415020,Oscillating dissipative solitons,Abstract New resonant emission of dispersive waves by oscillating solitary structures in optical fiber cavities is considered analytically and numerically. The pulse propagation is described in the framework of the Lugiato-Lefever equation when a Hopf-bifurcation can result in the formation of oscillating dissipative solitons. The resonance condition for the radiation of the dissipative oscillating solitons is derived and it is demonstrated that the predicted resonances match the spectral lines observed in numerical simulations perfectly. The complex recoil of the radiation on the soliton dynamics is discussed. The reported effect can have importance for the generation of frequency combs in nonlinear microring resonators.,TRUE,noun phrase
R129,Organic Chemistry,R137068,Visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acridinium photocatalysts at room temperature,S541544,R137071,Photoredox catalyst,L381361,Acridinium photocatalyst,"Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.",TRUE,noun phrase
R129,Organic Chemistry,R138423,Oxidative Depolymerization of Lignin in Ionic Liquids,S549351,R138425,Product,L386492,Aromatic aldehydes,"Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO(3))(2) in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF(3)SO(3)] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 degrees C, 84x10(5) Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.",TRUE,noun phrase
R129,Organic Chemistry,R137068,Visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acridinium photocatalysts at room temperature,S541553,R137071,substrate,L381369,Aryl carboxylic acid,"Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.",TRUE,noun phrase
R129,Organic Chemistry,R137073,"Selective, Nickel-Catalyzed Hydrogenolysis of Aryl Ethers",S541563,R137075,substrate,L381376,Aryl ether,"A catalyst that cleaves aryl-oxygen bonds but not carbon-carbon bonds may help improve lignin processing. Selective hydrogenolysis of the aromatic carbon-oxygen (C-O) bonds in aryl ethers is an unsolved synthetic problem important for the generation of fuels and chemical feedstocks from biomass and for the liquefaction of coal. Currently, the hydrogenolysis of aromatic C-O bonds requires heterogeneous catalysts that operate at high temperature and pressure and lead to a mixture of products from competing hydrogenolysis of aliphatic C-O bonds and hydrogenation of the arene. Here, we report hydrogenolyses of aromatic C-O bonds in alkyl aryl and diaryl ethers that form exclusively arenes and alcohols. This process is catalyzed by a soluble nickel carbene complex under just 1 bar of hydrogen at temperatures of 80 to 120°C; the relative reactivity of ether substrates scale as Ar-OAr>>Ar-OMe>ArCH2-OMe (Ar, Aryl; Me, Methyl). Hydrogenolysis of lignin model compounds highlights the potential of this approach for the conversion of refractory aryl ether biopolymers to hydrocarbons.",TRUE,noun phrase
R129,Organic Chemistry,R137062,Cross-Coupling Reactions of Aryl Pivalates with Boronic Acids,S541493,R137064,substrate,L381320,Aryl pivalate,"The first cross-coupling of acylated phenol derivatives has been achieved. In the presence of an air-stable Ni(II) complex, readily accessible aryl pivalates participate in the Suzuki-Miyaura coupling with arylboronic acids. The process is tolerant of considerable variation in each of the cross-coupling components. In addition, a one-pot acylation/cross-coupling sequence has been developed. The potential to utilize an aryl pivalate as a directing group has also been demonstrated, along with the ability to sequentially cross-couple an aryl bromide followed by an aryl pivalate, using palladium and nickel catalysis, respectively.",TRUE,noun phrase
R129,Organic Chemistry,R138423,Oxidative Depolymerization of Lignin in Ionic Liquids,S549348,R138425,substrate,L386489,Beech lignin,"Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO(3))(2) in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF(3)SO(3)] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 degrees C, 84x10(5) Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.",TRUE,noun phrase
R129,Organic Chemistry,R137062,Cross-Coupling Reactions of Aryl Pivalates with Boronic Acids,S541494,R137064,substrate,L381321,Boronic acid,"The first cross-coupling of acylated phenol derivatives has been achieved. In the presence of an air-stable Ni(II) complex, readily accessible aryl pivalates participate in the Suzuki-Miyaura coupling with arylboronic acids. The process is tolerant of considerable variation in each of the cross-coupling components. In addition, a one-pot acylation/cross-coupling sequence has been developed. The potential to utilize an aryl pivalate as a directing group has also been demonstrated, along with the ability to sequentially cross-couple an aryl bromide followed by an aryl pivalate, using palladium and nickel catalysis, respectively.",TRUE,noun phrase
R129,Organic Chemistry,R137059,Nickel-Catalyzed Cross-Coupling of Aryl Methyl Ethers with Aryl Boronic Esters,S541470,R137061,substrate,L381303,Boronic ester,The Ni(0)-catalyzed cross-coupling of alkenyl methyl ethers with boronic esters is described. Several types of alkenyl methyl ethers can be coupled with a wide range of boronic esters to give the stilbene derivatives.,TRUE,noun phrase
R129,Organic Chemistry,R137068,Visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acridinium photocatalysts at room temperature,S541552,R137071,substrate,L381368,Diaryl ethers,"Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.",TRUE,noun phrase
R129,Organic Chemistry,R137068,Visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by acridinium photocatalysts at room temperature,S541540,R137071,catalyst,L381358,Lewis acid,"Abstract Cleavage of C–O bonds in lignin can afford the renewable aryl sources for fine chemicals. However, the high bond energies of these C–O bonds, especially the 4-O-5-type diaryl ether C–O bonds (~314 kJ/mol) make the cleavage very challenging. Here, we report visible-light photoredox-catalyzed C–O bond cleavage of diaryl ethers by an acidolysis with an aryl carboxylic acid and a following one-pot hydrolysis. Two molecules of phenols are obtained from one molecule of diaryl ether at room temperature. The aryl carboxylic acid used for the acidolysis can be recovered. The key to success of the acidolysis is merging visible-light photoredox catalysis using an acridinium photocatalyst and Lewis acid catalysis using Cu(TMHD) 2 . Preliminary mechanistic studies indicate that the catalytic cycle occurs via a rare selective electrophilic attack of the generated aryl carboxylic radical on the electron-rich aryl ring of the diphenyl ether. This transformation is applied to a gram-scale reaction and the model of 4-O-5 lignin linkages.",TRUE,noun phrase
R129,Organic Chemistry,R110941,Microwave-Assisted Cobinamide Synthesis,S505480,R110943,Special conditions,L364970,Microwave reactor,"We present a new method for the preparation of cobinamide (CN)2Cbi, a vitamin B12 precursor, that should allow its broader utility. Treatment of vitamin B12 with only NaCN and heating in a microwave reactor affords (CN)2Cbi as the sole product. The purification procedure was greatly simplified, allowing for easy isolation of the product in 94% yield. The use of microwave heating proved beneficial also for (CN)2Cbi(c-lactone) synthesis. Treatment of (CN)2Cbi with triethanolamine led to (CN)2Cbi(c-lactam).",TRUE,noun phrase
R129,Organic Chemistry,R138452,Light-Driven Depolymerization of Native Lignin Enabled by Proton-Coupled Electron Transfer,S549521,R138457,substrate,L386625,Native lignin,"Here, we report a catalytic, light-driven method for the redox-neutral depolymerization of native lignin biomass at ambient temperature. This transformation proceeds via a proton-coupled electron-transfer (PCET) activation of an alcohol O–H bond to generate a key alkoxy radical intermediate, which then facilitates the β-scission of a vicinal C–C bond. Notably, this single-step depolymerization is driven solely by visible-light irradiation, requires no stoichiometric chemical reagents, and produces no stoichiometric waste. This method exhibits good efficiency and excellent selectivity for the activation and fragmentation of the β-O-4 linkage in the polymer backbone, even in the presence of numerous other PCET-active functional groups. The feasibility of this protocol in enabling the cleavage of the β-1 linkage in model lignin dimers was also demonstrated. These results provide further evidence that visible-light photocatalysis can serve as a viable method for the direct conversion of lignin biomass into va...",TRUE,noun phrase
R129,Organic Chemistry,R138423,Oxidative Depolymerization of Lignin in Ionic Liquids,S549350,R138425,Product,L386491,Unsaturated propylaromatics,"Beech lignin was oxidatively cleaved in ionic liquids to give phenols, unsaturated propylaromatics, and aromatic aldehydes. A multiparallel batch reactor system was used to screen different ionic liquids and metal catalysts. Mn(NO(3))(2) in 1-ethyl-3-methylimidazolium trifluoromethanesulfonate [EMIM][CF(3)SO(3)] proved to be the most effective reaction system. A larger scale batch reaction with this system in a 300 mL autoclave (11 g lignin starting material) resulted in a maximum conversion of 66.3 % (24 h at 100 degrees C, 84x10(5) Pa air). By adjusting the reaction conditions and catalyst loading, the selectivity of the process could be shifted from syringaldehyde as the predominant product to 2,6-dimethoxy-1,4-benzoquinone (DMBQ). Surprisingly, the latter could be isolated as a pure substance in 11.5 wt % overall yield by a simple extraction/crystallization process.",TRUE,noun phrase
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499899,R109550,Data,R109558,C-reactive protein (CRP) levels,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,noun phrase
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499896,R109550,Material,R109555,Group A (control),"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,noun phrase
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499897,R109550,Material,R109556,"groups A, B and C","ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,noun phrase
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499892,R109550,Material,R109551,high fat fed non-diabetic rats,"ABSTRACTOBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats.METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27, adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week.RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively.Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant.CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective.KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,noun phrase
R130,Physical Chemistry,R144081,A soluble cryogenic thermometer with high sensitivity based on excited-state configuration transformations,S576714,R144083,Readout,R144085,Emission intensity,"Cryogenic temperature detection plays an irreplaceable role in exploring nature. Developing high sensitivity, accurate, observable and convenient measurements of cryogenic temperature is not only a challenge but also an opportunity for the thermometer field. The small molecule 9-(9,9-dimethyl-9H-fluoren-3yl)-14-phenyl-9,14-dihydrodibenzo[a,c]phenazine (FIPAC) in 2-methyl-tetrahydrofuran (MeTHF) solution is utilized for the detection of cryogenic temperature with a wide range from 138 K to 343 K. This system possesses significantly high sensitivity at low temperature, which reaches as high as 19.4% K(-1) at 138 K. The temperature-dependent ratio of the dual emission intensity can be fitted as a single-exponential curve as a function of temperature. This single-exponential curve can be explained by the mechanism that the dual emission feature of FIPAC results from the excited-state configuration transformations upon heating or cooling, which is very different from the previously reported mechanisms. Here, our work gives an overall interpretation for this mechanism. Therefore, application of FIPAC as a cryogenic thermometer is experimentally and theoretically feasible.",TRUE,noun phrase
R130,Physical Chemistry,R135710,Continuous Symmetry Breaking Induced by Ion Pairing Effect in Heptamethine Cyanine Dyes: Beyond the Cyanine Limit,S536901,R135714,Class of compound,L378453,Heptamethine cyanine,"The association of heptamethine cyanine cation 1(+) with various counterions A (A = Br(-), I(-), PF(6)(-), SbF(6)(-), B(C(6)F(5))(4)(-), TRISPHAT) was realized. The six different ion pairs have been characterized by X-ray diffraction, and their absorption properties were studied in polar (DCM) and apolar (toluene) solvents. A small, hard anion (Br(-)) is able to strongly polarize the polymethine chain, resulting in the stabilization of an asymmetric dipolar-like structure in the crystal and in nondissociating solvents. On the contrary, in more polar solvents or when it is associated with a bulky soft anion (TRISPHAT or B(C(6)F(5))(4)(-)), the same cyanine dye adopts preferentially the ideal polymethine state. The solid-state and solution absorption properties of heptamethine dyes are therefore strongly correlated to the nature of the counterion.",TRUE,noun phrase
R361,Place and Environment,R110280,From “Library as Place” to “Library as Platform”: Redesigning the 21st Century Academic Library,S502644,R110282,Has method,R110283,case studies,Originality/value This chapter adds to the body of case studies examining what the library of the future could look like in practice as well as theory.,TRUE,noun phrase
R361,Place and Environment,R110289,The Third Place: The Library as Collaborative and Community Space in a Time of Fiscal Restraint,S502676,R110291,Material,R110292,college library,"In a period of fiscal constraint, when assumptions about the library as place are being challenged, administrators question the contribution of every expense to student success. Libraries have been successful in migrating resources and services to a digital environment accessible beyond the library. What is the role of the library as place when users do not need to visit the building to utilize library services and resources? We argue that the college library building's core role is as a space for collaborative learning and community interaction that cannot be jettisoned in the new normal.",TRUE,noun phrase
R138056,Planetary Sciences,R155432,"Identification, distribution and possible origins of sulfates in Capri Chasma (Mars), inferred from CRISM data",S622412,R155434,Study Area,R155227,Capri Chasma,"CRISM is a hyperspectral imager onboard the Mars Reconnaissance Orbiter (MRO; NASA, 2005) which has been acquiring data since November 2006 and has targeted hydrated minerals previously detected by OMEGA (Mars Express; ESA, 2003). The present study focuses on hydrated minerals detected with CRISM at high spatial resolution in the vicinity of Capri Chasma, a canyon of the Valles Marineris system. CRISM data were processed and coupled with MRO and other spacecraft data, in particular HiRiSE (High Resolution Science Experiment, MRO) images. Detections revealed sulfates in abundance in Capri, especially linked to the interior layered deposits (ILD) that lie in the central part of the chasma. Both monohydrated and polyhydrated sulfates are found at different elevations and are associated with different layers. Monohydrated sulfates are widely detected over the massive light-toned cliffs of the ILD, whereas polyhydrated sulfates seem to form a basal and a top layer associated with lower-albedo deposits in flatter areas. Hydrated silicates (phyllosilicates or opaline silica) have also been detected very locally on two mounds about a few hundred meters in diameter at the bottom of the ILD cliffs. We suggest some formation models of these minerals that are consistent with our observations.",TRUE,noun phrase
R138056,Planetary Sciences,R155438,Elorza Crater on Mars: identification of phyllosilicate-bearing minerals by MRO-CRISM,S622324,R155440,Study Area,R155436,Elorza Crater,"The Elorza crater is located in the Ophir Planum region of Mars with a 40-km diameter, centered near 55.25° W, 8.72° S. Since the Elorza crater has clay-rich deposits it was one of the important craters to understand the past aqueous alteration processes of planet Mars.",TRUE,noun phrase
R138056,Planetary Sciences,R147335,Goldschmidt crater and the Moon's north polar region: Results from the Moon Mineralogy Mapper (M3),S590728,R147337,Study Area,R147334,Goldschmidt crater,"[1] Soils within the impact crater Goldschmidt have been identified as spectrally distinct from the local highland material. High spatial and spectral resolution data from the Moon Mineralogy Mapper (M3) on the Chandrayaan-1 orbiter are used to examine the character of Goldschmidt crater in detail. Spectral parameters applied to a north polar mosaic of M3 data are used to discern large-scale compositional trends at the northern high latitudes, and spectra from three widely separated regions are compared to spectra from Goldschmidt. The results highlight the compositional diversity of the lunar nearside, in particular, where feldspathic soils with a low-Ca pyroxene component are pervasive, but exclusively feldspathic regions and small areas of basaltic composition are also observed. Additionally, we find that the relative strengths of the diagnostic OH/H2O absorption feature near 3000 nm are correlated with the mineralogy of the host material. On both global and local scales, the strongest hydrous absorptions occur on the more feldspathic surfaces. Thus, M3 data suggest that while the feldspathic soils within Goldschmidt crater are enhanced in OH/H2O compared to the relatively mafic nearside polar highlands, their hydration signatures are similar to those observed in the feldspathic highlands on the farside.",TRUE,noun phrase
R138056,Planetary Sciences,R147340,The distribution and purity of anorthosite across the Orientale basin: New perspectives from Moon Mineralogy Mapper data: CRYSTALLINE ANORTHOSITE ACROSS ORIENTALE,S590761,R147342,Study Area,R147339,Orientale basin,"The Orientale basin is a multiring impact structure on the western limb of the Moon that provides a clear view of the primary lunar crust exposed during basin formation. Previously, near‐infrared reflectance spectra suggested that Orientale's Inner Rook Ring (IRR) is very poor in mafic minerals and may represent anorthosite excavated from the Moon's upper crust. However, detailed assessment of the mineralogy of these anorthosites was prohibited because the available spectroscopic data sets did not identify the diagnostic plagioclase absorption feature near 1250 nm. Recently, however, this absorption has been identified in several spectroscopic data sets, including the Moon Mineralogy Mapper (M3), enabling the unique identification of a plagioclase‐dominated lithology at Orientale for the first time. Here we present the first in‐depth characterization of the Orientale anorthosites based on direct measurement of their plagioclase component. In addition, detailed geologic context of the exposures is discussed based on analysis of Lunar Reconnaissance Orbiter Narrow Angle Camera images for selected anorthosite identifications. The results confirm that anorthosite is overwhelmingly concentrated in the IRR. Comparison with nonlinear spectral mixing models suggests that the anorthosite is exceedingly pure, containing >95 vol % plagioclase in most areas and commonly ~99–100 vol %. These new data place important constraints on magma ocean crystallization scenarios, which must produce a zone of highly pure anorthosite spanning the entire lateral extent of the 430 km diameter IRR.",TRUE,noun phrase
R138056,Planetary Sciences,R138377,Discoveries on the lithology of lunar crater central peaks by SELENE Spectral Profiler,S549149,R138378,Study Area,L386338,Antoniadi crater,"The Spectral Profiler (SP) onboard the Japanese SELENE (KAGUYA) spacecraft is now providing global high spectral resolution visible‐near infrared continuous reflectance spectra of the Moon. The reflectance spectra of impact craters on the farside of the Moon reveal lithologies that were not previously identified. The achievements of SP so far include: the most definite detection of crystalline iron‐bearing plagioclase with its characteristic 1.3 μm absorption band on the Moon; a new interpretation of the lithology of Tsiolkovsky crater central peaks, previously classified as “olivine‐rich,” as mixtures of plagioclase and pyroxene; and the lower limit of Mg number of low‐Ca pyroxene found at Antoniadi crater central peak and peak ring which were estimated through direct comparison with laboratory spectra of natural and synthetic pyroxene samples.",TRUE,noun phrase
R138056,Planetary Sciences,R147331,Identification of Potential Mantle Rocks Around the Lunar Imbrium Basin,S590867,R147332,Supplementary information,R147359,Diviner Lunar Radiometer,"Basin‐forming impacts expose material from deep within the interior of the Moon. Given the number of lunar basins, one would expect to find samples of the lunar mantle among those returned by the Apollo or Luna missions or within the lunar meteorite collection. However, only a few candidate mantle samples have been identified. Some remotely detected locations have been postulated to contain mantle‐derived material, but none are mineralogically consistent upon study with multiple techniques. To locate potential remnants of the lunar mantle, we searched for early‐crystallizing minerals using data from the Moon Mineralogy Mapper (M3) and the Diviner Lunar Radiometer (Diviner). While the lunar crust is largely composed of plagioclase, the mantle should contain almost none. M3 spectra were used to identify massifs bearing mafic minerals and Diviner was used to constrain the relative abundance of plagioclase. Of the sites analyzed, only Mons Wolff was found to potentially contain mantle material.",TRUE,noun phrase
R138056,Planetary Sciences,R139661,Mineralogy of the MSL Curiosity landing site in Gale crater as observed by MRO/CRISM,S621158,R155216,Study Area,R139774,Gale crater,"Orbital data acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and High Resolution Imaging Science Experiment instruments on the Mars Reconnaissance Orbiter (MRO) provide a synoptic view of compositional stratigraphy on the floor of Gale crater surrounding the area where the Mars Science Laboratory (MSL) Curiosity landed. Fractured, light‐toned material exhibits a 2.2 µm absorption consistent with enrichment in hydroxylated silica. This material may be distal sediment from the Peace Vallis fan, with cement and fracture fill containing the silica. This unit is overlain by more basaltic material, which has 1 µm and 2 µm absorptions due to pyroxene that are typical of Martian basaltic materials. Both materials are partially obscured by aeolian dust and basaltic sand. Dunes to the southeast exhibit differences in mafic mineral signatures, with barchan dunes enhanced in olivine relative to pyroxene‐containing longitudinal dunes. This compositional difference may be related to aeolian grain sorting.",TRUE,noun phrase
R138056,Planetary Sciences,R155200,Martian minerals components at Gale crater detected by MRO CRISM hyperspectral images,S620860,R155202,Study Area,R139774,Gale crater,"Gale Crater on Mars has the layered structure of deposit covered by the Noachian/Hesperian boundary. Mineral identification and classification at this region can provide important constrains on environment and geological evolution for Mars. Although Curiosity rove has provided the in-situ mineralogical analysis in Gale, but it restricted in small areas. Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) with enhanced spectral resolution can provide more information in spatial and time scale. In this paper, CRISM near-infrared spectral data are used to identify mineral classes and groups at Martian Gale region. By using diagnostic absorptions features analysis in conjunction with spectral angle mapper (SAM), detailed mineral species are identified at Gale region, e.g., kaolinite, chlorites, smectite, jarosite, and northupite. The clay minerals' diversity in Gale Crater suggests the variation of aqueous alteration. The detection of northupite suggests that the Gale region has experienced the climate change from moist condition with mineral dissolution to dryer climate with water evaporation. The presence of ferric sulfate mineral jarosite formed through the oxidation of iron sulfides in acidic environments shows the experience of acidic sulfur-rich condition in Gale history.",TRUE,noun phrase
R138056,Planetary Sciences,R155206,Geological characteristics of hydrated minerals on Mars from MRO CRISM images,S620902,R155208,Study Area,R139718,Jezero crater,"Identification of Martian surface minerals can contribute to understand the Martian environmental change and geological evolution as well as explore the habitability of the Mars. The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) is covers the visible to near infrared wavelengths along with enhanced spectral resolution, which provides ability to map the mineralogy on Mars. In this paper, based on the spectrum matching, mineral composition and geological evolution of Martian Jezero and Holden crater are analyzed using the MRO CRISM. The hydrated minerals are detected in the studied areas, including the carbonate, hydrated silicate and hydrated sulfate. These minerals suggested that the Holden and Jezero craters have experienced long time water-rock interactions. Also, the diverse alteration minerals found in these regions indicate the aqueous activities in multiple distinct environments.",TRUE,noun phrase
R138056,Planetary Sciences,R138512,Raman spectroscopy for mineral identification and quantification for in situ planetary surface analysis: A point count method,S550068,R138514,Rock type,L387081,KREEP basalt,"Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.",TRUE,noun phrase
R138056,Planetary Sciences,R138520,Silica polymorphs in lunar granite: Implications for granite petrogenesis on the Moon,S550158,R138522,Techniques/ Analysis,L387164,Laser Raman,"Granitic lunar samples largely consist of granophyric intergrowths of silica and K-feldspar. The identification of the silica polymorph present in the granophyre can clarify the petrogenesis of the lunar granites. The presence of tridymite or cristobalite would indicate rapid crystallization at high temperature. Quartz would indicate crystallization at low temperature or perhaps intrusive, slow crystallization, allowing for the orderly transformation from high-temperature silica polymorphs (tridymite or cristobalite). We identify the silica polymorphs present in four granitic lunar samples from the Apollo 12 regolith using laser Raman spectroscopy. Typically, lunar silica occurs with a hackle fracture pattern. We did an initial density calculation on the hackle fracture pattern of quartz and determined that the volume of quartz and fracture space is consistent with a molar volume contraction from tridymite or cristobalite, both of which are less dense than quartz. Moreover, we analyzed the silica in the granitic fragments from Apollo 12 by electron-probe microanalysis and found it contains up to 0.7 wt% TiO2, consistent with initial formation as the high-temperature silica polymorphs, which have more open crystal structures that can more readily accommodate cations other than Si. The silica in Apollo 12 granitic samples crystallized rapidly as tridymite or cristobalite, consistent with extrusive volcanism. The silica then inverted to quartz at a later time, causing it to contract and fracture. A hackle fracture pattern is common in silica occurring in extrusive lunar lithologies (e.g., mare basalt). The extrusive nature of these granitic samples makes them excellent candidates to be similar to the rocks that compose positive relief silicic features such as the Gruithuisen Domes.",TRUE,noun phrase
R138056,Planetary Sciences,R139653,"Phyllosilicate Diversity and Past Aqueous Activity Revealed at Mawrth Vallis, Mars",S557866,R139655,Study Area,R139727,Mawrth Vallis,"Observations by the Mars Reconnaissance Orbiter/Compact Reconnaissance Imaging Spectrometer for Mars in the Mawrth Vallis region show several phyllosilicate species, indicating a wide range of past aqueous activity. Iron/magnesium (Fe/Mg)–smectite is observed in light-toned outcrops that probably formed via aqueous alteration of basalt of the ancient cratered terrain. This unit is overlain by rocks rich in hydrated silica, montmorillonite, and kaolinite that may have formed via subsequent leaching of Fe and Mg through extended aqueous events or a change in aqueous chemistry. A spectral feature attributed to an Fe2+ phase is present in many locations in the Mawrth Vallis region at the transition from Fe/Mg-smectite to aluminum/silicon (Al/Si)–rich units. Fe2+-bearing materials in terrestrial sediments are typically associated with microorganisms or changes in pH or cations and could be explained here by hydrothermal activity. The stratigraphy of Fe/Mg-smectite overlain by a ferrous phase, hydrated silica, and then Al-phyllosilicates implies a complex aqueous history.",TRUE,noun phrase
R138056,Planetary Sciences,R155421,"Compositional stratigraphy of clay-bearing layered deposits at Mawrth Vallis, Mars: STRATIGRAPHY OF CLAY-BEARING DEPOSITS ON MARS",S622380,R155422,Study Area,R139727,Mawrth Vallis,"Phyllosilicates have previously been detected in layered outcrops in and around the Martian outflow channel Mawrth Vallis. CRISM spectra of these outcrops exhibit features diagnostic of kaolinite, montmorillonite, and Fe/Mg‐rich smectites, along with crystalline ferric oxide minerals such as hematite. These minerals occur in distinct stratigraphic horizons, implying changing environmental conditions and/or a variable sediment source for these layered deposits. Similar stratigraphic sequences occur on both sides of the outflow channel and on its floor, with Al‐clay‐bearing layers typically overlying Fe/Mg‐clay‐bearing layers. This pattern, combined with layer geometries measured using topographic data from HiRISE and HRSC, suggests that the Al‐clay‐bearing horizons at Mawrth Vallis postdate the outflow channel and may represent a later sedimentary or altered pyroclastic deposit that drapes the topography.",TRUE,noun phrase
R138056,Planetary Sciences,R160482,In situ optical measurements of Chang'E‐3 landing site in Mare Imbrium: 1. Mineral abundances inferred from spectral reflectance,S640032,R160483,Supplementary information,R160518,Moon Mineralogy mapper,"The visible and near‐infrared imaging spectrometer on board the Yutu Rover of Chinese Chang'E‐3 mission measured the lunar surface reflectance at a close distance (~1 m) and collected four spectra at four different sites. These in situ lunar spectra have revealed less mature features than that measured remotely by spaceborne sensors such as the Moon Mineralogy Mapper instrument on board the Chandrayaan‐1 mission and the Spectral Profiler on board the Kaguya over the same region. Mineral composition analysis using a spectral lookup table populated with a radiative transfer mixing model has shown that the regolith at the landing site contains high abundance of olivine. The mineral abundance results are consistent with that inferred from the compound measurement made by the on board alpha‐particle X‐ray spectrometer.",TRUE,noun phrase
R138056,Planetary Sciences,R138512,Raman spectroscopy for mineral identification and quantification for in situ planetary surface analysis: A point count method,S550042,R138514,Techniques/ Analysis,L387056,Raman spectroscopy,"Quantification of mineral proportions in rocks and soils by Raman spectroscopy on a planetary surface is best done by taking many narrow-beam spectra from different locations on the rock or soil, with each spectrum yielding peaks from only one or two minerals. The proportion of each mineral in the rock or soil can then be determined from the fraction of the spectra that contain its peaks, in analogy with the standard petrographic technique of point counting. The method can also be used for nondestructive laboratory characterization of rock samples. Although Raman peaks for different minerals seldom overlap each other, it is impractical to obtain proportions of constituent minerals by Raman spectroscopy through analysis of peak intensities in a spectrum obtained by broad-beam sensing of a representative area of the target material. That is because the Raman signal strength produced by a mineral in a rock or soil is not related in a simple way through the Raman scattering cross section of that mineral to its proportion in the rock, and the signal-to-noise ratio of a Raman spectrum is poor when a sample is stimulated by a low-power laser beam of broad diameter. Results obtained by the Raman point-count method are demonstrated for a lunar thin section (14161,7062) and a rock fragment (15273,7039). Major minerals (plagioclase and pyroxene), minor minerals (cristobalite and K-feldspar), and accessory minerals (whitlockite, apatite, and baddeleyite) were easily identified. Identification of the rock types, KREEP basalt or melt rock, from the 100-location spectra was straightforward.",TRUE,noun phrase
R138056,Planetary Sciences,R138385,A new type of pyroclastic deposit on the Moon containing Fe-spinel and chromite,S549002,R138386,Study Area,L386208,Sinus Aestuum,"We present details of the identification of sites that show an absorption band at visible wavelengths and a strong 2 μm band using the SELENE Spectral Profiler. All the sites exhibiting the visible feature are found on the regional dark mantle deposit (DMD) at Sinus Aestuum. All the instances of the visible feature show a strong 2 μm band, suggestive of Fe‐ and Cr‐rich spinels, which are different from previously detected Mg‐rich spinel. Since no visible feature is observed in other DMDs, the DMD at Sinus Aestuum is unique on the Moon. The occurrence trend of the spinels at Sinus Aestuum is also different from that of the Mg‐rich spinels, which are associated with impact structures. This may suggest that the spinel at Sinus Aestuum is a different origin from that of the Mg‐rich spinel.",TRUE,noun phrase
R138056,Planetary Sciences,R155200,Martian minerals components at Gale crater detected by MRO CRISM hyperspectral images,S621143,R155202,Techniques,R108184,Spectral Angle Mapper (SAM),"Gale Crater on Mars has the layered structure of deposit covered by the Noachian/Hesperian boundary. Mineral identification and classification at this region can provide important constrains on environment and geological evolution for Mars. Although Curiosity rove has provided the in-situ mineralogical analysis in Gale, but it restricted in small areas. Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) with enhanced spectral resolution can provide more information in spatial and time scale. In this paper, CRISM near-infrared spectral data are used to identify mineral classes and groups at Martian Gale region. By using diagnostic absorptions features analysis in conjunction with spectral angle mapper (SAM), detailed mineral species are identified at Gale region, e.g., kaolinite, chlorites, smectite, jarosite, and northupite. The clay minerals' diversity in Gale Crater suggests the variation of aqueous alteration. The detection of northupite suggests that the Gale region has experienced the climate change from moist condition with mineral dissolution to dryer climate with water evaporation. The presence of ferric sulfate mineral jarosite formed through the oxidation of iron sulfides in acidic environments shows the experience of acidic sulfur-rich condition in Gale history.",TRUE,noun phrase
R138056,Planetary Sciences,R160482,In situ optical measurements of Chang'E‐3 landing site in Mare Imbrium: 1. Mineral abundances inferred from spectral reflectance,S640033,R160483,Supplementary information,R160524,spectral profiler,"The visible and near‐infrared imaging spectrometer on board the Yutu Rover of Chinese Chang'E‐3 mission measured the lunar surface reflectance at a close distance (~1 m) and collected four spectra at four different sites. These in situ lunar spectra have revealed less mature features than that measured remotely by spaceborne sensors such as the Moon Mineralogy Mapper instrument on board the Chandrayaan‐1 mission and the Spectral Profiler on board the Kaguya over the same region. Mineral composition analysis using a spectral lookup table populated with a radiative transfer mixing model has shown that the regolith at the landing site contains high abundance of olivine. The mineral abundance results are consistent with that inferred from the compound measurement made by the on board alpha‐particle X‐ray spectrometer.",TRUE,noun phrase
R138056,Planetary Sciences,R138377,Discoveries on the lithology of lunar crater central peaks by SELENE Spectral Profiler,S548851,R138378,Instrument,L386069,Spectral Profiler (SP),"The Spectral Profiler (SP) onboard the Japanese SELENE (KAGUYA) spacecraft is now providing global high spectral resolution visible‐near infrared continuous reflectance spectra of the Moon. The reflectance spectra of impact craters on the farside of the Moon reveal lithologies that were not previously identified. The achievements of SP so far include: the most definite detection of crystalline iron‐bearing plagioclase with its characteristic 1.3 μm absorption band on the Moon; a new interpretation of the lithology of Tsiolkovsky crater central peaks, previously classified as “olivine‐rich,” as mixtures of plagioclase and pyroxene; and the lower limit of Mg number of low‐Ca pyroxene found at Antoniadi crater central peak and peak ring which were estimated through direct comparison with laboratory spectra of natural and synthetic pyroxene samples.",TRUE,noun phrase
R138056,Planetary Sciences,R138377,Discoveries on the lithology of lunar crater central peaks by SELENE Spectral Profiler,S548857,R138378,Study Area,L386075,Tsiolkovsky crater,"The Spectral Profiler (SP) onboard the Japanese SELENE (KAGUYA) spacecraft is now providing global high spectral resolution visible‐near infrared continuous reflectance spectra of the Moon. The reflectance spectra of impact craters on the farside of the Moon reveal lithologies that were not previously identified. The achievements of SP so far include: the most definite detection of crystalline iron‐bearing plagioclase with its characteristic 1.3 μm absorption band on the Moon; a new interpretation of the lithology of Tsiolkovsky crater central peaks, previously classified as “olivine‐rich,” as mixtures of plagioclase and pyroxene; and the lower limit of Mg number of low‐Ca pyroxene found at Antoniadi crater central peak and peak ring which were estimated through direct comparison with laboratory spectra of natural and synthetic pyroxene samples.",TRUE,noun phrase
R185,Plasma and Beam Physics,R145219,Stark broadening and atomic data for Ar XVI,S581335,R145241,Ionization_state,L406259,Ar XVI,"Stark broadening and atomic data calculations have been developed for the recent years, especially atomic and line broadening data for highly ionized ions of argon. We present in this paper atomic data (such as energy levels, line strengths, oscillator strengths and radiative decay rates) for Ar XVI ion and quantum Stark broadening calculations for 10 Ar XVI lines. Radiative atomic data for this ion have been calculated using the University College London (UCL) codes (SUPERSTRUCTURE, DISTORTED WAVE, JAJOM) and have been compared with other results. Using our quantum mechanical method, our Stark broadening calculations for Ar XVI lines are performed at electron density Ne = 10 20 cm−3 and for electron temperature varying from 7.5×10 to 7.5×10 K. No Stark broadening results in the literature to compare with. So, our results come to fill this lack of data.",TRUE,noun phrase
R185,Plasma and Beam Physics,R139089,The dynamics of radio-frequency driven atmospheric pressure plasma jets,S554204,R139173,Research_plan,L389983,Electron dynamics,"The complex dynamics of radio-frequency driven atmospheric pressure plasma jets is investigated using various optical diagnostic techniques and numerical simulations. Absolute number densities of ground state atomic oxygen radicals in the plasma effluent are measured by two-photon absorption laser induced fluorescence spectroscopy (TALIF). Spatial profiles are compared with (vacuum) ultra-violet radiation from excited states of atomic oxygen and molecular oxygen, respectively. The excitation and ionization dynamics in the plasma core are dominated by electron impact and observed by space and phase resolved optical emission spectroscopy (PROES). The electron dynamics is governed through the motion of the plasma boundary sheaths in front of the electrodes as illustrated in numerical simulations using a hybrid code based on fluid equations and kinetic treatment of electrons.",TRUE,noun phrase
R185,Plasma and Beam Physics,R139118,Determination of NO densities in a surface dielectric barrier discharge using optical emission spectroscopy,S554404,R139183,Research_plan,L390163,NO densities,"A new computationally assisted diagnostic to measure NO densities in atmospheric-pressure microplasmas by Optical Emission Spectroscopy (OES) is developed and validated against absorption spectroscopy in a volume Dielectric Barrier Discharge (DBD). The OES method is then applied to a twin surface DBD operated in N₂ to measure the NO density as a function of the O₂ admixture (0.1%–1%). The underlying rate equation model reveals that NO(A²Σ⁺) is primarily excited by reactions of the ground state NO(X²Π) with metastables N₂(A³Σu⁺).",TRUE,noun phrase
R131,Polymer Chemistry,R161372,Biocatalytic Degradation Efficiency of Postconsumer Polyethylene Terephthalate Packaging Determined by Their Polymer Microstructures,S644430,R161375,Polymer,R161271,Polyethylene terephthalate,"Polyethylene terephthalate (PET) is the most important mass‐produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco‐friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 °C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 °C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain‐length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo‐ and endo‐type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo‐type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.",TRUE,noun phrase
R131,Polymer Chemistry,R161372,Biocatalytic Degradation Efficiency of Postconsumer Polyethylene Terephthalate Packaging Determined by Their Polymer Microstructures,S644431,R161375,has result,R161378,weight loss,"Polyethylene terephthalate (PET) is the most important mass‐produced thermoplastic polyester used as a packaging material. Recently, thermophilic polyester hydrolases such as TfCut2 from Thermobifida fusca have emerged as promising biocatalysts for an eco‐friendly PET recycling process. In this study, postconsumer PET food packaging containers are treated with TfCut2 and show weight losses of more than 50% after 96 h of incubation at 70 °C. Differential scanning calorimetry analysis indicates that the high linear degradation rates observed in the first 72 h of incubation is due to the high hydrolysis susceptibility of the mobile amorphous fraction (MAF) of PET. The physical aging process of PET occurring at 70 °C is shown to gradually convert MAF to polymer microstructures with limited accessibility to enzymatic hydrolysis. Analysis of the chain‐length distribution of degraded PET by nuclear magnetic resonance spectroscopy reveals that MAF is rapidly hydrolyzed via a combinatorial exo‐ and endo‐type degradation mechanism whereas the remaining PET microstructures are slowly degraded only by endo‐type chain scission causing no detectable weight loss. Hence, efficient thermostable biocatalysts are required to overcome the competitive physical aging process for the complete degradation of postconsumer PET materials close to the glass transition temperature of PET.",TRUE,noun phrase
R245,Power and Energy,R137143,Optimal Decomposition and Reconstruction of Discrete Wavelet Transformation for Short-Term Load Forecasting,S541995,R137144,Location ,R137135,New England,"To achieve high accuracy in prediction, a load forecasting algorithm must model various consumer behaviors in response to weather conditions or special events. Different triggers will have various effects on different customers and lead to difficulties in constructing an adequate prediction model due to non-stationary and uncertain characteristics in load variations. This paper proposes an open-ended model of short-term load forecasting (STLF) which has general prediction ability to capture the non-linear relationship between the load demand and the exogenous inputs. The prediction method uses the whale optimization algorithm, discrete wavelet transform, and multiple linear regression model (WOA-DWT-MLR model) to predict both system load and aggregated load of power consumers. WOA is used to optimize the best combination of detail and approximation signals from DWT to construct an optimal MLR model. The proposed model is validated with both the system-side data set and the end-user data set for Independent System Operator-New England (ISO-NE) and smart meter load data, respectively, based on Mean Absolute Percentage Error (MAPE) criterion. The results demonstrate that the proposed method achieves lower prediction error than existing methods and can have consistent prediction of non-stationary load conditions that exist in both test systems. The proposed method is, thus, beneficial to use in the energy management system.",TRUE,noun phrase
R343,Psychology,R44964,Group psychodrama for korean college students,S137935,R44965,Country,L84317,South Korea,"ABSTRACT Psychodrama was first introduced in the Korean literature in 1972, but its generalization to college students did not occur until the 1990s. Despite findings from psychodrama studies with Korean college students supporting psychodrama as effective for developing and maintaining good interpersonal relationships, as well as decreasing anxiety and stress, it is still underutilized in South Korea. Accordingly, the current study looked at implementing a psychodrama program in South Korean universities. The positive results of the program implementation suggest that psychodrama is a useful technique for improving Korean college students’ general development and mental stability including secure attachment.",TRUE,noun phrase
R31,Public Health,R110448,The effect of social distance measures on COVID-19 epidemics in Europe: an interrupted time series analysis,S503234,R110452,Method,L363621,Interrupted time series,"Following the introduction of unprecedented “stay-at-home” national policies, the COVID-19 pandemic recently started declining in Europe. Our research aims were to characterize the changepoint in the flow of the COVID-19 epidemic in each European country and to evaluate the association of the level of social distancing with the observed decline in the national epidemics. Interrupted time series analyses were conducted in 28 European countries. Social distance index was calculated based on Google Community Mobility Reports. Changepoints were estimated by threshold regression, national findings were analyzed by Poisson regression, and the effect of social distancing in mixed effects Poisson regression model. Our findings identified the most probable changepoints in 28 European countries. Before changepoint, incidence of new COVID-19 cases grew by 24% per day on average. From the changepoint, this growth rate was reduced to 0.9%, 0.3% increase, and to 0.7% and 1.7% decrease by increasing social distancing quartiles. The beneficial effect of higher social distance quartiles (i.e., turning the increase into decline) was statistically significant for the fourth quartile. Notably, many countries in lower quartiles also achieved a flat epidemic curve. In these countries, other plausible COVID-19 containment measures could contribute to controlling the first wave of the disease. The association of social distance quartiles with viral spread could also be hindered by local bottlenecks in infection control. Our results allow for moderate optimism related to the gradual lifting of social distance measures in the general population, and call for specific attention to the protection of focal micro-societies enriching high-risk elderly subjects, including nursing homes and chronic care facilities.",TRUE,noun phrase
R11,Science,R151157,"Emergency Response Information System Interoperability: Development of Chemical Incident Response Data Model",S626174,R156027,paper:Study Type,L430922,Activity theory,"Emergency response requires an efficient information supply chain for the smooth operations of intra- and inter-organizational emergency management processes. However, the breakdown of this information supply chain due to the lack of consistent data standards presents a significant problem. In this paper, we adopt a theory driven novel approach to develop a XML-based data model that prescribes a comprehensive set of data standards (semantics and internal structures) for emergency management to better address the challenges of information interoperability. Actual documents currently being used in mitigating chemical emergencies from a large number of incidents are used in the analysis stage. The data model development is guided by Activity Theory and is validated through a RFC-like process used in standards development. This paper applies the standards to the real case of a chemical incident scenario. Further, it complies with the national leading initiatives in emergency standards (National Information Exchange Model).",TRUE,noun phrase
R11,Science,R33985,Up-estuary dispersal of young-of-the-year bay anchovy Anchoa mitchilli in the Chesapeake Bay: inferences from microprobe analysis of strontium in otoliths,S117857,R33986,Species Order,R33984,Anchoa mitchilli,"Young-of-the-year (YOY) bay anchovy Anchoa mitchilli occur in higher proportion relative to larvae in the upper Chesapeake Bay. This has led to the hypothesis that up-bay dispersal favors recruitment. Here we test whether recruitment of bay anchovy to different parts of the Chesapeake Bay results from differential dispersal rates. Electron microprobe analysis of otolith strontium was used to hind-cast patterns and rates of movement across salinity zones. Individual chronologies of strontium were constructed for 55 bay anchovy aged 43 to 103 d collected at 5 Chesapeake Bay mainstem sites representing upper, middle, and lower regions of the bay during September 1998. Most YOY anchovy were estimated to have originated in the lower bay. Those collected at 5 and 11 psu sites exhibited the highest past dispersal rates, all in an up-estuary direction. No significant net dispersal up- or down-estuary occurred for recruits captured at the polyhaline (≥18 psu) site. Initiation of ingress to lower salinity waters (<15 psu) was estimated to occur near metamorphosis, during the early juvenile stage, at sizes ≥25 mm standard length (SL) and ages ≥50 d after hatch. Estimated maximum upstream dispersal rate (over-the-ground speed) during the first 50 to 100 d of life exceeded 50 mm s−1.",TRUE,noun phrase
R11,Science,R33827,Assessment of copy number variation using the Illumina Infinium 1M SNP-array: A comparison of methodological approaches in the Spanish Bladder Cancer/EPICURO study,S117325,R33828,Algorithm,R33826,and QuantiSNP,"High‐throughput single nucleotide polymorphism (SNP)‐array technologies allow to investigate copy number variants (CNVs) in genome‐wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation‐dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19–0.28). In contrast, specificity was very high (0.97–0.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome‐wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1–10, 2011. © 2011 Wiley‐Liss, Inc.",TRUE,noun phrase
R11,Science,R33991,Facultative catadromy of the eel Anguilla japonica between freshwater and seawater habitats,S117880,R33992,Species Order,R33989,Anguilla japonica,"To confirm the occurrence of marine residents of the Japanese eel, Anguilla japonica, which have never entered freshwater ('sea eels'), we measured Sr and Ca concentrations by X-ray electron microprobe analysis of the otoliths of 69 yellow and silver eels, collected from 10 localities in seawater and freshwater habitats around Japan, and classified their migratory histories. Two-dimensional images of the Sr concentration in the otoliths showed that all specimens generally had a high Sr core at the center of their otolith, which corresponded to a period of their leptocephalus and early glass eel stages in the ocean, but there were a variety of different patterns of Sr concentration and concentric rings outside the central core. Line analysis of Sr/Ca ratios along the radius of each otolith showed peaks (ca 15 × 10 -3 ) between the core and out to about 150 µm (elver mark). The pattern change of the Sr/Ca ratio outside of 150 µm indicated 3 general categories of migratory history: 'river eels', 'estuarine eels' and 'sea eels'. These 3 categories corresponded to mean values of Sr/Ca ratios of ≥ 6.0 × 10 -3 for sea eels, which spent most of their life in the sea and did not enter freshwater, of 2.5 to 6.0 × 10 -3 for estuarine eels, which inhabited estuaries or switched between different habitats, and of <2.5 × 10 -3 for river eels, which entered and remained in freshwater river habitats after arrival in the estuary. The occurrence of sea eels was 20% of all specimens examined and that of river eels, 23%, while estuarine eels were the most prevalent (57%). The occurrence of sea eels was confirmed at 4 localities in Japanese coastal waters, including offshore islands, a small bay and an estuary. The finding of estuarine eels as an intermediate type, which appear to frequently move between different habitats, and their presence at almost all localities, suggested that A. japonica has a flexible pattern of migration, with an ability to adapt to various habitats and salinities. Thus, anguillid eel migrations into freshwater are clearly not an obligatory migratory pathway, and this form of diadromy should be defined as facultative catadromy, with the sea eel as one of several ecophenotypes. Furthermore, this study indicates that eels which utilize the marine environment to various degrees during their juvenile growth phase may make a substantial contribution to the spawning stock each year.",TRUE,noun phrase
R11,Science,R34001,Use of otolith Sr:Ca ratios to study the riverine migratory behaviors of Japanese eel Anguilla japonica,S117933,R34002,Species Order,R33989,Anguilla japonica,"To understand the migratory behavior and habitat use of the Japanese eel Anguilla japonica in the Kaoping River, SW Taiwan, the temporal changes of strontium (Sr) and calcium (Ca) contents in otoliths of the eels in combination with age data were examined by wavelength dispersive X-ray spectrometry with an electron probe microanalyzer. Ages of the eel were determined by the annulus mark in their otolith. The pattern of the Sr:Ca ratios in the otoliths, before the elver stage, was similar among all specimens. Post-elver stage Sr:Ca ratios indicated that the eels experienced different salinity histories in their growth phase yellow stage. The mean (±SD) Sr:Ca ratios in otoliths beyond elver check of the 6 yellow eels from the freshwater middle reach were 1.8 ± 0.2 x 10 -3 with a maximum value of 3.73 x 10 -3 . Sr:Ca ratios of less than 4 x 10-3 were used to discriminate the freshwater from seawater resident eels. Eels from the lower reach of the river were classified into 3 types: (1) freshwater contingents, Sr:Ca ratio <4 x 10 -3 , constituted 14 % of the eels examined; (2) seawater contingent, Sr:Ca ratio 5.1 ± 1.1 x 10-3 (5%); and (3) estuarine contingent, Sr:Ca ratios ranged from 0 to 10 x 10 -3 , with migration between freshwater and seawater (81 %). The frequency distribution of the 3 contingents differed between yellow and silver eel stages (0.01 < p < 0.05 for each case) and changed with age of the eel, indicating that most of the eels stayed in the estuary for the first year then migrated to the freshwater until 6 yr old. The eel population in the river system was dominated by the estuarine contingent, probably because the estuarine environment was more stable and had a larger carrying capacity than the freshwater middle reach did, and also due to a preference for brackish water by the growth-phase, yellow eel.",TRUE,noun phrase
R11,Science,R33999,Migratory behaviour and habitat use by American eels Anguilla rostrata as revealed by otolith microchemistry,S117920,R34000,Species Order,R33998,Anguilla rostrata,"The environmental history of American eels Anguilla rostrata from the East River, Nova Scotia, was investigated by electron microprobe analysis of the Sr:Ca ratio along transects of the eel otolith. The mean (±SD) Sr:Ca ratio in the otoliths of juvenile American eels was 5.42 × 10 -3 ± 1.22 × 10 -3 at the elver check and decreased to 2.38 × 10 -3 ± 0.99 × 10 -3 at the first annulus for eels that migrated directly into the river but increased to 7.28 × 10 -3 ± 1.09 × 10 -3 for eels that had remained in the estuary for 1 yr or more before entering the river. At the otolith edge, Sr:Ca ratios of 4.0 × 10 -3 or less indicated freshwater residence and ratios of 5.0 × 10 -3 or more indicated estuarine residence. Four distinct but interrelated behavioural groups were identified by the temporal changes in Sr:Ca ratios in their otoliths: (1) entrance into freshwater as an elver, (2) coastal or estuarine residence for 1 yr or more before entering freshwater, and, after entering freshwater, (3) continuous freshwater residence until the silver eel stage and (4) freshwater residence for 1 yr or more before engaging in periodic, seasonal movements between estuary and freshwater until the silver eel stage. Small (< 70 mm total length), highly pigmented elvers that arrived early in the elver run were confirmed as slow growing age-1 juvenile eels. Juvenile eels that remained 1 yr or more in the estuary before entering the river contributed to the production of silver eels to a relatively greater extent than did elvers that entered the river during the year of continental arrival.",TRUE,noun phrase
R11,Science,R33795,Comparative analysis of algorithms for identifying amplifications and deletions in array CGH data,S117178,R33796,Platform,R33792,array CGH,"MOTIVATION Array Comparative Genomic Hybridization (CGH) can reveal chromosomal aberrations in the genomic DNA. These amplifications and deletions at the DNA level are important in the pathogenesis of cancer and other diseases. While a large number of approaches have been proposed for analyzing the large array CGH datasets, the relative merits of these methods in practice are not clear. RESULTS We compare 11 different algorithms for analyzing array CGH data. These include both segment detection methods and smoothing methods, based on diverse techniques such as mixture models, Hidden Markov Models, maximum likelihood, regression, wavelets and genetic algorithms. We compute the Receiver Operating Characteristic (ROC) curves using simulated data to quantify sensitivity and specificity for various levels of signal-to-noise ratio and different sizes of abnormalities. We also characterize their performance on chromosomal regions of interest in a real dataset obtained from patients with Glioblastoma Multiforme. While comparisons of this type are difficult due to possibly sub-optimal choice of parameters in the methods, they nevertheless reveal general characteristics that are helpful to the biological investigator.",TRUE,noun phrase
R11,Science,R31336,Composition estimations in a middle-vessel batch distillation column using artificial neural networks,S105118,R31337,Systems applied,R31331,Batch distillation,"A virtual sensor that estimates product compositions in a middle-vessel batch distillation column has been developed. The sensor is based on a recurrent artificial neural network, and uses information available from secondary measurements (such as temperatures and flow rates). The criteria adopted for selecting the most suitable training data set and the benefits deriving from pre-processing these data by means of principal component analysis are demonstrated by simulation. The effects of sensor location, model initialization, and noisy temperature measurements on the performance of the soft sensor are also investigated. It is shown that the estimated compositions are in good agreement with the actual values.",TRUE,noun phrase
R11,Science,R31638,"Dynamic modeling and optimal control of batch reactors, based on structure approaching hybrid neural networks",S105993,R31639,Systems applied,R31384,Batch reactor,"A novel Structure Approaching Hybrid Neural Network (SAHNN) approach to model batch reactors is presented. The Virtual Supervisor−Artificial Immune Algorithm method is utilized for the training of SAHNN, especially for the batch processes with partial unmeasurable state variables. SAHNN involves the use of approximate mechanistic equations to characterize unmeasured state variables. Since the main interest in batch process operation is on the end-of-batch product quality, an extended integral square error control index based on the SAHNN model is applied to track the desired temperature profile of a batch process. This approach introduces model mismatches and unmeasured disturbances into the optimal control strategy and provides a feedback channel for control. The performance of robustness and antidisturbances of the control system are then enhanced. The simulation result indicates that the SAHNN model and model-based optimal control strategy of the batch process are effective.",TRUE,noun phrase
R11,Science,R25451,One-Day Bayesian Cloning of Type 1 Diabetes Subjects: Toward a Single-Day UVA/Padova Type 1 Diabetes Simulator,S76325,R25452,Method,L47613,Bayesian method,"Objective: The UVA/Padova Type 1 Diabetes (T1DM) Simulator has been shown to be representative of a T1DM population observed in a clinical trial, but has not yet been identified on T1DM data. Moreover, the current version of the simulator is “single meal” while making it “single-day centric,” i.e., by describing intraday variability, would be a step forward to create more realistic in silico scenarios. Here, we propose a Bayesian method for the identification of the model from plasma glucose and insulin concentrations only, by exploiting the prior model parameter distribution. Methods: The database consists of 47 T1DM subjects, who received dinner, breakfast, and lunch (respectively, 80, 50, and 60 CHO grams) in three 23-h occasions (one openand one closed-loop). The model is identified using the Bayesian Maximum a Posteriori technique, where the prior parameter distribution is that of the simulator. Diurnal variability of glucose absorption and insulin sensitivity is allowed. Results: The model well describes glucose traces (coefficient of determination R2 = 0.962 ± 0.027) and the posterior parameter distribution is similar to that included in the simulator. Absorption parameters at breakfast are significantly different from those at lunch and dinner, reflecting more rapid dynamics of glucose absorption. Insulin sensitivity varies in each individual but without a specific pattern. Conclusion: The incorporation of glucose absorption and insulin sensitivity diurnal variability into the simulator makes it more realistic. Significance: The proposed method, applied to the increasing number of longterm artificial pancreas studies, will allow to describe week/month variability, thus further refining the simulator.",TRUE,noun phrase
R11,Science,R25070,Leveraging online social networks and external data sources to predict personality,S74208,R25071,Machine learning algorithms,L46097,Bayesian Network,"Over the past decade, people have been expressing more and more of their personalities online. Online social networks such as Facebook.com capture much of individuals' personalities through their published interests, attributes and social interactions. Knowledge of an individual's personality can be of wide utility, either for social research, targeted marketing or a variety of other fields. A key problem to predicting and utilizing personality information is the myriad of ways it is expressed across various people, locations and cultures. Similarly, a model predicting personality based on online data which cannot be extrapolated to ""real world"" situations is of limited utility for researchers. This paper presents initial work done on generating a probabilistic model of personality which uses representations of people's connections to other people, places, cultures, and ideas, as expressed through Facebook. To this end, personality was predicted using a machine learning method known as a Bayesian Network. The model was trained using Facebook data combined with external data sources to allow further inference. The results of this paper present one predictive model of personality that this project has produced. This model demonstrates the potential of this methodology in two ways: First, it is able to explain up to 56% of all variation in a personality trait from a sample of 615 individuals. Second it is able to clearly present how this variability is explained through findings such as how to determine how agreeable a man is based on his age, number of Facebook wall posts, and his willingness to disclose his preference for music made by Lady Gaga.",TRUE,noun phrase
R11,Science,R152993,Citizens’ adaptive or avoiding behavioral response to an emergency message on their mobile phone,S616930,R153913,paper: Theory / Construct / Model,L425481,Behavioral Avoidance,"Abstract Since November 2012, Dutch civil defense organizations employ NL-Alert, a cellular broadcast-based warning system to inform the public. Individuals receive a message on their mobile phone about the actual threat, as well as some advice how to deal with the situation at hand. This study reports on the behavioral effects of NL-Alert (n = 643). The current risk communication literature suggested underlying mechanisms as perceived threat, efficacy beliefs, social norms, information sufficiency, and perceived message quality. Results indicate that adaptive behavior and behavioral avoidance can be predicted by subsets of these determinants. Affective and social predictors appear to be more important in this context that socio-cognitive predictors. Implications for the use of cellular broadcast systems like NL-Alert as a warning tool in emergency situations are discussed.",TRUE,noun phrase
R11,Science,R156002,Citizens' adaptive or avoiding behavioral response to an emergency message on their mobile phone,S626735,R156103,paper: Theory / Construct / Model,L431407,Behavioral Avoidance,"Abstract Since November 2012, Dutch civil defense organizations employ NL-Alert, a cellular broadcast-based warning system to inform the public. Individuals receive a message on their mobile phone about the actual threat, as well as some advice how to deal with the situation at hand. This study reports on the behavioral effects of NL-Alert (n = 643). The current risk communication literature suggested underlying mechanisms as perceived threat, efficacy beliefs, social norms, information sufficiency, and perceived message quality. Results indicate that adaptive behavior and behavioral avoidance can be predicted by subsets of these determinants. Affective and social predictors appear to be more important in this context that socio-cognitive predictors. Implications for the use of cellular broadcast systems like NL-Alert as a warning tool in emergency situations are discussed.",TRUE,noun phrase
R11,Science,R26828,An Integrated Model and Solution Approach for Fleet Sizing with Heterogeneous Assets,S86023,R26829,Method,R26826,Benders decomposition,"This paper addresses a fleet-sizing problem in the context of the truck-rental industry. Specifically, trucks that vary in capacity and age are utilized over space and time to meet customer demand. Operational decisions (including demand allocation and empty truck repositioning) and tactical decisions (including asset procurements and sales) are explicitly examined in a linear programming model to determine the optimal fleet size and mix. The method uses a time-space network, common to fleet-management problems, but also includes capital cost decisions, wherein assets of different ages carry different costs, as is common to replacement analysis problems. A two-phase solution approach is developed to solve large-scale instances of the problem. Phase I allocates customer demand among assets through Benders decomposition with a demand-shifting algorithm assuring feasibility in each subproblem. Phase II uses the initial bounds and dual variables from Phase I and further improves the solution convergence without increasing computer memory requirements through the use of Lagrangian relaxation. Computational studies are presented to show the effectiveness of the approach for solving large problems within reasonable solution gaps.",TRUE,noun phrase
R11,Science,R25997,Automated labeling in document images,S80516,R26016,Application Domain,L50887,biomedical journals,"The National Library of Medicine (NLM) is developing an automated system to produce bibliographic records for its MEDLINE® database. This system, named Medical Article Record System (MARS), employs document image analysis and understanding techniques and optical character recognition (OCR). This paper describes a key module in MARS called the Automated Labeling (AL) module, which labels all zones of interest (title, author, affiliation, and abstract) automatically. The AL algorithm is based on 120 rules that are derived from an analysis of journal page layouts and features extracted from OCR output. Experiments carried out on more than 11,000 articles in over 1,000 biomedical journals show the accuracy of this rule-based algorithm to exceed 96%.",TRUE,noun phrase
R11,Science,R25979,Syntactic segmentation and labeling of digitized pages from technical journals,S80402,R26008,Key Idea,L50789,block grammar,A method for extracting alternating horizontal and vertical projection profiles from nested sub-blocks of scanned page images of technical documents is discussed. The thresholded profile strings are parsed using the compiler utilities Lex and Yacc. The significant document components are demarcated and identified by the recursive application of block grammars. Backtracking for error recovery and branch and bound for maximum-area labeling are implemented with Unix Shell programs. Results of the segmentation and labeling process are stored in a labeled x-y tree. It is shown that families of technical documents that share the same layout conventions can be readily analyzed. Results from experiments in which more than 20 types of document entities were identified in sample pages from two journals are presented. >,TRUE,noun phrase
R11,Science,R26248,Omya Hustadmarmor optimizes its supply chain for delivering calcium carbonate slurry to European paper manufacturers,S82750,R26374,Products,R26373,Calcium carbonate slurry,"The Norwegian company Omya Hustadmarmor supplies calcium carbonate slurry to European paper manufacturers from a single processing plant, using chemical tank ships of various sizes to transport its products. Transportation costs are lower for large ships than for small ships, but their use increases planning complexity and creates problems in production. In 2001, the company faced overwhelming operational challenges and sought operations-research-based planning support. The CEO, Sturla Steinsvik, contacted More Research Molde, which conducted a project that led to the development of a decision-support system (DSS) for maritime inventory routing. The core of the DSS is an optimization model that is solved through a metaheuristic-based algorithm. The system helps planners to make stronger, faster decisions and has increased predictability and flexibility throughout the supply chain. It has saved production and transportation costs close to US$7 million a year. We project additional direct savings of nearly US$4 million a year as the company adds even larger ships to the fleet as a result of the project. In addition, the company has avoided investments of US$35 million by increasing capacity utilization. Finally, the project has had a positive environmental effect by reducing overall oil consumption by more than 10 percent.",TRUE,noun phrase
R11,Science,R151288,"Factors impacting the intention to use emergency notification services in campus emergencies: an empirical investigation",S616824,R153901,Emergency Type,L425387,Campus emergencies,"Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.",TRUE,noun phrase
R11,Science,R27735,A video game improves behavioral outcomes in adolescents and young adults with cancer: A randomized trial,S90290,R27736,Topic,R27734,Cancer treatment,"OBJECTIVE. Suboptimal adherence to self-administered medications is a common problem. The purpose of this study was to determine the effectiveness of a video-game intervention for improving adherence and other behavioral outcomes for adolescents and young adults with malignancies including acute leukemia, lymphoma, and soft-tissue sarcoma. METHODS. A randomized trial with baseline and 1- and 3-month assessments was conducted from 2004 to 2005 at 34 medical centers in the United States, Canada, and Australia. A total of 375 male and female patients who were 13 to 29 years old, had an initial or relapse diagnosis of a malignancy, and currently undergoing treatment and expected to continue treatment for at least 4 months from baseline assessment were randomly assigned to the intervention or control group. The intervention was a video game that addressed issues of cancer treatment and care for teenagers and young adults. Outcome measures included adherence, self-efficacy, knowledge, control, stress, and quality of life. For patients who were prescribed prophylactic antibiotics, adherence to trimethoprim-sulfamethoxazole was tracked by electronic pill-monitoring devices (n = 200). Adherence to 6-mercaptopurine was assessed through serum metabolite assays (n = 54). RESULTS. Adherence to trimethoprim-sulfamethoxazole and 6-mercaptopurine was greater in the intervention group. Self-efficacy and knowledge also increased in the intervention group compared with the control group. The intervention did not affect self-report measures of adherence, stress, control, or quality of life. CONCLUSIONS. The video-game intervention significantly improved treatment adherence and indicators of cancer-related self-efficacy and knowledge in adolescents and young adults who were undergoing cancer therapy. The findings support current efforts to develop effective video-game interventions for education and training in health care.",TRUE,noun phrase
R11,Science,R30753,The evacuation optimal network design problem: model formulation and comparisons,S103579,R30921,Decisions First-stage,R30841,Capacity expansion,"Abstract The goal of this paper is twofold. First, we present a stochastic programming-based model that provides optimal design solutions for transportation networks in light of possible emergency evacuations. Second, as traffic congestion is a growing problem in metropolitan areas around the world, decision makers might not be willing to design transportation networks solely for evacuation purposes since daily traffic patterns differ tremendously from traffic observed during evacuations. This is especially true when potential disaster locations are limited in number and confined to specific regions (e.g. coastal regions might be more prone to flooding). However, as extreme events such as excessive rainfall become more prevalent everywhere, it is less obvious that the design of transportation networks for evacuation planning and congestion reduction is mutually exclusive. That is, capacity expansion decisions to reduce congestion might also be reasonable from an evacuation planning point of view. Conversely, expansion decisions for evacuation planning might turn out to be effective for congestion relief. To date, no numerical evidence has been presented in the literature to support or disprove these conjectures. Preliminary numerical evidence is provided in this paper.",TRUE,noun phrase
R11,Science,R27774,"Deal or No Deal: using games to improve student learning, retention and decision-making",S90453,R27775,Method,R25361,Case Study,"Student understanding and retention can be enhanced and improved by providing alternative learning activities and environments. Education theory recognizes the value of incorporating alternative activities (games, exercises and simulations) to stimulate student interest in the educational environment, enhance transfer of knowledge and improve learned retention with meaningful repetition. In this case study, we investigate using an online version of the television game show, ‘Deal or No Deal’, to enhance student understanding and retention by playing the game to learn expected value in an introductory statistics course, and to foster development of critical thinking skills necessary to succeed in the modern business environment. Enhancing the thinking process of problem solving using repetitive games should also improve a student's ability to follow non-mathematical problem-solving processes, which should improve the overall ability to process information and make logical decisions. Learning and retention are measured to evaluate the success of the students’ performance.",TRUE,noun phrase
R11,Science,R31364,Artificial neural networks to infer biomass and product concentration during the production of penicillin G acylase from Bacillus megaterium,S105197,R31365,Objective/estimate(s) process systems,R31363,Cellular concentration,"BACKGROUND: Production of microbial enzymes in bioreactors is a complex process including such phenomena as metabolic networks and mass transport resistances. The use of neural networks (NNs) to infer the state of bioreactors may be an interesting option that may handle the nonlinear dynamics of biomass growth and protein production. RESULTS: Feedforward multilayer perceptron (MLP) NNs were used for identification of the cultivation phase of Bacillus megaterium to produce the enzyme penicillin G acylase (EC. 3.5.1.11). The following variables were used as input to the net: run time and carbon dioxide concentration in the exhausted gas. The NN output associates a numerical value to the metabolic state of the cultivation, close to 0 during the lag phase, close to 1 during the exponential phase and approximately 2 for the stationary phase. This is a non-conventional approach for pattern recognition. During the exponential phase, another MLP was used to infer cellular concentration. Time, carbon dioxide concentration and stirrer speed form an integrated net input vector. Cellular concentrations provided by the NN were used in a hybrid approach to estimate product concentrations of the enzyme. The model employed a first-order approximation. CONCLUSION: Results showed that the algorithm was able to infer accurate values of cellular and product concentrations up to the end of the exponential growth phase, where an industrial run should stop. Copyright © 2008 Society of Chemical Industry",TRUE,noun phrase
R11,Science,R33280,Identifying the factors influencing the performance of reverse supply chains (RSC),S115333,R33281,Critical success factors,R33276,channel relationship,"This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model. We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web‐based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.",TRUE,noun phrase
R11,Science,R26419,Characterization of Two Chitinase Genes and One Chitosanase Gene Encoded by Chlorella Virus PBCV-1,S83009,R26420,Sources,L52416,Chlorella virus,"Chlorella virus PBCV-1 encodes two putative chitinase genes, a181/182r and a260r, and one chitosanase gene, a292l. The three genes were cloned and expressed in Escherichia coli. The recombinant A181/182R protein has endochitinase activity, recombinant A260R has both endochitinase and exochitinase activities, and recombinant A292L has chitosanase activity. Transcription of a181/182r, a260r, and a292l genes begins at 30, 60, and 60 min p.i., respectively; transcription of all three genes continues until the cells lyse. A181/182R, A260R, and A292L proteins are first detected by Western blots at 60, 90, and 120 min p.i., respectively. Therefore, a181/182r is an early gene and a260r and a292l are late genes. All three genes are widespread in chlorella viruses. Phylogenetic analyses indicate that the ancestral condition of the a181/182r gene arose from the most recent common ancestor of a gene found in tobacco, whereas the genealogical position of the a260r gene could not be unambiguously resolved.",TRUE,noun phrase
R11,Science,R70589,Electronic health record-based detection of risk factors for Clostridium difficile infection relapse,S336276,R70590,Features,R46892,clinical data,"Objective. A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01–1.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11–2.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30–0.75]), cephalosporin (OR, 1.80 [95% CI, 1.19–2.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05–2.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64–4.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure.",TRUE,noun phrase
R11,Science,R70585,Development and validation of a Clostridium difficile infection risk prediction model,S335915,R70586,Infection,L242701,Clostridium difficile infection,"Objective. To develop and validate a risk prediction model that could identify patients at high risk for Clostridium difficile infection (CDI) before they develop disease. Design and Setting. Retrospective cohort study in a tertiary care medical center. Patients. Patients admitted to the hospital for at least 48 hours during the calendar year 2003. Methods. Data were collected electronically from the hospital's Medical Informatics database and analyzed with logistic regression to determine variables that best predicted patients' risk for development of CDI. Model discrimination and calibration were calculated. The model was bootstrapped 500 times to validate the predictive accuracy. A receiver operating characteristic curve was calculated to evaluate potential risk cutoffs. Results. A total of 35,350 admitted patients, including 329 with CDI, were studied. Variables in the risk prediction model were age, CDI pressure, times admitted to hospital in the previous 60 days, modified Acute Physiology Score, days of treatment with high-risk antibiotics, whether albumin level was low, admission to an intensive care unit, and receipt of laxatives, gastric acid suppressors, or antimotility drugs. The calibration and discrimination of the model were very good to excellent (C index, 0.88; Brier score, 0.009). Conclusions. The CDI risk prediction model performed well. Further study is needed to determine whether it could be used in a clinical setting to prevent CDI-associated outcomes and reduce costs.",TRUE,noun phrase
R11,Science,R70587,Prediction of Recurrent Clostridium Difficile Infection Using Comprehensive Electronic Medical Records in an Integrated Healthcare Delivery System,S335928,R70588,Infection,L242712,Clostridium difficile infection,"BACKGROUND Predicting recurrent Clostridium difficile infection (rCDI) remains difficult. METHODS. We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007–2013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrences (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced incident CDI and 144 who experienced rCDI. Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated 3 models with varying degrees of complexity and 1 previously published model. RESULTS Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs, and laboratory test results), none of the models discriminated well (c statistics, 0.591–0.605), had good calibration, or had good explanatory power. CONCLUSIONS Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power. Infect Control Hosp Epidemiol 2017;38:1196–1203",TRUE,noun phrase
R11,Science,R70589,Electronic health record-based detection of risk factors for Clostridium difficile infection relapse,S336267,R70590,Infection,R70634,Clostridium difficile infection,"Objective. A major challenge in treating Clostridium difficile infection (CDI) is relapse. Many new therapies are being developed to help prevent this outcome. We sought to establish risk factors for relapse and determine whether fields available in an electronic health record (EHR) could be used to identify high-risk patients for targeted relapse prevention strategies. Design. Retrospective cohort study. Setting. Large clinical data warehouse at a 4-hospital healthcare organization. Participants. Data were gathered from January 2006 through October 2010. Subjects were all inpatient episodes of a positive C. difficile test where patients were available for 56 days of follow-up. Methods. Relapse was defined as another positive test between 15 and 56 days after the initial test. Multivariable regression was performed to identify factors independently associated with CDI relapse. Results. Eight hundred twenty-nine episodes met eligibility criteria, and 198 resulted in relapse (23.9%). In the final multivariable analysis, risk of relapse was associated with age (odds ratio [OR], 1.02 per year [95% confidence interval (CI), 1.01–1.03]), fluoroquinolone exposure in the 90 days before diagnosis (OR, 1.58 [95% CI, 1.11–2.26]), intensive care unit stay in the 30 days before diagnosis (OR, 0.47 [95% CI, 0.30–0.75]), cephalosporin (OR, 1.80 [95% CI, 1.19–2.71]), proton pump inhibitor (PPI; OR, 1.55 [95% CI, 1.05–2.29]), and metronidazole exposure after diagnosis (OR, 2.74 [95% CI, 1.64–4.60]). A prediction model tuned to ensure a 50% probability of relapse would flag 14.6% of CDI episodes. Conclusions. Data from a comprehensive EHR can be used to identify patients at high risk for CDI relapse. Major risk factors include antibiotic and PPI exposure.",TRUE,noun phrase
R11,Science,R70591,Improving Risk Prediction of Clostridium Difficile Infection Using Temporal Event-Pairs,S335953,R70592,Infection,L242733,Clostridium difficile infection,"Clostridium Difficile Infection (CDI) is a contagious healthcare-associated infection that imposes a significant burden on the healthcare system. In 2011 alone, half a million patients suffered from CDI in the United States, 29,000 dying within 30 days of diagnosis. Determining which hospital patients are at risk for developing CDI is critical to helping healthcare workers take timely measures to prevent or detect and treat this infection. We improve the state of the art of CDI risk prediction by designing an ensemble logistic regression classifier that given partial patient visit histories, outputs the risk of patients acquiring CDI during their current hospital visit. The novelty of our approach lies in the representation of each patient visit as a collection of co-occurring and chronologically ordered pairs of events. This choice is motivated by our hypothesis that CDI risk is influenced not just by individual events (e.g., Being prescribed a first generation cephalosporin antibiotic), but by the temporal ordering of individual events (e.g., Antibiotic prescription followed by transfer to a certain hospital unit). While this choice explodes the number of features, we use a randomized greedy feature selection algorithm followed by BIC minimization to reduce the dimensionality of the feature space, while retaining the most relevant features. We apply our approach to a rich dataset from the University of Iowa Hospitals and Clinics (UIHC), curated from diverse sources, consisting of 200,000 visits (30,000 per year, 2006-2011) involving 125,000 unique patients, 2 million diagnoses, 8 million prescriptions, 400,000 room transfers spanning a hospital with 700 patient rooms and 200 units. Our approach to classification produces better risk predictions (AUC) than existing risk estimators for CDI, even when trained just on data available at patient admission. It also identifies novel risk factors for CDI that are combinations of co-occurring and chronologically ordered events.",TRUE,noun phrase
R11,Science,R70593,A Multi-Center Prospective Derivation and Validation of a Clinical Prediction Tool for Severe Clostridium difficile Infection,S335967,R70594,Infection,L242745,Clostridium difficile infection,"Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age ≥ 65 years, peak serum creatinine ≥2 mg/dL and peak peripheral blood leukocyte count of ≥20,000 cells/μL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI.",TRUE,noun phrase
R11,Science,R70595,"A Generalizable, Data-Driven Approach to Predict Daily Risk of Clostridium difficile Infection at Two Large Academic Health Centers",S335979,R70596,Infection,L242755,Clostridium difficile infection,"OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80–0.84) and 0.75 (95% CI, 0.73–0.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425–433",TRUE,noun phrase
R11,Science,R29359,Cloud ERP system customization challenges,S97618,R29360,field,R29356,Cloud ERP,"Customization is one of the known challenges in traditional ERP systems. With the advent of Cloud ERP systems, a question of determining the state of such systems regarding customization and configuration ability arises. As there are only a few literature sources partially covering this topic, a more comprehensive and systematic literature review is needed. Thus, this paper presents a literature review performed in order to give an overview of reported research on ""Cloud ERP Customization"" topic performed in the last 5 years. In two search iterations, a total of 32 relevant papers are identified and analyzed. The results show that several dominant research trends are identified along with 12 challenges and issues. Additionally, based on the results, the possible future researches are proposed.",TRUE,noun phrase
R11,Science,R33976,"Reconstructing habitat use of Coilia mystus and Coilia ectenes of the Yangtze River estuary, and of Coilia ectenes of Taihu Lake, based on otolith strontium and calcium",S117820,R33977,Species Order,R33972,Coilia mystus,"The habitat use and migratory patterns of Osbeck’s grenadier anchovy Coilia mystus in the Yangtze estuary and the estuarine tapertail anchovy Coilia ectenes from the Yangtze estuary and Taihu Lake, China, were studied by examining the environmental signatures of strontium and calcium in their otoliths using electron probe microanalysis. The results indicated that Taihu C. ectenes utilizes only freshwater habitats, whereas the habitat use patterns of Yangtze C. ectenes and C. mystus were much more flexible, apparently varying among fresh, brackish and marine areas. The present study suggests that the spawning populations of Yangtze C. ectenes and C. mystus in the Yangtze estuary consist of individuals with different migration histories, and individuals of these two Yangtze Coilia species seem to use a variety of different habitats during the non-spawning seasons.",TRUE,noun phrase
R11,Science,R33476,An empirical study on the impact of critical success factors on the balanced scorecard performance in Korean green supply chain management enterprises,S115697,R33477,Critical success factors,R33470,Collaboration with partners,"Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.",TRUE,noun phrase
R11,Science,R33534,Application of critical success factors in supply chain management,S115823,R33535,Critical success factors,R33533,collaborative partnership,"This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115565,R33407,Critical success factors,R33404,commercial image,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R34288,Macroeconomic Convergence in Southern Africa,S119264,R34289,Justification/ recommendation,L72045,Common long-run trends,"In this paper we aim to answer the following two questions: 1) has the Common Monetary Area in Southern Africa (henceforth CMA) ever been an optimal currency area (OCA)? 2) What are the costs and benefits of the CMA for its participating countries? In order to answer these questions, we carry out a two-step econometric exercise based on the theory of generalised purchasing power parity (G-PPP). The econometric evidence shows that the CMA (but also Botswana as a de facto member) form an OCA given the existence of common long-run trends in their bilateral real exchange rates. Second, we also test that in the case of the CMA and Botswana the smoothness of the operation of the common currency area — measured through the degree of relative price correlation — depends on a variety of factors. These factors signal both the advantages and disadvantages of joining a monetary union. On the one hand, the more open and more similarly diversified the economies are, the higher the benefits they ... Ce Document de travail s'efforce de repondre a deux questions : 1) la zone monetaire commune de l'Afrique australe (Common Monetary Area - CMA) a-t-elle vraiment reussi a devenir une zone monetaire optimale ? 2) quels sont les couts et les avantages de la CMA pour les pays participants ? Nous avons effectue un exercice econometrique en deux etapes base sur la theorie des parites de pouvoir d'achat generalisees. D'apres les resultats econometriques, la CMA (avec le Botswana comme membre de facto) est effectivement une zone monetaire optimale etant donne les evolutions communes sur le long terme de leurs taux de change bilateraux. Nous avons egalement mis en evidence que le bon fonctionnement de l'union monetaire — mesure par le degre de correlation des prix relatifs — depend de plusieurs facteurs. Ces derniers revelent a la fois les couts et les avantages de l'appartenance a une union monetaire. D'un cote, plus les economies sont ouvertes et diversifiees de facon comparable, plus ...",TRUE,noun phrase
R11,Science,R151254,"CAMPUS EMERGENCY NOTIFICATION SYSTEMS: AN EXAMINATION OF FACTORS AFFECTING COMPLIANCE WITH ALERTS",S612295,R153071,paper: Theory / Concept / Model,L422190,Compliance theory,"The increasing number of campus-related emergency incidents, in combination with the requirements imposed by the Clery Act, have prompted college campuses to develop emergency notification systems to inform community members of extreme events that may affect them. Merely deploying emergency notification systems on college campuses, however, does not guarantee that these systems will be effective; student compliance plays a very important role in establishing such effectiveness. Immediate compliance with alerts, as opposed to delayed compliance or noncompliance, is a key factor in improving student safety on campuses. This paper investigates the critical antecedents that motivate students to comply immediately with messages from campus emergency notification systems. Drawing on Etzioni's compliance theory, a model is developed. Using a scenario-based survey method, the model is tested in five types of events--snowstorm, active shooter, building fire, health-related, and robbery--and with more than 800 college students from the Northern region of the United States. The results from this study suggest that subjective norm and information quality trust are, in general, the most important factors that promote immediate compliance. This research contributes to the literature on compliance, emergency notification systems, and emergency response policies.",TRUE,noun phrase
R11,Science,R151254,"CAMPUS EMERGENCY NOTIFICATION SYSTEMS: AN EXAMINATION OF FACTORS AFFECTING COMPLIANCE WITH ALERTS",S626518,R156076,paper: Theory / Construct / Model,L431217,Compliance theory,"The increasing number of campus-related emergency incidents, in combination with the requirements imposed by the Clery Act, have prompted college campuses to develop emergency notification systems to inform community members of extreme events that may affect them. Merely deploying emergency notification systems on college campuses, however, does not guarantee that these systems will be effective; student compliance plays a very important role in establishing such effectiveness. Immediate compliance with alerts, as opposed to delayed compliance or noncompliance, is a key factor in improving student safety on campuses. This paper investigates the critical antecedents that motivate students to comply immediately with messages from campus emergency notification systems. Drawing on Etzioni's compliance theory, a model is developed. Using a scenario-based survey method, the model is tested in five types of events--snowstorm, active shooter, building fire, health-related, and robbery--and with more than 800 college students from the Northern region of the United States. The results from this study suggest that subjective norm and information quality trust are, in general, the most important factors that promote immediate compliance. This research contributes to the literature on compliance, emergency notification systems, and emergency response policies.",TRUE,noun phrase
R11,Science,R29072,Real-time facial feature detection using conditional regression forests,S96260,R29073,Methods,R29071,Conditional regression forest,"Although facial feature detection from 2D images is a well-studied field, there is a lack of real-time methods that estimate feature points even on low quality images. Here we propose conditional regression forest for this task. While regression forest learn the relations between facial image patches and the location of feature points from the entire set of faces, conditional regression forest learn the relations conditional to global face properties. In our experiments, we use the head pose as a global property and demonstrate that conditional regression forests outperform regression forests for facial feature detection. We have evaluated the method on the challenging Labeled Faces in the Wild [20] database where close-to-human accuracy is achieved while processing images in real-time.",TRUE,noun phrase
R11,Science,R151260,social media affordances for connective action: An examination of microblogging use during the Gulf of Mexico oil spill,S612333,R153074,paper: Theory / Concept / Model,L422225,Connective affordances,"This research questions how social media use affords new forms of organizing and collective engagement. The concept of connective action has been introduced to characterize such new forms of collective engagement in which actors coproduce and circulate content based upon an issue of mutual interest. Yet, how the use of social media actually affords connective action still needed to be investigated. Mixed methods analyses of microblogging use during the Gulf of Mexico oil spill bring insights onto this question and reveal in particular how multiple actors enacted emerging and interdependent roles with their distinct patterns of feature use. The findings allow us to elaborate upon the concept of connective affordances as collective level affordances actualized by actors in team interdependent roles. Connective affordances extend research on affordances as a relational concept by considering not only the relationships between technology and users but also the interdependence type among users and the effects of this interdependence onto what users can do with the technology. This study contributes to research on social media use by paying close attention to how distinct patterns of feature use enact emerging roles. Adding to IS scholarship on the collective use of technology, it considers how the patterns of feature use for emerging groups of actors are intricately and mutually related to each other.",TRUE,noun phrase
R11,Science,R151260,social media affordances for connective action: An examination of microblogging use during the Gulf of Mexico oil spill,S626556,R156079,paper: Theory / Construct / Model,L431252,Connective affordances,"This research questions how social media use affords new forms of organizing and collective engagement. The concept of connective action has been introduced to characterize such new forms of collective engagement in which actors coproduce and circulate content based upon an issue of mutual interest. Yet, how the use of social media actually affords connective action still needed to be investigated. Mixed methods analyses of microblogging use during the Gulf of Mexico oil spill bring insights onto this question and reveal in particular how multiple actors enacted emerging and interdependent roles with their distinct patterns of feature use. The findings allow us to elaborate upon the concept of connective affordances as collective level affordances actualized by actors in team interdependent roles. Connective affordances extend research on affordances as a relational concept by considering not only the relationships between technology and users but also the interdependence type among users and the effects of this interdependence onto what users can do with the technology. This study contributes to research on social media use by paying close attention to how distinct patterns of feature use enact emerging roles. Adding to IS scholarship on the collective use of technology, it considers how the patterns of feature use for emerging groups of actors are intricately and mutually related to each other.",TRUE,noun phrase
R11,Science,R26893,Minimum Vehicle Fleet Size Under Time-Window Constraints at a Container Terminal,S86349,R26894,Modality,R26892,Container terminal,"Products can be transported in containers from one port to another. At a container terminal these containers are transshipped from one mode of transportation to another. Cranes remove containers from a ship and put them at a certain time (i.e., release time) into a buffer area with limited capacity. A vehicle lifts a container from the buffer area before the buffer area is full (i.e., in due time) and transports the container from the buffer area to the storage area. At the storage area the container is placed in another buffer area. The advantage of using these buffer areas is the resultant decoupling of the unloading and transportation processes. We study the case in which each container has a time window [release time, due time] in which the transportation should start.The objective is to minimize the vehicle fleet size such that the transportation of each container starts within its time window. No literature has been found studying this relevant problem. We have developed an integer linear programming model to solve the problem of determining vehicle requirements under time-window constraints. We use simulation to validate the estimates of the vehicle fleet size by the analytical model. We test the ability of the model under various conditions. From these numerical experiments we conclude that the results of the analytical model are close to the results of the simulation model. Furthermore, we conclude that the analytical model performs well in the context of a container terminal.",TRUE,noun phrase
R11,Science,R25977,Page grammars and page parsing. A syntactic approach to document layout recognition,S80391,R26007,Logical Structure Representation,L50780,context-free string grammar,"Describes a syntactic approach to deducing the logical structure of printed documents from their physical layout. Page layout is described by a two-dimensional grammar, similar to a context-free string grammar, and a chart parser is used to parse segmented page images according to the grammar. This process is part of a system which reads scanned document images and produces computer-readable text in a logical mark-up format such as SGML. The system is briefly outlined, the grammar formalism and the parsing algorithm are described in detail, and some experimental results are reported.",TRUE,noun phrase
R11,Science,R26819,The Heterogeneous Vehicle-Routing Game,S85985,R26820,Method,R26818,Cooperative game theory,"In this paper, we study a cost-allocation problem that arises in a distribution-planning situation at the Logistics Department at Norsk Hydro Olje AB, Stockholm, Sweden. We consider the routes from one depot during one day. The total distribution cost for these routes is to be divided among the customers that are visited. This cost-allocation problem is formulated as a vehicle-routing game (VRG), allowing the use of vehicles with different capacities. Cost-allocation methods based on different concepts from cooperative game theory, such as the core and the nucleolus, are discussed. A procedure that can be used to investigate whether the core is empty or not is presented, as well as a procedure to compute the nucleolus. Computational results for the Norsk Hydro case are presented and discussed.",TRUE,noun phrase
R11,Science,R29243,Critical elements for a successful enterprise resource planning implementation in small-and medium-sized enterprises,S97151,R29244,Foci,R29241,Critical elements,"The body of research relating to the implementation of enterprise resource planning (ERP) systems in small- and medium-sized enterprises (SMEs) has been increasing rapidly over the last few years. It is important, particularly for SMEs, to recognize the elements for a successful ERP implementation in their environments. This research aims to examine the critical elements that constitute a successful ERP implementation in SMEs. The objective is to identify the constituents within the critical elements. A comprehensive literature review and interviews with eight SMEs in the UK were carried out. The results serve as the basic input into the formation of the critical elements and their constituents. Three main critical elements are formed: critical success factors, critical people and critical uncertainties. Within each critical element, the related constituents are identified. Using the process theory approach, the constituents within each critical element are linked to their specific phase(s) of ERP implementation. Ten constituents for critical success factors were found, nine constituents for critical people and 21 constituents for critical uncertainties. The research suggests that a successful ERP implementation often requires the identification and management of the critical elements and their constituents at each phase of implementation. The results are constructed as a reference framework that aims to provide researchers and practitioners with indicators and guidelines to improve the success rate of ERP implementation in SMEs.",TRUE,noun phrase
R11,Science,R33447,Linking Success Factors to Financial Performance,S115632,R33448,Critical success factors,R33123,customer focus,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun phrase
R11,Science,R27347,Influence of the shot peening temperature on the relaxation behaviour of residual stresses during cyclic bending,S88244,R27348,Special Notes,R27346,cyclic bending,"Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is not only caused by the higher compressive residual stresses induced but also due to an enlarged stability of these residual stresses during cyclic bending. This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures.",TRUE,noun phrase
R11,Science,R33189,Critical success factors of web-based supply-chain management systems: an exploratory study,S115176,R33190,Critical success factors,R33187,data security,"This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. The findings of the results provide insights for companies using or planning to use WSCMS.",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115562,R33407,Critical success factors,R33401,delivery capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R28926,Operational modeling and simulation of an inter-bay AMHS in semiconductor wafer fabrication,S95504,R28927,Objective function(s),R28922,Delivery time,"This paper studies the operational logic in an inter-bay automated material handling system (AMHS) in semiconductor wafer fabrication. This system consists of stockers located in a two-floor layout. Automated moving devices transfer lots between stockers within the same floor (intra-floor lot transfer) or between different floors (inter-floor lot transfer). Intra-floor lot-transferring transports use a two-rail one-directional system, whereas inter-floor lot-transferring transports use lifters. The decision problem consists of selecting rails and lifters that minimize average lot-delivery time. Several operation rules to deliver lots from source stocker to destination stocker are proposed and their performance is evaluated by discrete event simulation.",TRUE,noun phrase
R11,Science,R28891,Simulation based comparison of semiconductor AMHS alternatives: continuous flow vs. overhead monorail,S95394,R28892,Performance measures,L58424,Delivery time distribution,"Automation is an essential component in today's semiconductor manufacturing. As factories migrate to 300 mm technology, automated handling becomes increasingly important for variety of ergonomic, safety, and yield considerations. Traditional semiconductor AMHS systems, such as the Overhead Monorail Vehicles (OMV) or Overhead Hoist, can be overly expensive. Cost projections for a 300 mm inter/intrabay AMHS installation are in the range of $50 M-$100 M. As an alternative, a lower cost alternative AMHS, called Continuous Flow Transport has been proposed. The CFT system is similar to what has historically been identified as a conveyor based movement system. The CFT system provides cost savings at reduced flexibility and longer delivery time. This study compares the CFT to Overhead Monorail transport, determining a cumulative delivery time distribution. As expected, the CFT system requires a longer average delivery time interval than OMV, but may provide total savings through reduced transport variability.",TRUE,noun phrase
R11,Science,R30654,Dental erosion among children in an Istanbul public school,S102280,R30655,Aim of the study,L61405,Dental erosion,"The aim of this study was to evaluate the prevalence, clinical manifestations, and etiology of dental erosion among children. A total of 153 healthy, 11-year-old children were sampled from a downtown public school in Istanbul, Turkey comprised of middle-class children. Data were obtained via: (1) clinical examination; (2) questionnaire; and (3) standardized data records. A new dental erosion index for children designed by O'Sullivan (2000) was used. Twenty-eight percent (N=43) of the children exhibited dental erosion. Of children who consumed orange juice, 32% showed erosion, while 40% who consumed carbonated beverages showed erosion. Of children who consumed fruit yogurt, 36% showed erosion. Of children who swam professionally in swimming pools, 60% showed erosion. Multiple regression analysis revealed no relationship between dental erosion and related erosive sources (P > .05).",TRUE,noun phrase
R11,Science,R32597,Ship detection and classification in high-resolution remote sensing imagery using shape-driven segmentation method,S111006,R32598,Main purpose,R32577,Detection and classification,"High-resolution remote sensing imagery provides an important data source for ship detection and classification. However, due to shadow effect, noise and low-contrast between objects and background existing in this kind of data, traditional segmentation approaches have much difficulty in separating ship targets from complex sea-surface background. In this paper, we propose a novel coarse-to-fine segmentation strategy for identifying ships in 1-meter resolution imagery. This approach starts from a coarse segmentation by selecting local intensity variance as detection feature to segment ship objects from background. After roughly obtaining the regions containing ship candidates, a shape-driven level-set segmentation is used to extract precise boundary of each object which is good for the following stages such as detection and classification. Experimental results show that the proposed approach outperforms other algorithms in terms of recognition accuracy.",TRUE,noun phrase
R11,Science,R26910,An Effective Multirestart Deterministic Annealing Metaheuristic for the Fleet Size and Mix Vehicle-Routing Problem with Time Windows,S86419,R26911,Method,R26908,Deterministic annealing,"This paper presents a new deterministic annealing metaheuristic for the fleet size and mix vehicle-routing problem with time windows. The objective is to service, at minimal total cost, a set of customers within their time windows by a heterogeneous capacitated vehicle fleet. First, we motivate and define the problem. We then give a mathematical formulation of the most studied variant in the literature in the form of a mixed-integer linear program. We also suggest an industrially relevant, alternative definition that leads to a linear mixed-integer formulation. The suggested metaheuristic solution method solves both problem variants and comprises three phases. In Phase 1, high-quality initial solutions are generated by means of a savings-based heuristic that combines diversification strategies with learning mechanisms. In Phase 2, an attempt is made to reduce the number of routes in the initial solution with a new local search procedure. In Phase 3, the solution from Phase 2 is further improved by a set of four local search operators that are embedded in a deterministic annealing framework to guide the improvement process. Some new implementation strategies are also suggested for efficient time window feasibility checks. Extensive computational experiments on the 168 benchmark instances have shown that the suggested method outperforms the previously published results and found 167 best-known solutions. Experimental results are also given for the new problem variant.",TRUE,noun phrase
R11,Science,R29881,Environmental Kuznets curve: evidences from developed and developing economies,S99155,R29882,Shape of EKC,R29876,Developing countries,"Previous studies show that the environmental quality and economic growth can be represented by the inverted U curve called Environmental Kuznets Curve (EKC). In this study, we conduct empirical analyses on detecting the existence of EKC using the five common pollutants emissions (i.e. CO2, SO2, BOD, SPM10, and GHG) as proxy for environmental quality. The data spanning from year 1961 to 2009 and cover 40 countries. We seek to investigate if the EKC hypothesis holds in two groups of economies, i.e. developed versus developing economies. Applying panel data approach, our results show that the EKC does not hold in all countries. We also detect the existence of U shape and increasing trend in other cases. The results reveal that CO2 and SPM10 are good data to proxy for environmental pollutant and they can be explained well by GDP. Also, it is observed that the developed countries have higher turning points than the developing countries. Higher economic growth may lead to different impacts on environmental quality in different economies.",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115794,R33522,Critical success factors,R33520,direct involvement,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R151127,"Distributed Group Support Systems",S626076,R156012,Technology,L430839,Distributed group support systems,"Distributed group support systems are likely to be widely used in the future as a means for dispersed groups of people to work together through computer networks. They combine characteristics of computer-mediated communication systems with the specialized tools and processes developed in the context of group decision support systems, to provide communications, a group memory, and tools and structures to coordinate the group process and analyze data. These tools and structures can take a wide variety of forms in order to best support computer-mediated interaction for different types of tasks and groups. This article summarizes five case studies of different distributed group support systems developed by the authors and their colleagues over the last decade to support different types of tasks and to accommodate fairly large numbers of participants (tens to hundreds). The case studies are placed within conceptual frameworks that aid in classifying and comparing such systems. The results of the case studies demonstrate that design requirements and the associated research issues for group support systems can be very different in the distributed environment compared to the decision room approach.",TRUE,noun phrase
R11,Science,R28016,Domain Transformation-Based Efficient Cost Aggregation for Local Stereo Matching,S91379,R28017,Algorithm,R28015,Domain transformation,"Binocular stereo matching is one of the most important algorithms in the field of computer vision. Adaptive support-weight approaches, the current state-of-the-art local methods, produce results comparable to those generated by global methods. However, excessive time consumption is the main problem of these algorithms since the computational complexity is proportionally related to the support window size. In this paper, we present a novel cost aggregation method inspired by domain transformation, a recently proposed dimensionality reduction technique. This transformation enables the aggregation of 2-D cost data to be performed using a sequence of 1-D filters, which lowers computation and memory costs compared to conventional 2-D filters. Experiments show that the proposed method outperforms the state-of-the-art local methods in terms of computational performance, since its computational complexity is independent of the input parameters. Furthermore, according to the experimental results with the Middlebury dataset and real-world images, our algorithm is currently one of the most accurate and efficient local algorithms.",TRUE,noun phrase
R11,Science,R27011,A dynamic model and algorithm for fleet planning,S86838,R27012,Method,R26179,dynamic programming,"By analysing the merits and demerits of the existing linear model for fleet planning, this paper presents an algorithm which combines the linear programming technique with that of dynamic programming to improve the solution to linear model for fleet planning. This new approach has not only the merits that the linear model for fleet planning has, but also the merit of saving computing time. The numbers of ships newly added into the fleet every year are always integers in the final optimal solution. The last feature of the solution directly meets the requirements of practical application. Both the mathematical model of the dynamic fleet planning and its algorithm are put forward in this paper. A calculating example is also given.",TRUE,noun phrase
R11,Science,R70554,Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach,S335832,R70577,Objective,L242638,Early warning,"Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.",TRUE,noun phrase
R11,Science,R70556,Detecting pathogen exposure during the non-symptomatic incubation period using physiological data,S335843,R70578,Objective,L242648,Early warning,"Early pathogen exposure detection allows better patient care and faster implementation of public health measures (patient isolation, contact tracing). Existing exposure detection most frequently relies on overt clinical symptoms, namely fever, during the infectious prodromal period. We have developed a robust machine learning based method to better detect asymptomatic states during the incubation period using subtle, sub-clinical physiological markers. Starting with high-resolution physiological waveform data from non-human primate studies of viral (Ebola, Marburg, Lassa, and Nipah viruses) and bacterial (Y. pestis) exposure, we processed the data to reduce short-term variability and normalize diurnal variations, then provided these to a supervised random forest classification algorithm and post-classifier declaration logic step to reduce false alarms. In most subjects detection is achieved well before the onset of fever; subject cross-validation across exposure studies (varying viruses, exposure routes, animal species, and target dose) lead to 51h mean early detection (at 0.93 area under the receiver-operating characteristic curve [AUCROC]). Evaluating the algorithm against entirely independent datasets for Lassa, Nipah, and Y. pestis exposures un-used in algorithm training and development yields a mean 51h early warning time (at AUCROC=0.95). We discuss which physiological indicators are most informative for early detection and options for extending this capability to limited datasets such as those available from wearable, non-invasive, ECG-based sensors.",TRUE,noun phrase
R11,Science,R70554,Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach,S335650,R70555,Setting,L242478,Early warning,"Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.",TRUE,noun phrase
R11,Science,R70556,Detecting pathogen exposure during the non-symptomatic incubation period using physiological data,S335665,R70557,Setting,L242491,Early warning,"Early pathogen exposure detection allows better patient care and faster implementation of public health measures (patient isolation, contact tracing). Existing exposure detection most frequently relies on overt clinical symptoms, namely fever, during the infectious prodromal period. We have developed a robust machine learning based method to better detect asymptomatic states during the incubation period using subtle, sub-clinical physiological markers. Starting with high-resolution physiological waveform data from non-human primate studies of viral (Ebola, Marburg, Lassa, and Nipah viruses) and bacterial (Y. pestis) exposure, we processed the data to reduce short-term variability and normalize diurnal variations, then provided these to a supervised random forest classification algorithm and post-classifier declaration logic step to reduce false alarms. In most subjects detection is achieved well before the onset of fever; subject cross-validation across exposure studies (varying viruses, exposure routes, animal species, and target dose) lead to 51h mean early detection (at 0.93 area under the receiver-operating characteristic curve [AUCROC]). Evaluating the algorithm against entirely independent datasets for Lassa, Nipah, and Y. pestis exposures un-used in algorithm training and development yields a mean 51h early warning time (at AUCROC=0.95). We discuss which physiological indicators are most informative for early detection and options for extending this capability to limited datasets such as those available from wearable, non-invasive, ECG-based sensors.",TRUE,noun phrase
R11,Science,R25547,Model-driven development of user interfaces for educational games,S76990,R25548,Game Genres,L48140,Educational Games,"The main topic of this paper is the problem of developing user interfaces for educational games. Focus of educational games is usually on the knowledge while it should be evenly distributed to the user interface as well. Our proposed solution is based on the model-driven approach, thus we created a framework that incorporates meta-models, models, transformations and software tools. We demonstrated practical application of the mentioned framework by developing user interface for educational adventure game.",TRUE,noun phrase
R11,Science,R25993,Logical structure analysis of document images based on emergent computation,S80492,R26015,Key Idea,L50865,emergent computation,"A new method for logical structure analysis of document images is proposed in this paper as the basis for a document reader which can extract logical information from various printed documents. The proposed system consists of five basic modules: typography analysis, object recognition, object segmentation, object grouping and object modification. Emergent computation, which is a key concept of artificial life, is adopted for the cooperative interaction among the modules in the system in order to achieve an effective and flexible behavior of the whole system. It has two principal advantages over other methods: adaptive system configuration for various and complex logical structures, and robust document analysis that is tolerant of erroneous feature detection.",TRUE,noun phrase
R11,Science,R28403,Study on a Liner Shipping Network Design Considering Empty Container Reposition,S93113,R28404,emarkable factor,R28358,Empty container repositioning,"Empty container allocation problems arise due to imbalance on trades. Imbalanced trade is a common fact in the liner shipping,creating the necessity of repositioning empty containers from import-dominant ports to export-dominant ports in an economic and efficient way. The present work configures a liner shipping network, by performing the routes assignment and their integration to maximize the profit for a liner shipping company. The empty container repositioning problem is expressly taken into account in whole process. By considering the empty container repositioning problem in the network design, the choice of routes will be also influenced by the empty container flow, resulting in an optimum network, both for loaded and empty cargo. The Liner Shipping Network Design Program (LS-NET program) will define the best set of routes among a set of candidate routes, the best composition of the fleet for the network and configure the empty container repositioning network. Further, a network of Asian ports was studied and the results obtained show that considering the empty container allocation problem in the designing process can influence the final configuration of the network.",TRUE,noun phrase
R11,Science,R31524,Energy efficiency estimation based on data fusion strategy: Case study of ethylene product industry,S105657,R31525,Objective/estimate(s) process systems,R31522,Energy efficiencies of ethylene,"Data fusion is an emerging technology to fuse data from multiple data or information of the environment through measurement and detection to make a more accurate and reliable estimation or decision. In this Article, energy consumption data are collected from ethylene plants with the high temperature steam cracking process technology. An integrated framework of the energy efficiency estimation is proposed on the basis of data fusion strategy. A Hierarchical Variable Variance Fusion (HVVF) algorithm and a Fuzzy Analytic Hierarchy Process (FAHP) method are proposed to estimate energy efficiencies of ethylene equipments. For different equipment scales with the same process technology, the HVVF algorithm is used to estimate energy efficiency ranks among different equipments. For different technologies based on HVVF results, the FAHP method based on the approximate fuzzy eigenvector is used to get energy efficiency indices (EEI) of total ethylene industries. The comparisons are used to assess energy utilization...",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115790,R33522,Critical success factors,R33516,environmental readiness,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33388,"E-procurement, the golden key to optimizing the supply chains system",S115537,R33389,SCM field,R33215,e-Procurement in supply chain,"Procurement is an important component in the field of operating resource management and e-procurement is the golden key to optimizing the supply chains system. Global firms are optimistic on the level of savings that can be achieved through full implementation of e-procurement strategies. E-procurement is an Internet-based business process for obtaining materials and services and managing their inflow into the organization. In this paper, the subjects of supply chains and e-procurement and its benefits to organizations have been studied. Also, e-procurement in construction and its drivers and barriers have been discussed and a framework of supplier selection in an e-procurement environment has been demonstrated. This paper also has addressed critical success factors in adopting e-procurement in supply chains. Keywords—E-Procurement, Supply Chain, Benefits, Construction, Drivers, Barriers, Supplier Selection, CFSs.",TRUE,noun phrase
R11,Science,R30651,Prevalence of erosive tooth wear and associated risk factors in 2-7-year-old German kindergarten children,S102268,R30652,Aim of the study,L61398,Erosive tooth wear,"OBJECTIVES The aims of this study were to (1) investigate prevalence and severity of erosive tooth wear among kindergarten children and (2) determine the relationship between dental erosion and dietary intake, oral hygiene behaviour, systemic diseases and salivary concentration of calcium and phosphate. MATERIALS AND METHODS A sample of 463 children (2-7 years old) from 21 kindergartens were examined under standardized conditions by a calibrated examiner. Dental erosion of primary and permanent teeth was recorded using a scoring system based on O'Sullivan Index [Eur J Paediatr Dent 2 (2000) 69]. Data on the rate and frequency of dietary intake, systemic diseases and oral hygiene behaviour were obtained from a questionnaire completed by the parents. Unstimulated saliva samples of 355 children were analysed for calcium and phosphate concentration by colorimetric assessment. Descriptive statistics and multiple regression analysis were applied to the data. RESULTS Prevalence of erosion amounted to 32% and increased with increasing age of the children. Dentine erosion affecting at least one tooth could be observed in 13.2% of the children. The most affected teeth were the primary maxillary first and second incisors (15.5-25%) followed by the canines (10.5-12%) and molars (1-5%). Erosions on primary mandibular teeth were as follows: incisors: 1.5-3%, canines: 5.5-6% and molars: 3.5-5%. Erosions of the primary first and second molars were mostly seen on the occlusal surfaces (75.9%) involving enamel or enamel-dentine but not the pulp. In primary first and second incisors and canines, erosive lesions were often located incisally (51.2%) or affected multiple surfaces (28.9%). None of the permanent incisors (n = 93) or first molars (n=139) showed signs of erosion. Dietary factors, oral hygiene behaviour, systemic diseases and salivary calcium and phosphate concentration were not associated with the presence of erosion. CONCLUSIONS Erosive tooth wear of primary teeth was frequently seen in primary dentition. As several children showed progressive erosion into dentine or exhibited severe erosion affecting many teeth, preventive and therapeutic measures are recommended.",TRUE,noun phrase
R11,Science,R29273,Enterprise resource planning post-adoption value: a literature review amongst small and medium enterprises,S97253,R29274,Foci,R29270,ERP value,"It is consensual that Enterprise Resource Planning (ERP) after a successful implementation has significant effects on the productivity of firm as well small and medium-sized enterprises (SMEs) recognized as fundamentally different environments compared to large enterprises. There are few reviews in the literature about the post-adoption phase and even fewer at SME level. Furthermore, to the best of our knowledge there is none with focus in ERP value stage. This review will fill this gap. It provides an updated bibliography of ERP publications published in the IS journal and conferences during the period of 2000 and 2012. A total of 33 articles from 21 journals and 12 conferences are reviewed. The main focus of this paper is to shed the light on the areas that lack sufficient research within the ERP in SME domain, in particular in ERP business value stage, suggest future research avenues, as well as, present the current research findings that could support researchers and practitioners when embarking on ERP projects.",TRUE,noun phrase
R11,Science,R33205,An Exploratory Study of the Success Factors for Extranet Adoption in E-Supply Chain,S115207,R33206,SCM field,R33200,e-Supply chain,"Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.",TRUE,noun phrase
R11,Science,R30747,Decomposition algorithms for the design of a nonsimultaneous capacitated evacuation tree network,S103558,R30918,Decisions First-stage,R30830,Evacuation tree,"In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. The solution strategy is based on Benders decomposition, in which the master problem is a mixed‐integer program and each subproblem is a time‐expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115791,R33522,Critical success factors,R33517,external environment,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R151192,"The Role of Social Media during Queensland Floods: An Empirical Investigation on the Existence of Multiple Communities of Practice (MCoPs)",S626305,R156045,Technology,L431035,Facebook and Twitter,"The notion of communities getting together during a disaster to help each other is common. However, how does this communal activity happen within the online world? Here we examine this issue using the Communities of Practice (CoP) approach. We extend CoP to multiple CoP (MCoPs) and examine the role of social media applications in disaster management, extending work done by Ahmed (2011). Secondary data in the form of newspaper reports during 2010 to 2011 were analysed to understand how social media, particularly Facebook and Twitter, facilitated the process of communication among various communities during the Queensland floods in 2010. The results of media-content analysis along with the findings of relevant literature were used to extend our existing understanding on various communities of practice involved in disaster management, their communication tasks and the role of Twitter and Facebook as common conducive platforms of communication during disaster management alongside traditional communication channels.",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115566,R33407,Critical success factors,R33405,financial capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R27015,Strategic fleet size planning for maritime refrigerated containers,S86859,R27016,problem,R26882,Fleet sizing,"In the present economic climate, it is often the case that profits can only be improved, or for that matter maintained, by improving efficiency and cutting costs. This is particularly notorious in the shipping business, where it has been seen that the competition is getting tougher among carriers, thus alliances and partnerships are resulting for cost effective services in recent years. In this scenario, effective planning methods are important not only for strategic but also operating tasks, covering their entire transportation systems. Container fleet size planning is an important part of the strategy of any shipping line. This paper addresses the problem of fleet size planning for refrigerated containers, to achieve cost-effective services in a competitive maritime shipping market. An analytical model is first discussed to determine the optimal size of an own dry container fleet. Then, this is extended for an own refrigerated container fleet, which is the case when an extremely unbalanced trade represents one of the major investment decisions to be taken by liner operators. Next, a simulation model is developed for fleet sizing in a more practical situation and, by using this, various scenarios are analysed to determine the most convenient composition of refrigerated fleet between own and leased containers for the transpacific cargo trade.",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115563,R33407,Critical success factors,R33402,flexible capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R28908,An AGV routing policy reflecting the current and future state of semiconductor and LCD production lines,S95455,R28909,Objective function(s),R28906,Flow time,"This paper presents an efficient policy for AGV and part routing in semiconductor and LCD production bays using information on the future state of systems where AGVs play a central role in material handling. These highly informative systems maintain a great deal of information on current and near-future status, such as the arrival and operation completion times of parts, thereby enabling a new approach for production shop control. Efficient control of AGVs is vital in semiconductor and LCD plants because AGV systems often limit the total production capacity of these very expensive plants. With the proposed procedure, the cell controller records the future events chronologically and uses this information to determine the destination and source of parts between the parts' operation machine and temporary storage. It is shown by simulation that the new control policy reduces AGV requirements and flow time of parts.",TRUE,noun phrase
R11,Science,R25479,A Long-Term Model of the Glucose-Insulin Dynamics of Type 1 Diabetes,S76514,R25480,Objective,L47760,For two days,"A new glucose-insulin model is introduced which fits with the clinical data from in- and outpatients for two days. Its stability property is consistent with the glycemia behavior for type 1 diabetes. This is in contrast to traditional glucose-insulin models. Prior models fit with clinical data for a few hours only or display some nonnatural equilibria. The parameters of this new model are identifiable from standard clinical data as continuous glucose monitoring, insulin injection, and carbohydrate estimate. Moreover, it is shown that the parameters from the model allow the computation of the standard tools used in functional insulin therapy as the basal rate of insulin and the insulin sensitivity factor. This is a major outcome as they are required in therapeutic education of type 1 diabetic patients.",TRUE,noun phrase
R11,Science,R26326,Redesigning distribution operations: a case study on integrating inventory management and vehicle routes design,S82732,R26369,Products,R26368,Frozen products,"This paper describes a real-world application concerning the distribution in Portugal of frozen products of a world-wide food and beverage company. Its focus is the development of a model to support negotiations between a logistics operator and retailers, establishing a common basis for a co-operative scheme in supply chain management. A periodic review policy is adopted and an optimisation procedure based on the heuristic proposed by Viswanathan and Mathur (Mgmnt Sci., 1997, 43, 294–312) is used to devise guidelines for inventory replenishment frequencies and for the design of routes to be used in the distribution process. This provides an integrated approach of the two logistics functions—inventory management and routing—with the objective of minimising long-term average costs, considering an infinite time horizon. A framework to estimate inventory levels, namely safety stocks, is also presented. The model provides full information concerning the expected performance of the proposed solution, which can be compared against the present situation, allowing each party to assess its benefits and drawbacks.",TRUE,noun phrase
R11,Science,R31552,On-line soft sensor for polyethylene process with multiple production grades,S105716,R31553,Types,R31549,Fuzzy c-means (FCM),"Abstract Since online measurement of the melt index (MI) of polyethylene is difficult, a virtual sensor model is desirable. However, a polyethylene process usually produces products with multiple grades. The relation between process and quality variables is highly nonlinear. Besides, a virtual sensor model in real plant process with many inputs has to deal with collinearity and time-varying issues. A new recursive algorithm, which models a multivariable, time-varying and nonlinear system, is presented. Principal component analysis (PCA) is used to eliminate the collinearity. Fuzzy c-means (FCM) and fuzzy Takagi–Sugeno (FTS) modeling are used to decompose the nonlinear system into several linear subsystems. Effectiveness of the model is demonstrated using real plant data from a polyethylene process.",TRUE,noun phrase
R11,Science,R26235,A genetic algorithm approach to the integrated inventory-distribution problem,S82020,R26236,approach,R3072,Genetic algorithm,We introduce a new genetic algorithm (GA) approach for the integrated inventory distribution problem (IIDP). We present the developed genetic representation and use a randomized version of a previously developed construction heuristic to generate the initial random population. We design suitable crossover and mutation operators for the GA improvement phase. The comparison of results shows the significance of the designed GA over the construction heuristic and demonstrates the capability of reaching solutions within 20% of the optimum on sets of randomly generated test problems.,TRUE,noun phrase
R11,Science,R32612,Ship detection by salient convex boundaries,S111099,R32613,Satellite sensor,R32570,Google Earth,"Automatic ship detection from remote sensing imagery has many applications, such as maritime security, traffic surveillance, fisheries management. However, it is still a difficult task for noise and distractors. This paper is concerned with perceptual organization, which detect salient convex structures of ships from noisy images. Because the line segments of contour of ships compose a convex set, a local gradient analysis is adopted to filter out the edges which are not on the contour as preprocess. For convexity is the significant feature, we apply the salience as the prior probability to detect. Feature angle constraint helps us compute probability estimate and choose correct contour in many candidate closed line groups. Finally, the experimental results are demonstrated on the satellite imagery from Google earth.",TRUE,noun phrase
R11,Science,R32756,Fusing local texture description of saliency map and enhanced global statistics for ship scene detection,S112073,R32757,Satellite sensor,R32570,Google Earth,"In this paper, we introduce a new feature representation based on fusing local texture description of saliency map and enhanced global statistics for ship scene detection in very high-resolution remote sensing images in inland, coastal, and oceanic regions. First, two low computational complexity methods are adopted. Specifically, the Itti attention model is used to extract saliency map, from which local texture histograms are extracted by LBP with uniform pattern. Meanwhile, Gabor filters with multi-scale and multi-orientation are convolved with the input image to extract Gist, means and variances which are used to form the enhanced global statistics. Second, sliding window-based detection is applied to obtain local image patches and extract the fusion of local and global features. SVM with RBF kernel is then used for training and classification. Such detection manner could remove coastal and oceanic regions effectively. Moreover, the ship scene region of interest can be detected accurately. Experiments on 20 very high-resolution remote sensing images collected by Google Earth shows that the fusion feature has advantages than LBP, Saliency map-based LBP and Gist, respectively. Furthermore, desirable results can be obtained in the ship scene detection.",TRUE,noun phrase
R11,Science,R32838,A ship target automatic detection method for high-resolution remote sensing,S112569,R32839,Satellite sensor,R32570,Google Earth,"With the increasement of spatial resolution of remote sensing, the ship detection methods for low-resolution images are no longer suitable. In this study, a ship target automatic detection method for high-resolution remote sensing is proposed, which mainly contains steps of Otsu binary segmentation, morphological operation, calculation of target features and target judgment. The results show that almost all of the offshore ships can be detected, and the total detection rates are 94% and 91% with the experimental Google Earth data and GF-1 data respectively. The ship target automatic detection method proposed in this study is more suitable for detecting ship targets offshore rather than anchored along the dock.",TRUE,noun phrase
R11,Science,R27505,An investigation of cointegration and causality between energy consumption and economic growth,S89156,R27506,Methodology,R27484,Granger causality,"This paper reexamines the causality between energy consumption and economic growth with both bivariate and multivariate models by applying the recently developed methods of cointegration and Hsiao`s version of the Granger causality to transformed U.S. data for the period 1947-1990. The Phillips-Perron (PP) tests reveal that the original series are not stationary and, therefore, a first differencing is performed to secure stationarity. The study finds no causal linkages between energy consumption and economic growth. Energy and gross national product (GNP) each live a life of its own. The results of this article are consistent with some of the past studies that find no relationship between energy and GNP but are contrary to some other studies that find GNP unidirectionally causes energy consumption. Both the bivariate and trivariate models produce the similar results. We also find that there is no causal relationship between energy consumption and industrial production. The United States is basically a service-oriented economy and changes in energy consumption can cause little or no changes in GNP. In other words, an implementation of energy conservation policy may not impair economic growth. 27 refs., 5 tabs.",TRUE,noun phrase
R11,Science,R28965,Prioritizing Clinical Information System Project Risk Factors: A Delphi Study,S95626,R28966,Application area studied,R28964,Health care,"Identifying the risks associated with the implementation of clinical information systems (CIS) in health care organizations can be a major challenge for managers, clinicians, and IT specialists, as there are numerous ways in which they can be described and categorized. Risks vary in nature, severity, and consequence, so it is important that those considered to be high-level risks be identified, understood, and managed. This study addresses this issue by first reviewing the extant literature on IT/CIS project risks, and second conducting a Delphi survey among 21 experts highly involved in CIS projects in Canada. In addition to providing a comprehensive list of risk factors and their relative importance, this study is helpful in unifying the literature on IT implementation and health informatics. Our risk factor-oriented research actually confirmed many of the factors found to be important in both these streams.",TRUE,noun phrase
R11,Science,R31456,Applications of Artificial Neural Network for the Prediction of Flow Boiling Curves,S105465,R31457,Objective/estimate(s) process systems,R31455,Heat flux,"An artificial neural network (ANN) was applied successfully to predict flow boiling curves. The databases used in the analysis are from the 1960's, including 1,305 data points which cover these parameter ranges: pressure P=100–1,000 kPa, mass flow rate G=40–500 kg/m2-s, inlet subcooling ΔTsub =0–35°C, wall superheat ΔTw = 10–300°C and heat flux Q=20–8,000kW/m2. The proposed methodology allows us to achieve accurate results, thus it is suitable for the processing of the boiling curve data. The effects of the main parameters on flow boiling curves were analyzed using the ANN. The heat flux increases with increasing inlet subcooling for all heat transfer modes. Mass flow rate has no significant effects on nucleate boiling curves. The transition boiling and film boiling heat fluxes will increase with an increase in the mass flow rate. Pressure plays a predominant role and improves heat transfer in all boiling regions except the film boiling region. There are slight differences between the steady and the transient boiling curves in all boiling regions except the nucleate region. The transient boiling curve lies below the corresponding steady boiling curve.",TRUE,noun phrase
R11,Science,R31771,MSAP markers and global cytosine methylation in plants: a literature survey and comparative analysis for a wild-growing species,S107136,R31772,Sp,L64201,Helleborus foetidus,"Methylation of DNA cytosines affects whether transposons are silenced and genes are expressed, and is a major epigenetic mechanism whereby plants respond to environmental change. Analyses of methylation‐sensitive amplification polymorphism (MS‐AFLP or MSAP) have been often used to assess methyl‐cytosine changes in response to stress treatments and, more recently, in ecological studies of wild plant populations. MSAP technique does not require a sequenced reference genome and provides many anonymous loci randomly distributed over the genome for which the methylation status can be ascertained. Scoring of MSAP data, however, is not straightforward, and efforts are still required to standardize this step to make use of the potential to distinguish between methylation at different nucleotide contexts. Furthermore, it is not known how accurately MSAP infers genome‐wide cytosine methylation levels in plants. Here, we analyse the relationship between MSAP results and the percentage of global cytosine methylation in genomic DNA obtained by HPLC analysis. A screening of literature revealed that methylation of cytosines at cleavage sites assayed by MSAP was greater than genome‐wide estimates obtained by HPLC, and percentages of methylation at different nucleotide contexts varied within and across species. Concurrent HPLC and MSAP analyses of DNA from 200 individuals of the perennial herb Helleborus foetidus confirmed that methyl‐cytosine was more frequent in CCGG contexts than in the genome as a whole. In this species, global methylation was unrelated to methylation at the inner CG site. We suggest that global HPLC and context‐specific MSAP methylation estimates provide complementary information whose combination can improve our current understanding of methylation‐based epigenetic processes in nonmodel plants.",TRUE,noun phrase
R11,Science,R26808,A heuristic column generation method for the heterogeneous fleet VRP,S86132,R26852,Method,R26323,Heuristic column generation,"This paper presents a heuristic column generation method for solving vehicle routing problems with a heterogeneous fleet of vehicles. The method may also solve the fleet size and composition vehicle routing problem and new best known solutions are reported for a set of classical problems. Numerical results show that the method is robust and efficient, particularly for medium and large size problem instances.",TRUE,noun phrase
R11,Science,R27960,Efficient Disparity Estimation Using Hierarchical Bilateral Disparity Structure Based Graph Cut Algorithm With a Foreground Boundary Refinement Mechanism,S91159,R27961,Algorithm,R27959,Hierarchical bilateral disparity structure (HBDS),"The disparity estimation problem is commonly solved using graph cut (GC) methods, in which the disparity assignment problem is transformed to one of minimizing global energy function. Although such an approach yields an accurate disparity map, the computational cost is relatively high. Accordingly, this paper proposes a hierarchical bilateral disparity structure (HBDS) algorithm in which the efficiency of the GC method is improved without any loss in the disparity estimation performance by dividing all the disparity levels within the stereo image hierarchically into a series of bilateral disparity structures of increasing fineness. To address the well-known foreground fattening effect, a disparity refinement process is proposed comprising a fattening foreground region detection procedure followed by a disparity recovery process. The efficiency and accuracy of the HBDS-based GC algorithm are compared with those of the conventional GC method using benchmark stereo images selected from the Middlebury dataset. In addition, the general applicability of the proposed approach is demonstrated using several real-world stereo images.",TRUE,noun phrase
R11,Science,R33534,Application of critical success factors in supply chain management,S115821,R33535,Critical success factors,R33532,human resource,"This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.",TRUE,noun phrase
R11,Science,R151214,"Organizational Resilience and Using Information and Communication Technologies to Rebuild Communication Structures",S626372,R156056,Emergency Type,L431091,hurricane Katrina,"This study employs the perspective of organizational resilience to examine how information and communication technologies (ICTs) were used by organizations to aid in their recovery after Hurricane Katrina. In-depth interviews enabled longitudinal analysis of ICT use. Results showed that organizations enacted a variety of resilient behaviors through adaptive ICT use, including information sharing, (re)connection, and resource acquisition. Findings emphasize the transition of ICT use across different stages of recovery, including an anticipated stage. Key findings advance organizational resilience theory with an additional source of resilience, external availability. Implications and contributions to the literature of ICTs in disaster contexts and organizational resilience are discussed.",TRUE,noun phrase
R11,Science,R151226,"Measuring Mobile ICT Literacy: Short-Message Performance Assessment in Emergency Response Settings",S612193,R153057,paper: Theory / Concept / Model,L422102,ICT literacy,"Research problem: A construct mediated in digital environments, information communication technology (ICT) literacy is operationally defined as the ability of individuals to participate effectively in transactions that invoke illocutionary action. This study investigates ICT literacy through a simulation designed to capture that construct, to deploy the construct model to measure participant improvement of ICT literacy under experimental conditions, and to estimate the potential for expanded model development. Research questions: How might a multidisciplinary literature review inform a model for ICT literacy? How might a simulation be designed that enables sufficient construct representation for modeling? How might prepost testing simulation be designed to investigate the potential for improved command of ICT literacy? How might a regression model account for variance within the model by the addition of affective elements to a cognitive model? Literature review: Existing conceptualizations of the ICT communication environment demonstrate the need for a new communication model that is sensitive to short text messaging demands in crisis communication settings. As a result of this prefect storm of limits requiring the communicator to rely on critical thinking, awareness of context, and information integration, we designed a cognitive-affective model informed by genre theory to capture the ICT construct: A sociocognitive ability that, at its most effective, facilitates illocutionary action-to confirm and warn, to advise and ask, and to thank and request-for specific audiences of emergency responders. Methodology: A prepost design with practitioner subjects (N=50) allowed investigation of performance improvement on tasks demanding illocutionary action after training on tasks of high, moderate, and low demand. Through a model based on the independent variables character count, wordcount, and decreased time on task (X) as related to the dependent variable of an overall episode score (Y), we were able to examine the internal construct strength with and without the addition of affective independent variables. Results and discussion: Of the three prepost models used to study the impact of training, participants demonstrated statistically significant improvement on episodes of high demand on all cognitive model variables. The addition of affective variables, such as attitudes toward text messaging, allowed increased model strength on tasks of high and moderate complexity. These findings suggest that an empirical basis for the construct of ICT literacy is possible and that, under simulation conditions, practitioner improvement may be demonstrated. Practically, it appears that it is possible to train emergency responders to improve their command of ICT literacy so that those most in need of humanitarian response during a crisis may receive it. Future research focusing on communication in digital environments will undoubtedly extend these finding in terms of construct validation and deployment in crisis settings.",TRUE,noun phrase
R11,Science,R151226,"Measuring Mobile ICT Literacy: Short-Message Performance Assessment in Emergency Response Settings",S626416,R156062,paper: Theory / Construct / Model,L431129,ICT literacy,"Research problem: A construct mediated in digital environments, information communication technology (ICT) literacy is operationally defined as the ability of individuals to participate effectively in transactions that invoke illocutionary action. This study investigates ICT literacy through a simulation designed to capture that construct, to deploy the construct model to measure participant improvement of ICT literacy under experimental conditions, and to estimate the potential for expanded model development. Research questions: How might a multidisciplinary literature review inform a model for ICT literacy? How might a simulation be designed that enables sufficient construct representation for modeling? How might prepost testing simulation be designed to investigate the potential for improved command of ICT literacy? How might a regression model account for variance within the model by the addition of affective elements to a cognitive model? Literature review: Existing conceptualizations of the ICT communication environment demonstrate the need for a new communication model that is sensitive to short text messaging demands in crisis communication settings. As a result of this prefect storm of limits requiring the communicator to rely on critical thinking, awareness of context, and information integration, we designed a cognitive-affective model informed by genre theory to capture the ICT construct: A sociocognitive ability that, at its most effective, facilitates illocutionary action-to confirm and warn, to advise and ask, and to thank and request-for specific audiences of emergency responders. Methodology: A prepost design with practitioner subjects (N=50) allowed investigation of performance improvement on tasks demanding illocutionary action after training on tasks of high, moderate, and low demand. Through a model based on the independent variables character count, wordcount, and decreased time on task (X) as related to the dependent variable of an overall episode score (Y), we were able to examine the internal construct strength with and without the addition of affective independent variables. Results and discussion: Of the three prepost models used to study the impact of training, participants demonstrated statistically significant improvement on episodes of high demand on all cognitive model variables. The addition of affective variables, such as attitudes toward text messaging, allowed increased model strength on tasks of high and moderate complexity. These findings suggest that an empirical basis for the construct of ICT literacy is possible and that, under simulation conditions, practitioner improvement may be demonstrated. Practically, it appears that it is possible to train emergency responders to improve their command of ICT literacy so that those most in need of humanitarian response during a crisis may receive it. Future research focusing on communication in digital environments will undoubtedly extend these finding in terms of construct validation and deployment in crisis settings.",TRUE,noun phrase
R11,Science,R26196,Improving the distribution of industrial gases with an on-line computerized routing and scheduling optimizer,S82675,R26354,Products,R26353,Industrial gases,"For Air Products and Chemicals, Inc., inventory management of industrial gases at customer locations is integrated with vehicle scheduling and dispatching. Their advanced decision support system includes on-line data entry functions, customer usage forecasting, a time/distance network with a shortest path algorithm to compute intercustomer travel times and distances, a mathematical optimization module to produce daily delivery schedules, and an interactive schedule change interface. The optimization module uses a sophisticated Lagrangian relaxation algorithm to solve mixed integer programs with up to 800,000 variables and 200,000 constraints to near optimality. The system, first implemented in October, 1981, has been saving between 6% to 10% of operating costs.",TRUE,noun phrase
R11,Science,R33447,Linking Success Factors to Financial Performance,S115633,R33448,Critical success factors,R33441,industry focus,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun phrase
R11,Science,R33205,An Exploratory Study of the Success Factors for Extranet Adoption in E-Supply Chain,S115204,R33206,Critical success factors,R33202,information quality,"Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.",TRUE,noun phrase
R11,Science,R33348,Critical success factors for B2B e‐commerce use within the UK NHS pharmaceutical supply chain,S115462,R33349,Critical success factors,R33202,information quality,"Purpose – The purpose of this paper is to determine those factors perceived by users to influence the successful on‐going use of e‐commerce systems in business‐to‐business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach – Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e‐commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings – The paper yields five composite factors that are perceived by users to influence successful e‐commerce use. “System quality,” “information quality,” “management and use,” “world wide web – assurance and empathy,” and “trust” are proposed as potentia...",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115789,R33522,Critical success factors,R33515,information sharing,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33534,Application of critical success factors in supply chain management,S115822,R33535,Critical success factors,R33194,information technology,"This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115788,R33522,Critical success factors,R33514,innovation capability,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33102,The integrated logistics management system: a framework and case study,S115030,R33103,SCM field,R33092,Integrated logistics management system,"Presents a framework for distribution companies to establish and improve their logistics systems continuously. Recently, much attention has been given to automation in services, the use of new information technology and the integration of the supply chain. Discusses these areas, which have great potential to increase logistics productivity and provide customers with high level service. The exploration of each area is enriched with Taiwanese logistics management practices and experiences. Includes a case study of one prominent food processor and retailer in Taiwan in order to demonstrate the pragmatic operations of the integrated logistics management system. Also, a survey of 45 Taiwanese retailers was conducted to investigate the extent of logistics management in Taiwan. Concludes by suggesting how distribution companies can overcome noticeable logistics management barriers, build store automation systems, and follow the key steps to logistics success.",TRUE,noun phrase
R11,Science,R33447,Linking Success Factors to Financial Performance,S115637,R33448,Critical success factors,R33445,investment in information,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun phrase
R11,Science,R33447,Linking Success Factors to Financial Performance,S115636,R33448,Critical success factors,R33444,investment in quality,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun phrase
R11,Science,R34057,Migration and rearing histories of chinook salmon (Oncorhynchus tshawytscha) determined by ion microprobe Sr isotope and Sr/Ca transects of otoliths,S118142,R34058,Analytical method,R34056,Ion microprobe,"Strontium isotope and Sr/Ca ratios measured in situ by ion microprobe along radial transects of otoliths of juvenile chinook salmon (Oncorhynchus tshawytscha) vary between watersheds with contrasting geology. Otoliths from ocean-type chinook from Skagit River estuary, Washington, had prehatch regions with 87Sr/86Sr ratios of ~0.709, suggesting a maternally inherited marine signature, extensive fresh water growth zones with 87Sr/86Sr ratios similar to those of the Skagit River at ~0.705, and marine-like 87Sr/86Sr ratios near their edges. Otoliths from stream-type chinook from central Idaho had prehatch 87Sr/86Sr ratios ≥0.711, indicating that a maternal marine Sr isotopic signature is not preserved after the ~1000- to 1400-km migration from the Pacific Ocean. 87Sr/86Sr ratios in the outer portions of otoliths from these Idaho juveniles were similar to those of their respective streams (~0.708–0.722). For Skagit juveniles, fresh water growth was marked by small decreases in otolith Sr/Ca, with increases in ...",TRUE,noun phrase
R11,Science,R29907,"A panel estimation of the relationship between trade liberalization, economic growth and CO2 emissions in BRICS countries",S99230,R29908,Methodology,R29905,Kao Panel,"In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigation relationship between per capita CO2 emission, growth economics and trade liberalization based on econometric techniques of unit root test, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC , IPS , ADF and PP unit root tests which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emission, growth economics and trade liberalization by applying Kao panel cointegration test. The evidence indi cates that in the long-run trade liberalization has a positive significant impact on CO2 emissions and impact of trade liberalization on emissions growth depends on the level of income Our findings suggest that there is a quadratic relationship between relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of studied countries. Our estimation shows that the inflection point or optimal point real GDP per capita is about 5269.4 dollars. The results show that on average, sample countries are on the positive side of the inverted U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels",TRUE,noun phrase
R11,Science,R34276,Macroeconomic Shock Synchronization in the East African Community,S119219,R34277,Justification/ recommendation,L72016,Lack of macroeconomic convergence," The East African Community’s (EAC) economic integration has gained momentum recently, with the EAC countries aiming to adopt a single currency in 2015. This article evaluates empirically the readiness of the EAC countries for monetary union. First, structural similarity in terms of similarity of production and exports of the EAC countries is measured. Second, the symmetry of shocks is examined with structural vector auto-regression analysis (SVAR). The lack of macroeconomic convergence gives evidence against a hurried transition to a monetary union. Given the divergent macroeconomic outcomes, structural reforms, including closing infrastructure gaps and harmonizing macroeconomic policies that would raise synchronization of business cycles, need to be in place before moving to monetary union. ",TRUE,noun phrase
R11,Science,R26828,An Integrated Model and Solution Approach for Fleet Sizing with Heterogeneous Assets,S86024,R26829,Method,R26827,Lagrangian relaxation,"This paper addresses a fleet-sizing problem in the context of the truck-rental industry. Specifically, trucks that vary in capacity and age are utilized over space and time to meet customer demand. Operational decisions (including demand allocation and empty truck repositioning) and tactical decisions (including asset procurements and sales) are explicitly examined in a linear programming model to determine the optimal fleet size and mix. The method uses a time-space network, common to fleet-management problems, but also includes capital cost decisions, wherein assets of different ages carry different costs, as is common to replacement analysis problems. A two-phase solution approach is developed to solve large-scale instances of the problem. Phase I allocates customer demand among assets through Benders decomposition with a demand-shifting algorithm assuring feasibility in each subproblem. Phase II uses the initial bounds and dual variables from Phase I and further improves the solution convergence without increasing computer memory requirements through the use of Lagrangian relaxation. Computational studies are presented to show the effectiveness of the approach for solving large problems within reasonable solution gaps.",TRUE,noun phrase
R11,Science,R26883,Lagrangian Relaxation Methods for Solving the Minimum Fleet Size Multiple Traveling Salesman Problem with Time Windows,S86295,R26884,Method,R26827,Lagrangian relaxation,"We consider the problem of finding the minimum number of vehicles required to visit once a set of nodes subject to time window constraints, for a homogeneous fleet of vehicles located at a common depot. This problem can be formulated as a network flow problem with additional time constraints. The paper presents an optimal solution approach using the augmented Lagrangian method. Two Lagrangian relaxations are studied. In the first one, the time constraints are relaxed producing network subproblems which are easy to solve, but the bound obtained is weak. In the second relaxation, constraints requiring that each node be visited are relaxed producing shortest path subproblems with time window constraints and integrality conditions. The bound produced is always excellent. Numerical results for several actual school busing problems with up to 223 nodes are discussed. Comparisons with a set partitioning formulation solved by column generation are given.",TRUE,noun phrase
R11,Science,R32413,Chemical composition and biological activities of a new essential oil chemotype of Tunisian Artemisia herba alba Asso,S110055,R32414,Country / Plant part CS,L65993,Leaves and flowers,"The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin Iconverting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air dried leaves and flowers of Aha were extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis -chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the α-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the β-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil was evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western of Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.",TRUE,noun phrase
R11,Science,R28558,Undifferentiated Sarcoma of the Liver in a 21-year-old Woman: Case Report,S93847,R28559,Surgery,L57606,Left lobe,"A successful surgical case of malignant undifferentiated (embryonal) sarcoma of the liver (USL), a rare tumor normally found in children, is reported. The patient was a 21-year-old woman, complaining of epigastric pain and abdominal fullness. Chemical analyses of the blood and urine and complete blood counts revealed no significant changes, and serum alpha-fetoprotein levels were within normal limits. A physical examination demonstrated a firm, slightly tender lesion at the liver's edge palpable 10 cm below the xiphoid process. CT scan and ultrasonography showed an oval mass, confined to the left lobe of the liver, which proved to be hypovascular on angiography. At laparotomy, a large, 18 x 15 x 13 cm tumor, found in the left hepatic lobe was resected. The lesion was dark red in color, encapsulated, smooth surfaced and of an elastic firm consistency. No metastasis was apparent. Histological examination resulted in a diagnosis of undifferentiated sarcoma of the liver. Three courses of adjuvant chemotherapy, including adriamycin, cis-diaminodichloroplatinum, vincristine and dacarbazine were administered following the surgery with no serious adverse effects. The patient remains well with no evidence of recurrence 12 months after her operation.",TRUE,noun phrase
R11,Science,R26828,An Integrated Model and Solution Approach for Fleet Sizing with Heterogeneous Assets,S86022,R26829,Method,R26825,Linear programming,"This paper addresses a fleet-sizing problem in the context of the truck-rental industry. Specifically, trucks that vary in capacity and age are utilized over space and time to meet customer demand. Operational decisions (including demand allocation and empty truck repositioning) and tactical decisions (including asset procurements and sales) are explicitly examined in a linear programming model to determine the optimal fleet size and mix. The method uses a time-space network, common to fleet-management problems, but also includes capital cost decisions, wherein assets of different ages carry different costs, as is common to replacement analysis problems. A two-phase solution approach is developed to solve large-scale instances of the problem. Phase I allocates customer demand among assets through Benders decomposition with a demand-shifting algorithm assuring feasibility in each subproblem. Phase II uses the initial bounds and dual variables from Phase I and further improves the solution convergence without increasing computer memory requirements through the use of Lagrangian relaxation. Computational studies are presented to show the effectiveness of the approach for solving large problems within reasonable solution gaps.",TRUE,noun phrase
R11,Science,R26929,A Decision Support System for Fleet Management: A Linear Programming Approach,S86513,R26930,Method,R26825,Linear programming,"This paper describes a successful implementation of a decision support system that is used by the fleet management division at North American Van Lines to plan fleet configuration. At the heart of the system is a large linear programming (LP) model that helps management decide what type of tractors to sell to owner/operators or to trade in each week. The system is used to answer a wide variety of “What if” questions, many of which have significant financial impact.",TRUE,noun phrase
R11,Science,R26992,Optimal liner fleet routeing strategies,S86748,R26993,Method,R26825,Linear programming,"The objective of this paper is to suggest practical optimization models for routing strategies for liner fleets. Many useful routing and scheduling problems have been studied in the transportation literature. As for ship scheduling or routing problems, relatively less effort has been devoted, in spite of the fact that sea transportation involves large capital and operating costs. This paper suggests two optimization models that can be useful to liner shipping companies. One is a linear programming model of profit maximization, which provides an optimal routing mix for each ship available and optimal service frequencies for each candidate route. The other model is a mixed integer programming model with binary variables which not only provides optimal routing mixes and service frequencies but also best capital investment alternatives to expand fleet capacity. This model is a cost minimization model.",TRUE,noun phrase
R11,Science,R27011,A dynamic model and algorithm for fleet planning,S86837,R27012,Method,R26825,Linear programming,"By analysing the merits and demerits of the existing linear model for fleet planning, this paper presents an algorithm which combines the linear programming technique with that of dynamic programming to improve the solution to linear model for fleet planning. This new approach has not only the merits that the linear model for fleet planning has, but also the merit of saving computing time. The numbers of ships newly added into the fleet every year are always integers in the final optimal solution. The last feature of the solution directly meets the requirements of practical application. Both the mathematical model of the dynamic fleet planning and its algorithm are put forward in this paper. A calculating example is also given.",TRUE,noun phrase
R11,Science,R25571,A Model-Based Approach for Designing Location-Based Games,S77131,R25572,Game Genres,L48245,Location-based Games,"Location-Based Games (LBGs) are a subclass of pervasive games that make use of location technologies to consider the players' geographic position in the game rules and mechanics. This research presents a model to describe and represent LBGs. The proposed model decouples location, mechanics, and game content from their implementation. We aim at allowing LBGs to be edited quickly and deployed on many platforms. The core model component is LEGaL, a language derived from NCL (Nested Context Language) to model and represent the game structure and its multimedia contents (e.g., video, audio, 3D objects, etc.). It allows the modelling of mission-based games by supporting spatial and temporal relationships between game elements and multimedia documents. We validated our approach by implementing a LEGaL interpreter, which was coupled to an LBG authoring tool and a Game Server. These tools enabled us to reimplement a real LBG using the proposed model to attest to its utility. We also edited the original game by using an external tool to showcase how simple it is to transpose an LBG using the concepts introduced in this work. Results indicate both the model and LEGaL can be used to foster the design of LBGs.",TRUE,noun phrase
R11,Science,R33118,The elements of a successful logistics partnership,S115056,R33119,SCM field,R33111,Logistics partnership,"Describes the elements of a successful logistics partnership. Looks at what can cause failure and questions whether the benefits of a logistics partnership are worth the effort required. Concludes that strategic alliances are increasingly becoming a matter of survival, not merely a matter of competitive advantage. Refers to the example of the long‐term relationship between Kimberly‐Clark Corporation and Interamerican group’s Tricor Warehousing, Inc.",TRUE,noun phrase
R11,Science,R30726,Relationship between sports drinks and dental erosion in 304 university athletes in Columbus,S102577,R30727,Referred index,R30725,Lussi Index,"Acidic soft drinks, including sports drinks, have been implicated in dental erosion with limited supporting data in scarce erosion studies worldwide. The purpose of this study was to determine the prevalence of dental erosion in a sample of athletes at a large Midwestern state university in the USA, and to evaluate whether regular consumption of sports drinks was associated with dental erosion. A cross-sectional, observational study was done using a convenience sample of 304 athletes, selected irrespective of sports drinks usage. The Lussi Index was used in a blinded clinical examination to grade the frequency and severity of erosion of all tooth surfaces excluding third molars and incisal surfaces of anterior teeth. A self-administered questionnaire was used to gather details on sports drink usage, lifestyle, health problems, dietary and oral health habits. Intraoral color slides were taken of all teeth with erosion. Sports drinks usage was found in 91.8% of athletes and the total prevalence of erosion was 36.5%. Nonparametric tests and stepwise regression analysis using history variables showed no association between dental erosion and the use of sports drinks, quantity and frequency of consumption, years of usage and nonsport usage of sports drinks. The most significant predictor of erosion was found to be not belonging to the African race (p < 0.0001). The results of this study reveal no relationship between consumption of sports drinks and dental erosion.",TRUE,noun phrase
R11,Science,R33348,Critical success factors for B2B e‐commerce use within the UK NHS pharmaceutical supply chain,S115463,R33349,Critical success factors,R33346,management and use,"Purpose – The purpose of this paper is to determine those factors perceived by users to influence the successful on‐going use of e‐commerce systems in business‐to‐business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach – Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e‐commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings – The paper yields five composite factors that are perceived by users to influence successful e‐commerce use. “System quality,” “information quality,” “management and use,” “world wide web – assurance and empathy,” and “trust” are proposed as potentia...",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115564,R33407,Critical success factors,R33403,management capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonistic to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R26279,A Markov Decision Model and Decomposition Heuristic for Dynamic Vehicle Dispatching,S82248,R26280,approach,R26277,Markov decision process,"We describe a dynamic and stochastic vehicle dispatching problem called the delivery dispatching problem. This problem is modeled as a Markov decision process. Because exact solution of this model is impractical, we adopt a heuristic approach for handling the problem. The heuristic is based in part on a decomposition of the problem by customer, where customer subproblems generate penalty functions that are applied in a master dispatching problem. We describe how to compute bounds on the algorithm's performance, and apply it to several examples with good results.",TRUE,noun phrase
R11,Science,R26313,The Stochastic Inventory Routing Problem with Direct Deliveries,S82439,R26314,approach,R26277,Markov decision process,"Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries.",TRUE,noun phrase
R11,Science,R26317,Price-Directed Replenishment of Subsets: Methodology and Its Application to Inventory Routing,S82455,R26318,approach,R26277,Markov decision process,"The idea of price-directed control is to use an operating policy that exploits optimal dual prices from a mathematical programming relaxation of the underlying control problem. We apply it to the problem of replenishing inventory to subsets of products/locations, such as in the distribution of industrial gases, so as to minimize long-run time average replenishment costs. Given a marginal value for each product/location, whenever there is a stockout the dispatcher compares the total value of each feasible replenishment with its cost, and chooses one that maximizes the surplus. We derive this operating policy using a linear functional approximation to the optimal value function of a semi-Markov decision process on continuous spaces. This approximation also leads to a math program whose optimal dual prices yield values and whose optimal objective value gives a lower bound on system performance. We use duality theory to show that optimal prices satisfy several structural properties and can be interpreted as estimates of lowest achievable marginal costs. On real-world instances, the price-directed policy achieves superior, near optimal performance as compared with other approaches.",TRUE,noun phrase
R11,Science,R26319,A Price-Directed Approach to Stochastic Inventory/Routing,S82472,R26320,approach,R26277,Markov decision process,"We consider a new approach to stochastic inventory/routing that approximates the future costs of current actions using optimal dual prices of a linear program. We obtain two such linear programs by formulating the control problem as a Markov decision process and then replacing the optimal value function with the sum of single-customer inventory value functions. The resulting approximation yields statewise lower bounds on optimal infinite-horizon discounted costs. We present a linear program that takes into account inventory dynamics and economics in allocating transportation costs for stochastic inventory routing. On test instances we find that these allocations do not introduce any error in the value function approximations relative to the best approximations that can be achieved without them. Also, unlike other approaches, we do not restrict the set of allowable vehicle itineraries in any way. Instead, we develop an efficient algorithm to both generate and eliminate itineraries during solution of the linear programs and control policy. In simulation experiments, the price-directed policy outperforms other policies from the literature.",TRUE,noun phrase
R11,Science,R26321,Dynamic Programming Approximations for a Stochastic Inventory Routing Problem,S82491,R26322,approach,R26277,Markov decision process,"This work is motivated by the need to solve the inventory routing problem when implementing a business practice called vendor managed inventory replenishment (VMI). With VMI, vendors monitor their customers' inventories and decide when and how much inventory should be replenished at each customer. The inventory routing problem attempts to coordinate inventory replenishment and transportation in such a way that the cost is minimized over the long run. We formulate a Markov decision process model of the stochastic inventory routing problem and propose approximation methods to find good solutions with reasonable computational effort. We indicate how the proposed approach can be used for other Markov decision processes involving the control of multiple resources.",TRUE,noun phrase
R11,Science,R33461,Supply chain management: success factors from the Malaysian manufacturer's perspective,S115670,R33462,Critical success factors,R33458,material flow management,"The purpose of this paper is to shed light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationship with customer and supplier, information communication and technology (ICT), material flow management, corporate culture and performance measurement. A questionnaire was the main instrument for the study and it was distributed to 84 staff from departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. The findings show that relationships exist between relationship with customer and supplier, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not for corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future studies determine additional success factors that are pertinent to firms’ current SCM strategies and directions, competitive advantages and missions. Logic suggests that further studies include more geographical data coverage, other nature of businesses and research instruments. Key words: Supply chain management, critical success factor.",TRUE,noun phrase
R11,Science,R151288,"Factors impacting the intention to use emergency notification services in campus emergencies: an empirical investigation",S612418,R153088,paper: Theory / Concept / Model,L422296,media richness,"Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.",TRUE,noun phrase
R11,Science,R151288,"Factors impacting the intention to use emergency notification services in campus emergencies: an empirical investigation",S616828,R153901,paper: Theory / Construct / Model,L425391,media richness,"Research problem: This study investigates the factors influencing students' intentions to use emergency notification services to receive news about campus emergencies through short-message systems (SMS) and social network sites (SNS). Research questions: (1) What are the critical factors that influence students' intention to use SMS to receive emergency notifications? (2) What are the critical factors that influence students' intention to use SNS to receive emergency notifications? Literature review: By adapting Media Richness theory and prior research on emergency notifications, we propose that perceived media richness, perceived trust in information, perceived risk, perceived benefit, and perceived social influence impact the intention to use SMS and SNS to receive emergency notifications. Methodology: We conducted a quantitative, survey-based study that tested our model in five different scenarios, using logistic regression to test the research hypotheses with 574 students of a large research university in the northeastern US. Results and discussion: Results suggest that students' intention to use SNS is impacted by media richness, perceived benefit, and social influence, while students' intention to use SMS is influenced by trust and perceived benefit. Implications to emergency managers suggest how to more effectively manage and market the service through both channels. The results also suggest using SNS as an additional means of providing emergency notifications at academic institutions.",TRUE,noun phrase
R11,Science,R31552,On-line soft sensor for polyethylene process with multiple production grades,S105720,R31553,Objective/estimate(s) process systems,R31550,Melt index,"Abstract Since online measurement of the melt index (MI) of polyethylene is difficult, a virtual sensor model is desirable. However, a polyethylene process usually produces products with multiple grades. The relation between process and quality variables is highly nonlinear. Besides, a virtual sensor model in real plant process with many inputs has to deal with collinearity and time-varying issues. A new recursive algorithm, which models a multivariable, time-varying and nonlinear system, is presented. Principal component analysis (PCA) is used to eliminate the collinearity. Fuzzy c-means (FCM) and fuzzy Takagi–Sugeno (FTS) modeling are used to decompose the nonlinear system into several linear subsystems. Effectiveness of the model is demonstrated using real plant data from a polyethylene process.",TRUE,noun phrase
R11,Science,R31599,Melt index prediction based on fuzzy neural networks and PSO algorithm with online correction strategy,S105863,R31600,Objective/estimate(s) process systems,R31550,Melt index,"A black-box modeling scheme to predict melt index (MI) in the industrial propylene polymerization process is presented. MI is one of the most important quality variables determining product specification, and is influenced by a large number of process variables. Considering it is costly and time consuming to measure MI in the laboratory, a much cheaper and faster statistical modeling method is presented here to predict MI online, which involves technologies of fuzzy neural network, particle swarm optimization (PSO) algorithm, and online correction strategy (OCS). The learning efficiency and prediction precision of the proposed model are checked based on real plant history data, and the comparison between different learning algorithms is carried out in detail to reveal the advantage of the proposed best-neighbor PSO (BNPSO) algorithm with OCS. © 2011 American Institute of Chemical Engineers AIChE J, 2012",TRUE,noun phrase
R11,Science,R30735,Patterns of tooth wear associated with methamphetamine use,S102622,R30736,Study population,L61616,Methamphetamine users,"BACKGROUND Methamphetamine (MAP) abuse is a significant worldwide problem. This prospective study was conducted to determine if MAP users had distinct patterns of tooth wear. METHODS Methamphetamine users were identified and interviewed about their duration and preferred route of MAP use. Study participants were interviewed in the emergency department of a large urban university hospital serving a geographic area with a high rate of illicit MAP production and consumption. Tooth wear was documented for each study participant and scored using a previously validated index and demographic information was obtained using a questionnaire. RESULTS Forty-three MAP patients were interviewed. Preferred route of administration was injection (37%) followed by snorting (33%). Patients who preferentially snorted MAP had significantly higher tooth wear in the anterior maxillary teeth than patients who injected, smoked, or ingested MAP (P = 0.005). CONCLUSION Patients who use MAP have distinct patterns of wear based on route of administration. This difference may be explained anatomically.",TRUE,noun phrase
R11,Science,R32854,Coarse-to-fine ship detection using visual saliency fusion and feature encoding for optical satellite images,S112675,R32855,Satellite sensor,R32853,Microsoft Virtual Earth,"In order to overcome cloud clutters and varied sizes of objects in high-resolution optical satellite images, a novel coarse-to-fine ship detection framework is proposed. Initially, a modified saliency fusion algorithm is derived to reduce cloud clutters and extract ship candidates. Then, in coarse discrimination stage, candidates are described by introducing shape feature to eliminate regions which are not conform to ship characteristics. In fine discrimination stage, candidates are represented by local descriptor-based feature encoding, and then linear SVM is used for discrimination. Experiments on 60 images (including 467 objects) collected from Microsoft Virtual Earth demonstrate the effectiveness of the proposed framework. Specifically, the fusion of visual saliency achieves 17.07% higher Precision and 7.23% higher Recall compared with those of individual one. Moreover, using local descriptor in fine discrimination makes Precision and F-measure further be improved by 7.23% and 1.74%, respectively.",TRUE,noun phrase
R11,Science,R26173,An Integrated Inventory Allocation and Vehicle Routing Problem,S81703,R26174,approach,R26171,Mixed integer program,"We address the problem of distributing a limited amount of inventory among customers using a fleet of vehicles so as to maximize profit. Both the inventory allocation and the vehicle routing problems are important logistical decisions. In many practical situations, these two decisions are closely interrelated, and therefore, require a systematic approach to take into account both activities jointly. We formulate the integrated problem as a mixed integer program and develop a Lagrangian-based procedure to generate both good upper bounds and heuristic solutions. Computational results show that the procedure is able to generate solutions with small gaps between the upper and lower bounds for a wide range of cost structures.",TRUE,noun phrase
R11,Science,R26839,Valid inequalities for the fleet size and mix vehicle routing problem with fixed costs,S86075,R26840,Method,R26793,Mixed integer programming,"In the well‐known vehicle routing problem (VRP), a set of identical vehicles located at a central depot is to be optimally routed to supply customers with known demands subject to vehicle capacity constraints. An important variant of the VRP arises when a mixed fleet of vehicles, characterized by different capacities and costs, is available for distribution activities. The problem is known as fleet size and mix VRP with fixed costs FSMF and has several practical applications. In this article, we present a new mixed integer programming formulation for FSMF based on a two‐commodity network flow approach. New valid inequalities are proposed to strengthen the linear programming relaxation of the mathematical formulation. The effectiveness of the proposed cuts is extensively tested on benchmark instances. © 2009 Wiley Periodicals, Inc. NETWORKS, 2009",TRUE,noun phrase
R11,Science,R34014,"Dynamics of white perch Morone americana population contingents in the Patuxent River estuary, Maryland, USA",S117985,R34016,Species Order,R34013,Morone americana,"Alternative migratory pathways in the life histories of fishes can be difficult to assess but may have great importance to the dynamics of spatially structured populations. We used Sr/Ca in otoliths as a tracer of time spent in freshwater and brackish habitats to study the ontogenetic movements of white perch Morone americana in the Patuxent River estuary. We observed that, soon after the larvae metamorphose, juveniles either move to brackish habitats (brackish contingent) or take up residency in tidal fresh water (freshwater contingent) for the first year of life. In one intensively studied cohort of juveniles, the mean age at which individuals moved into brackish environments was 45 d (post-hatch), corresponding to the metamorphosis of larvae to juveniles and settlement in littoral habitats. Back-calculated growth rates of the freshwater contingent at this same age (median = 0.6 mm d^-1) were significantly higher than the brackish contingent (median = 0.5 mm d^-1). Strong year-class variability (>100-fold) was evident from juvenile surveys and from the age composition of adults sampled during spawning. Adult samples were dominated by the brackish contingent (93% of n = 363), which exhibited a significantly higher growth rate (von Bertalanffy, k = 0.67 yr^-1) than the freshwater contingent (k = 0.39 yr^-1). Combined with evidence that the relative frequency of the brackish contingent has increased in year-classes with high juvenile recruitment, these results implicate brackish environments as being important for maintaining abundance and productivity of the population. By comparison, disproportionately greater recruitment to the adult population by the freshwater contingent during years of low juvenile abundance suggested that freshwater habitats sustain a small but crucial reproductive segment of the population. Thus, both contingents appeared to have unique and complementary roles in the population dynamics of white perch.",TRUE,noun phrase
R11,Science,R34039,Stable isotope (δ13C and δ18O) and Sr/Ca composition of otoliths as proxies for environmental salinity experienced by an estuarine fish,S118064,R34040,Species Order,R34013,Morone americana,"The ability to identify past patterns of salinity habitat use in coastal fishes is viewed as a critical development in evaluating nursery habitats and their role in population dynamics. The utility of otolith tracers (δ13C, δ18O, and Sr/Ca) as proxies for environmental salinity was tested for the estuarine-dependent juvenile white perch Morone americana. Analysis of water samples revealed a positive relationship between the salinity gradient and δ18O, δ13C, and Sr/Ca values of water in the Patuxent River estuary. Similarly, analysis of otolith material from young-of-the-year white perch (2001, 2004, 2005) revealed a positive relationship between salinity and otolith δ13C, δ18O, and Sr/Ca values. In classifying fish to their known salinity habitat, δ18O and Sr/Ca were moderately accurate tracers (53 to 79% and 75% correct classification, respectively), and δ13C provided near complete discrimination between habitats (93 to 100% correct classification). Further, δ13C exhibited the lowest inter-annual variability and the largest range of response across salinity habitats. Thus, across estuaries, it is expected that resolution and reliability of salinity histories of juvenile white perch will be improved through the application of stable isotopes as tracers of salinity history.",TRUE,noun phrase
R11,Science,R34072,Migratory environmental history of the grey mullet Mugil cephalus as revealed by otolith Sr:Ca ratios,S118195,R34073,Species Order,R34070,Mugil cephalus,"We used an electron probe microanalyzer (EPMA) to determine the migratory environmental history of the catadromous grey mullet Mugil cephalus from the Sr:Ca ratios in otoliths of 10 newly recruited juveniles collected from estuaries and 30 adults collected from estuaries, nearshore (coastal waters and bay) and offshore, in the adjacent waters off Taiwan. Mean (±SD) Sr:Ca ratios at the edges of adult otoliths increased significantly from 6.5 ± 0.9 × 10^-3 in estuaries and nearshore waters to 8.9 ± 1.4 × 10^-3 in offshore waters (p < 0.01), corresponding to increasing ambient salinity from estuaries and nearshore to offshore waters. The mean Sr:Ca ratios decreased significantly from the core (11.2 ± 1.2 × 10^-3) to the otolith edge (6.2 ± 1.4 × 10^-3) in juvenile otoliths (p < 0.001). The mullet generally spawned offshore and recruited to the estuary at the juvenile stage; therefore, these data support the use of Sr:Ca ratios in otoliths to reconstruct the past salinity history of the mullet. A life-history scan of the otolith Sr:Ca ratios indicated that the migratory environmental history of the mullet beyond the juvenile stage consists of 2 types. In Type 1 mullet, Sr:Ca ratios range between 4.0 × 10^-3 and 13.9 × 10^-3, indicating that they migrated between estuary and offshore waters but rarely entered the freshwater habitat. In Type 2 mullet, the Sr:Ca ratios decreased to a minimum value of 0.4 × 10^-3, indicating that the mullet migrated to a freshwater habitat. Most mullet beyond the juvenile stage migrated from estuary to offshore waters, but a few mullet less than 2 yr old may have migrated into a freshwater habitat. Most mullet collected nearshore and offshore were of Type 1, while those collected from the estuaries were a mixture of Types 1 and 2. The mullet spawning stock consisted mainly of Type 1 fish. The growth rates of the mullet were similar for Types 1 and 2. The migratory patterns of the mullet were more divergent than indicated by previous reports of their catadromous behavior.",TRUE,noun phrase
R11,Science,R29741,Economic Development and Environmental Quality in Nigeria: Is There an Environmental Kuznets Curve?,S98692,R29742,EKC Turnaround point(s),R29737,Nested model,"This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. Tests for structural breaks caused by the 1973 oil price shocks and 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis shows that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two-strands of the N-shape",TRUE,noun phrase
R11,Science,R30634,For your eyes only,S102178,R30635,Methods,L61346,Novel correlation filter ,"In this paper, we take a look at an enhanced approach for eye detection under difficult acquisition circumstances such as low-light, distance, pose variation, and blur. We present a novel correlation filter based eye detection pipeline that is specifically designed to reduce face alignment errors, thereby increasing eye localization accuracy and ultimately face recognition accuracy. The accuracy of our eye detector is validated using data derived from the Labeled Faces in the Wild (LFW) and the Face Detection on Hard Datasets Competition 2011 (FDHD) sets. The results on the LFW dataset also show that the proposed algorithm exhibits enhanced performance, compared to another correlation filter based detector, and that a considerable increase in face recognition accuracy may be achieved by focusing more effort on the eye localization stage of the face recognition process. Our results on the FDHD dataset show that our eye detector exhibits superior performance, compared to 11 different state-of-the-art algorithms, on the entire set of difficult data without any per set modifications to our detection or preprocessing algorithms. The immediate application of eye detection is automatic face recognition, though many good applications exist in other areas, including medical research, training simulators, communication systems for the disabled, and automotive engineering.",TRUE,noun phrase
R11,Science,R151260,Social media affordances for connective action: An examination of microblogging use during the Gulf of Mexico oil spill,S626552,R156079,Emergency Type,L431248,oil spill,"This research questions how social media use affords new forms of organizing and collective engagement. The concept of connective action has been introduced to characterize such new forms of collective engagement in which actors coproduce and circulate content based upon an issue of mutual interest. Yet, how the use of social media actually affords connective action still needed to be investigated. Mixed methods analyses of microblogging use during the Gulf of Mexico oil spill bring insights onto this question and reveal in particular how multiple actors enacted emerging and interdependent roles with their distinct patterns of feature use. The findings allow us to elaborate upon the concept of connective affordances as collective level affordances actualized by actors in team interdependent roles. Connective affordances extend research on affordances as a relational concept by considering not only the relationships between technology and users but also the interdependence type among users and the effects of this interdependence onto what users can do with the technology. This study contributes to research on social media use by paying close attention to how distinct patterns of feature use enact emerging roles. Adding to IS scholarship on the collective use of technology, it considers how the patterns of feature use for emerging groups of actors are intricately and mutually related to each other.",TRUE,noun phrase
R11,Science,R34060,Population structure of sympatric anadromous and nonanadromous Oncorhynchus mykiss: evidence from spawning surveys and otolith microchemistry,S118150,R34061,Species Order,R34059,Oncorhynchus mykiss,"Reproductive isolation between steelhead and resident rainbow trout (Oncorhynchus mykiss) was examined in the Deschutes River, Oregon, through surveys of spawning timing and location. Otolith microchemistry was used to de- termine the occurrence of steelhead and resident rainbow trout progeny in the adult populations of steelhead and resi - dent rainbow trout in the Deschutes River and in the Babine River, British Columbia. In the 3 years studied, steelhead spawning occurred from mid March through May and resident rainbow trout spawning occurred from mid March through August. The timing of 50% spawning was 9-10 weeks earlier for steelhead than for resident rainbow trout. Spawning sites selected by steelhead were in deeper water and had larger substrate than those selected by resident rain - bow trout. Maternal origin was identified by comparing Sr/Ca ratios in the primordia and freshwater growth regions of the otolith with a wavelength-dispersive electron microprobe. In the Deschutes River, only steelhead of steelhead mater - nal origin and resident rainbow trout of resident rainbow trout origin were observed. In the Babine River, steelhead of resident rainbow trout origin and resident rainbow trout of steelhead maternal origin were also observed. Based on these findings, we suggest that steelhead and resident rainbow trout in the Deschutes River may constitute reproduc- tively isolated populations.",TRUE,noun phrase
R11,Science,R34057,Migration and rearing histories of chinook salmon (Oncorhynchus tshawytscha) determined by ion microprobe Sr isotope and Sr/Ca transects of otoliths,S118137,R34058,Species Order,R34055,Oncorhynchus tshawytscha,"Strontium isotope and Sr/Ca ratios measured in situ by ion microprobe along radial transects of otoliths of juvenile chinook salmon (Oncorhynchus tshawytscha) vary between watersheds with contrasting geology. Otoliths from ocean-type chinook from Skagit River estuary, Washington, had prehatch regions with 87Sr/86Sr ratios of ~0.709, suggesting a maternally inherited marine signature, extensive fresh water growth zones with 87Sr/86Sr ratios similar to those of the Skagit River at ~0.705, and marine-like 87Sr/86Sr ratios near their edges. Otoliths from stream-type chinook from central Idaho had prehatch 87Sr/86Sr ratios ≥0.711, indicating that a maternal marine Sr isotopic signature is not preserved after the ~1000- to 1400-km migration from the Pacific Ocean. 87Sr/86Sr ratios in the outer portions of otoliths from these Idaho juveniles were similar to those of their respective streams (~0.708–0.722). For Skagit juveniles, fresh water growth was marked by small decreases in otolith Sr/Ca, with increases in ...",TRUE,noun phrase
R11,Science,R34264,A Fast-Track East African Community Monetary Union? Convergence Evidence from a Cointegration Analysis.,S119147,R34265,Justification/ recommendation,L71968,Only partial convergence,"There is a proposal for a fast-tracked approach to the East African Community (EAC) monetary union. This paper uses cointegration techniques to determine whether the member countries would form a successful monetary union based on the long-run behavior of nominal and real exchange rates and monetary base. The three variables are each analyzed for co-movements among the five countries. The empirical results indicate only partial convergence for the variables considered, suggesting there could be substantial costs for the member countries from a fast-tracked process. This implies the EAC countries need significant adjustments to align their monetary policies and to allow a period of monetary policy coordination to foster convergence that will improve the chances of a sustainable currency union.",TRUE,noun phrase
R11,Science,R34097,Estimating contemporary early life-history dispersal in an estuarine fish: integrating molecular and otolith elemental approaches,S118253,R34098,Species Order,R34095,Osmerus mordax,"Dispersal during the early life history of the anadromous rainbow smelt, Osmerus mordax, was examined using assignment testing and mixture analysis of multilocus genotypes and otolith elemental composition. Six spawning areas and associated estuarine nurseries were sampled throughout southeastern Newfoundland. Samples of adults and juveniles isolated by > 25 km displayed moderate genetic differentiation (FST ~ 0.05), whereas nearby (< 25 km) spawning and nursery samples displayed low differentiation (FST < 0.01). Self‐assignment and mixture analysis of adult spawning samples supported the hypothesis of independence of isolated spawning locations (> 80% self‐assignment) with nearby runs self‐assigning at rates between 50% and 70%. Assignment and mixture analysis of juveniles using adult baselines indicated high local recruitment at several locations (70–90%). Nearby (< 25 km) estuaries at the head of St Mary's Bay showed mixtures of individuals (i.e. 20–40% assignment to adjacent spawning location). Laser ablation inductively coupled mass spectrometry transects across otoliths of spawning adults of unknown dispersal history were used to estimate dispersal among estuaries across the first year of life. Single‐element trends and multivariate discriminant function analysis (Sr:Ca and Ba:Ca) classified the majority of samples as estuarine suggesting limited movement between estuaries (< 0.5%). The mixtures of juveniles evident in the genetic data at nearby sites and a lack of evidence of straying in the otolith data support a hypothesis of selective mortality of immigrants. If indeed selective mortality of immigrants reduces the survivorship of dispersers, estimates of dispersal in marine environments that neglect survival may significantly overestimate gene flow.",TRUE,noun phrase
R11,Science,R26966,Fleet Size and Mix Optimization for Paratransit Services,S86643,R26967,Industry,R26965,Paratransit service,"Most paratransit agencies use a mix of different types of vehicles ranging from small sedans to large converted vans as a cost-effective way to meet the diverse travel needs and seating requirements of their clients. Currently, decisions on what types of vehicles and how many vehicles to use are mostly made by service managers on an ad hoc basis without much systematic analysis and optimization. The objective of this research is to address the underlying fleet size and mix problem and to develop a practical procedure that can be used to determine the optimal fleet mix for a given application. A real-life example illustrates the relationship between the performance of a paratransit service system and the size of its service vehicles. A heuristic procedure identifies the optimal fleet mix that maximizes the operating efficiency of a service system. A set of recommendations is offered for future research; the most important is the need to incorporate a life-cycle cost framework into the paratransit service planning process.",TRUE,noun phrase
R11,Science,R31198,A patchwork model for evolutionary algorithms with structure and variable size populations,S104609,R31199,Name,L62553,Patchwork model,"The paper investigates a new PATCHWORK model for structured population in evolutionary search, where population size may vary. This model allows control of both population diversity and selective pressure, and its operators are local in scope. Moreover, the PATCHWORK model gives a significant flexibility for introducing many additional concepts, like behavioral rules for individuals. First experiments allowed us to observe some interesting patterns which emerged during evolutionary process.",TRUE,noun phrase
R11,Science,R28307,Schedule Design and Container Routing in Liner Shipping,S92680,R28308,Remarkable factor,R28208,Penalty cost,"A liner shipping company seeks to provide liner services with shorter transit time compared with the benchmark of market-level transit time because of the ever-increasing competition. When the itineraries of its liner service routes are determined, the liner shipping company designs the schedules of the liner routes such that the wait time at transshipment ports is minimized. As a result of transshipment, multiple paths are available for delivering containers from the origin port to the destination port. Therefore, the medium-term (3 to 6 months) schedule design problem and the operational-level container-routing problem must be investigated simultaneously. The schedule design and container-routing problems were formulated by minimization of the sum of the total transshipment cost and penalty cost associated with longer transit time than the market-level transit time, minus the bonus for shorter transit time. The formulation is nonlinear, noncontinuous, and nonconvex. A genetic local search approach was developed to find good solutions to the problem. The proposed solution method was applied to optimize the Asia–Europe–Oceania liner shipping services of a global liner company.",TRUE,noun phrase
R11,Science,R31787,DNA methylation and embryogenic compe- tence in leaves and callus of napiergrass (Pennisetum purpureum Schum,S107222,R31788,Sp,L64261,Pennisetum purpureum,Quantitative and qualitative levels of DNA methylation were evaluated in leaves and callus of Pennisetum purpureum Schum. The level of methylation did not change during leaf differentiation or aging and similar levels of methylation were found in embryogenic and nonembryogenic callus.,TRUE,noun phrase
R11,Science,R33280,Identifying the factors influencing the performance of reverse supply chains (RSC),S115336,R33281,Critical success factors,R33278,perceived usefulness,"This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model . We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web‐based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.",TRUE,noun phrase
R11,Science,R30717,The prevalence of non-carious cervical lesions in permanent dentition,S102524,R30718,Study population,L61553,Permanent dentition,"A non-carious cervical lesion (NCCL) is the loss of hard dental tissue on the neck of the tooth, most frequently located on the vestibular plane. Causal agents are diverse and mutually interrelated. In the present study all vestibular NCCL were observed and recorded by the tooth wear index (TWI). The aim of the study was to determine the prevalence and severity of NCCL. For this purpose, 18555 teeth from the permanent dentition were examined in a population from the city of Rijeka, Croatia. Subjects were divided into six age groups. The teeth with most NCCL were the lower premolars, which also had the largest percentage of higher index levels, indicating the greater severity of the lesions. The most frequent index level was 1, and the prevalence and severity of the lesions increased with age.",TRUE,noun phrase
R11,Science,R33476,An empirical study on the impact of critical success factors on the balanced scorecard performance in Korean green supply chain management enterprises,S115700,R33477,Critical success factors,R33472,planning and implementation,"Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.",TRUE,noun phrase
R11,Science,R31413,Online prediction of polymer product quality in an industrial reactor using recurrent neural networks,S105339,R31414,Objective/estimate(s) process systems,R31343,Polymer product quality,"In this paper, internally recurrent neural networks (IRNN) are used to predict a key polymer product quality variable from an industrial polymerization reactor. IRNN are selected as the modeling tools for two reasons: 1) over the wide range of operating regions required to make multiple polymer grades, the process is highly nonlinear; and 2) the finishing of the polymer product after it leaves the reactor imparts significant dynamics to the process by ""mixing"" effects. IRNN are shown to be very effective tools for predicting key polymer quality variables from secondary measurements taken around the reactor.",TRUE,noun phrase
R11,Science,R30662,The prevalence of dental erosion in preschool children in China,S102306,R30663,Study population,R30660,Preschool children,"Objective. To describe the prevalence of dental erosion and associated factors in preschool children in Guangxi and Hubei provinces of China. Methods. Dental examinations were carried out on 1949 children aged 3–5 years. Measurement of erosion was confined to primary maxillary incisors. The erosion index used was based upon the 1993 UK National Survey of Children’s Dental Health. The children’s general information as well as social background and dietary habits were collected based on a structured questionnaire. Results. A total of 112 children (5.7%) showed erosion on their maxillary incisors. Ninety-five (4.9%) was scored as being confined to enamel and 17 (0.9%) as erosion extending into dentine or pulp. There was a positive association between erosion and social class in terms of parental education. A significantly higher prevalence of erosion was observed in children whose parents had post-secondary education than those whose parents had secondary or lower level of education. There was also a correlation between the presence of dental erosion and intake of fruit drink from a feeding bottle or consumption of fruit drinks at bedtime. Conclusion. Erosion is not a serious problem for dental health in Chinese preschool children. The prevalence of erosion is associated with social and dietary factors in this sample of children.",TRUE,noun phrase
R11,Science,R30689,"Erosion, caries and rampant caries in preschool children in Jeddah, Saudi Arabia",S102402,R30690,Study population,R30660,Preschool children,"OBJECTIVES The objective of this study was to determine the prevalence of dental erosion in preschool children in Jeddah, Saudi Arabia, and to relate this to caries and rampant caries in the same children. METHODS A sample of 987 children (2-5 years) was drawn from 17 kindergartens. Clinical examinations were carried out under standardised conditions by a trained and calibrated examiner (M.Al-M.). Measurement of erosion was confined to primary maxillary incisors and used a scoring system and criteria based on those used in the UK National Survey of Child Dental Health. Caries was diagnosed using BASCD criteria. Rampant caries was defined as caries affecting the smooth surfaces of two or more maxillary incisors. RESULTS Of the 987 children, 309 (31%) had evidence of erosion. For 186 children this was confined to enamel but for 123 it involved dentine and/or pulp. Caries were diagnosed in 720 (73%) of the children and rampant caries in 336 (34%). The mean dmft for the 987 children was 4.80 (+/-4.87). Of the 384 children who had caries but not rampant caries, 141 (37%) had erosion, a significantly higher proportion than the 72 (27%) out of 267 who were clinically caries free (SND=2.61, P<0.01). Of the 336 with rampant caries, 96 (29%) also had evidence of erosion. CONCLUSIONS The level of erosion was similar to that seen in children of an equivalent age in the UK. Caries was a risk factor for erosion in this group of children.",TRUE,noun phrase
R11,Science,R31460,Using artificial neural network to predict the pressure drop in a rotating packed bed,S105476,R31461,Objective/estimate(s) process systems,R31458,Pressure drop,"Although rotating beds are good equipments for intensified separations and multiphase reactions, but the fundamentals of its hydrodynamics are still unknown. In the wide range of operating conditions, the pressure drop across an irrigated bed is significantly lower than dry bed. In this regard, an approach based on artificial intelligence, that is, artificial neural network (ANN) has been proposed for prediction of the pressure drop across the rotating packed beds (RPB). The experimental data sets used as input data (280 data points) were divided into training and testing subsets. The training data set has been used to develop the ANN model while the testing data set was used to validate the performance of the trained ANN model. The results of the predicted pressure drop values with the experimental values show a good agreement between the prediction and experimental results regarding to some statistical parameters, for example (AARD% = 4.70, MSE = 2.0 × 10−5 and R2 = 0.9994). The designed ANN model can estimate the pressure drop in the countercurrent flow rotating packed bed with unexpected phenomena for higher pressure drop in dry bed than in wet bed. Also, the designed ANN model has been able to predict the pressure drop in a wet bed with the good accuracy with experimental.",TRUE,noun phrase
R11,Science,R30658,Dental erosion in 12-year-old schoolchildren: A cross- sectional study in Southern Brazil,S102295,R30659,Aim of the study,L61414,Prevalence of dental erosion,"OBJECTIVE The aim of this study was to assess the prevalence and severity of dental erosion among 12-year-old schoolchildren in Joaçaba, southern Brazil, and to compare prevalence between boys and girls, and between public and private school students. METHODS A cross-sectional study was carried out involving all of the municipality's 499, 12-year-old schoolchildren. The dental erosion index proposed by O'Sullivan was used for the four maxillary incisors. Data analysis included descriptive statistics, location, distribution, and extension of affected area and severity of dental erosion. RESULTS The prevalence of dental erosion was 13.0% (95% confidence interval = 9.0-17.0). There was no statistically significant difference in prevalence between boys and girls, but prevalence was higher in private schools (21.1%) than in public schools (9.7%) (P < 0.001). Labial surfaces were less often affected than palatal surfaces. Enamel loss was the most prevalent type of dental erosion (4.86 of 100 incisors). Sixty-three per cent of affected teeth showed more than a half of their surface affected. CONCLUSION The prevalence of dental erosion in 12-year-old schoolchildren living in a small city in southern Brazil appears to be lower than that seen in most of epidemiological studies carried out in different parts of the world. Further longitudinal studies should be conducted in Brazil in order to measure the incidence of dental erosion and its impact on children's quality of life.",TRUE,noun phrase
R11,Science,R30662,The prevalence of dental erosion in preschool children in China,S102308,R30663,Aim of the study,L61421,Prevalence of dental erosion,"Objective. To describe the prevalence of dental erosion and associated factors in preschool children in Guangxi and Hubei provinces of China. Methods. Dental examinations were carried out on 1949 children aged 3–5 years. Measurement of erosion was confined to primary maxillary incisors. The erosion index used was based upon the 1993 UK National Survey of Children’s Dental Health. The children’s general information as well as social background and dietary habits were collected based on a structured questionnaire. Results. A total of 112 children (5.7%) showed erosion on their maxillary incisors. Ninety-five (4.9%) was scored as being confined to enamel and 17 (0.9%) as erosion extending into dentine or pulp. There was a positive association between erosion and social class in terms of parental education. A significantly higher prevalence of erosion was observed in children whose parents had post-secondary education than those whose parents had secondary or lower level of education. There was also a correlation between the presence of dental erosion and intake of fruit drink from a feeding bottle or consumption of fruit drinks at bedtime. Conclusion. Erosion is not a serious problem for dental health in Chinese preschool children. The prevalence of erosion is associated with social and dietary factors in this sample of children.",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115559,R33407,Critical success factors,R33398,Price response capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R30608,A robust eye localization method for low quality face images,S102078,R30623,Methods,L61287,Probabilistic Cascade ,"Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.",TRUE,noun phrase
R11,Science,R30608,A robust eye localization method for low quality face images,S102145,R30631,Methods,L61325,Probabilistic cascade ,"Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.",TRUE,noun phrase
R11,Science,R31336,Composition estimations in a middle-vessel batch distillation column using artificial neural networks,S105120,R31337,Objective/estimate(s) process systems,R31335,Product compositions,"A virtual sensor that estimates product compositions in a middle-vessel batch distillation column has been developed. The sensor is based on a recurrent artificial neural network, and uses information available from secondary measurements (such as temperatures and flow rates). The criteria adopted for selecting the most suitable training data set and the benefits deriving from pre-processing these data by means of principal component analysis are demonstrated by simulation. The effects of sensor location, model initialization, and noisy temperature measurements on the performance of the soft sensor are also investigated. It is shown that the estimated compositions are in good agreement with the actual values.",TRUE,noun phrase
R11,Science,R33375,Critical factors for implementing green supply chain management practice,S115509,R33376,Critical success factors,R33372,product recycling,"Purpose – The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives.Design/methodology/approach – A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature. The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives. A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, the identified critical factors were performed via factor analysis to establish reliability and validity.Findings – The results show that 20 critical factors were extracted into four dimensions, which denominated supplier management, product recycling, organization involvement and life cycl...",TRUE,noun phrase
R11,Science,R28067,Hardware-Efficient Design of Real-Time Profile Shape Matching Stereo Vision Algorithm on FPGA,S91632,R28068,Algorithm,R27919,Profile shape matching,"A variety of platforms, such as micro-unmanned vehicles, are limited in the amount of computational hardware they can support due to weight and power constraints. An efficient stereo vision algorithm implemented on an FPGA would be able to minimize payload and power consumption in microunmanned vehicles, while providing 3D information and still leaving computational resources available for other processing tasks. This work presents a hardware design of the efficient profile shape matching stereo vision algorithm. Hardware resource usage is presented for the targeted micro-UV platform, Helio-copter, that uses the Xilinx Virtex 4 FX60 FPGA. Less than a fifth of the resources on this FGPA were used to produce dense disparity maps for image sizes up to 450 × 375, with the ability to scale up easily by increasing BRAM usage. A comparison is given of accuracy, speed performance, and resource usage of a census transform-based stereo vision FPGA implementation by Jin et al. Results show that the profile shape matching algorithm is an efficient real-time stereo vision algorithm for hardware implementation for resource limited systems such as microunmanned vehicles.",TRUE,noun phrase
R11,Science,R25959,Immunochromatographic Assay for Quantitation of Milk Progesterone.,S80043,R25960,Analyte,L50502,Progesterone in bovine milk,"We describe a rapid immunochromatographic method for the quantitation of progesterone in bovine milk. The method is based on a 'competitive' assay format using the monoclonal antibody to progesterone and a progesterone-protein conjugate labelled with colloidal gold particles. The monoclonal antibody to progesterone is immobilized as a narrow detection zone on a porous membrane. The sample is mixed with colloidal gold particles coated with progesterone-protein conjugate, and the mixture is allowed to migrate past the detection zone. Migration is facilitated by capillary forces. The amount of labelled progesterone-protein conjugate bound to the detection zone, as detected by photometric scanning, is inversely proportional to the amount of progesterone present in the sample. Analysis is complete in less than 10 min. The method has a practical detection limit of 5 ng of progesterone per ml of bovine milk.",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115792,R33522,Critical success factors,R33518,project completion experience,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115786,R33522,Critical success factors,R33512,proximity to manufacturing base,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R30702,"Tooth wear among psychiatric patients: prevalence, distribution, and associated factors",S102453,R30703,Study population,L61506,Psychiatric patients,"PURPOSE The purpose of this study was to evaluate the prevalence, distribution, and associated factors of tooth wear among psychiatric patients. MATERIALS AND METHODS Tooth wear was evaluated using the tooth wear index with scores ranging from 0 to 4. The presence of predisposing factors was recorded in 143 psychiatric patients attending the outpatient clinic at the Prince Rashed Hospital in northern Jordan. RESULTS The prevalence of a tooth wear score of 3 in at least one tooth was 90.9%. Patients in the age group 16 to 25 had the lowest prevalence (78.6%) of tooth wear. Increasing age was found to be a significant risk factor for the prevalence of tooth wear (P < .005). The occlusal/incisal surfaces were the most affected by wear, with mandibular teeth being more affected than maxillary teeth, followed by the palatal surface of the maxillary anterior teeth and then the buccal/labial surface of the mandibular teeth. The factors found to be associated with tooth wear were age, retirement and unemployment, masseter muscle pain, depression, and anxiety. CONCLUSION Patients' psychiatric condition and prescribed medication may be considered factors that influence tooth wear.",TRUE,noun phrase
R11,Science,R33489,Identifying critical enablers and pathways to high performance supply chain quality management,S115746,R33490,Critical success factors,R33488,quality management,"Purpose – The aim of this paper is threefold: first, to examine the content of supply chain quality management (SCQM); second, to identify the structure of SCQM; and third, to show ways for finding improvement opportunities and organizing individual institution's resources/actions into collective performance outcomes.Design/methodology/approach – To meet the goals of this work, the paper uses abductive reasoning and two qualitative methods: content analysis and formal concept analysis (FCA). Primary data were collected from both original design manufacturers (ODMs) and original equipment manufacturers (OEMs) in Taiwan.Findings – According to the qualitative empirical study, modern enterprises need to pay immediate attention to the following two pathways: a compliance approach and a voluntary approach. For the former, three strategic content variables are identified: training programs, ISO, and supplier quality audit programs. As for initiating a voluntary effort, modern lead firms need to instill “motivat...",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115560,R33407,Critical success factors,R33399,quality management capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R151192,"The Role of Social Media during Queensland Floods: An Empirical Investigation on the Existence of Multiple Communities of Practice (MCoPs)",S626300,R156045,Emergency Type,L431030,Queensland flood,"The notion of communities getting together during a disaster to help each other is common. However, how does this communal activity happen within the online world? Here we examine this issue using the Communities of Practice (CoP) approach. We extend CoP to multiple CoP (MCoPs) and examine the role of social media applications in disaster management, extending work done by Ahmed (2011). Secondary data in the form of newspaper reports during 2010 to 2011 were analysed to understand how social media, particularly Facebook and Twitter, facilitated the process of communication among various communities during the Queensland floods in 2010. The results of media-content analysis along with the findings of relevant literature were used to extend our existing understanding on various communities of practice involved in disaster management, their communication tasks and the role of Twitter and Facebook as common conducive platforms of communication during disaster management alongside traditional communication channels.",TRUE,noun phrase
R11,Science,R30807,Implementation of Equity in Resource Allocation for Regional Earthquake Risk Mitigation Using Two-Stage Stochastic Programming,S103809,R30947,Second-stage2,R30905,Reconstruction expenditures,"This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two‐stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user‐defined level to achieve the equity‐efficiency tradeoff in the decision‐making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk‐return tradeoff, equity‐reconstruction expenditures tradeoff, and variation of per‐capita expected earthquake loss in different income classes are also presented.",TRUE,noun phrase
R11,Science,R27366,Consideration of shot peening treatment applied to a high strength aeronautical steel with different hardnesses,S88338,R27367,Special Notes,R27364,Rotating bend,"One of the most important components in an aircraft is its landing gear, due to the high load that it is submitted to during, principally, the take off and landing. For this reason, the AISI 4340 steel is widely used in the aircraft industry for fabrication of structural components, in which strength and toughness are fundamental design requirements [1]. Fatigue is an important parameter to be considered in the behavior of mechanical components subjected to constant and variable amplitude loading. One of the known ways to improve fatigue resistance is by using the shot peening process to induce a compressive residual stress in the surface layers of the material, making the nucleation and propagation of fatigue cracks more difficult [2,3]. The shot peening results depend on various parameters. These parameters can be grouped in three different classes according to K. Fathallah et al. [4]: parameters describing the treated part, parameters of stream energy produced by the process and parameters describing the contact conditions. Furthermore, relaxation of the CRSF induced by shot peening has been observed during the fatigue process [5-7]. In the present research the gain in fatigue life of AISI 4340 steel, obtained by shot peening treatment, is evaluated under the two different hardnesses used in landing gear. Rotating bending fatigue tests were conducted and the CRSF was measured by an x-ray tensometry prior and during fatigue tests. The evaluation of fatigue life due to the shot peening in relation to the relaxation of CRSF, of crack sources position and roughness variation is done.",TRUE,noun phrase
R11,Science,R151228,"Community intelligence and social media services: A rumor theoretic analysis of tweets during social crisis",S612204,R153058,paper: Theory / Concept / Model,L422112,Rumor theory,"Recent extreme events show that Twitter, a micro-blogging service, is emerging as the dominant social reporting tool to spread information on social crises. It is elevating the online public community to the status of first responders who can collectively cope with social crises. However, at the same time, many warnings have been raised about the reliability of community intelligence obtained through social reporting by the amateur online community. Using rumor theory, this paper studies citizen-driven information processing through Twitter services using data from three social crises: the Mumbai terrorist attacks in 2008, the Toyota recall in 2010, and the Seattle cafe shooting incident in 2012. We approach social crises as communal efforts for community intelligence gathering and collective information processing to cope with and adapt to uncertain external situations. We explore two issues: (1) collective social reporting as an information processing mechanism to address crisis problems and gather community intelligence, and (2) the degeneration of social reporting into collective rumor mills. Our analysis reveals that information with no clear source provided was the most important, personal involvement next in importance, and anxiety the least yet still important rumor causing factor on Twitter under social crisis situations.",TRUE,noun phrase
R11,Science,R151228,"Community intelligence and social media services: A rumor theoretic analysis of tweets during social crisis",S626427,R156063,paper: Theory / Construct / Model,L431139,Rumor theory,"Recent extreme events show that Twitter, a micro-blogging service, is emerging as the dominant social reporting tool to spread information on social crises. It is elevating the online public community to the status of first responders who can collectively cope with social crises. However, at the same time, many warnings have been raised about the reliability of community intelligence obtained through social reporting by the amateur online community. Using rumor theory, this paper studies citizen-driven information processing through Twitter services using data from three social crises: the Mumbai terrorist attacks in 2008, the Toyota recall in 2010, and the Seattle cafe shooting incident in 2012. We approach social crises as communal efforts for community intelligence gathering and collective information processing to cope with and adapt to uncertain external situations. We explore two issues: (1) collective social reporting as an information processing mechanism to address crisis problems and gather community intelligence, and (2) the degeneration of social reporting into collective rumor mills. Our analysis reveals that information with no clear source provided was the most important, personal involvement next in importance, and anxiety the least yet still important rumor causing factor on Twitter under social crisis situations.",TRUE,noun phrase
R11,Science,R25473,Toward a Run-to-Run Adaptive Artificial Pancreas: In Silico Results,S76471,R25474,Method,L47726,run-to-run (R2R) approach,"Objective: Contemporary and future outpatient long-term artificial pancreas (AP) studies need to cope with the well-known large intra- and interday glucose variability occurring in type 1 diabetic (T1D) subjects. Here, we propose an adaptive model predictive control (MPC) strategy to account for it and test it in silico. Methods: A run-to-run (R2R) approach adapts the subcutaneous basal insulin delivery during the night and the carbohydrate-to-insulin ratio (CR) during the day, based on some performance indices calculated from subcutaneous continuous glucose sensor data. In particular, R2R aims, first, to reduce the percentage of time in hypoglycemia and, secondarily, to improve the percentage of time in euglycemia and average glucose. In silico simulations are performed by using the University of Virginia/Padova T1D simulator enriched by incorporating three novel features: intra- and interday variability of insulin sensitivity, different distributions of CR at breakfast, lunch, and dinner, and dawn phenomenon. Results: After about two months, using the R2R approach with a scenario characterized by a random ±30% variation of the nominal insulin sensitivity, the time in range and the time in tight range are increased by 11.39% and 44.87%, respectively, and the time spent above 180 mg/dl is reduced by 48.74%. Conclusions: An adaptive MPC algorithm based on R2R shows in silico great potential to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia. Significance: Making an AP adaptive is key for long-term real-life outpatient studies. These good in silico results are very encouraging and worth testing in vivo.",TRUE,noun phrase
R11,Science,R28200,Empty container reposition planning for intra-Asia liner shipping,S92206,R28201,Inventory policy,R28199,Safety stock,This paper addresses empty container reposition planning by plainly considering safety stock management and geographical regions. This plan could avoid drawback in practice which collects mass empty containers at a port then repositions most empty containers at a time. Empty containers occupy slots on vessel and the liner shipping company loses chance to yield freight revenue. The problem is drawn up as a two-stage problem. The upper problem is identified to estimate the empty container stock at each port and the lower problem models the empty container reposition planning with shipping service network as the Transportation Problem by Linear Programming. We looked at case studies of the Taiwan Liner Shipping Company to show the application of the proposed model. The results show the model provides optimization techniques to minimize cost of empty container reposition and to provide an evidence to adjust strategy of restructuring the shipping service network.,TRUE,noun phrase
R11,Science,R34053,"Coexistence of anadromous and lacustrine life histories of the shirauo, Salangichthys microdon",S118120,R34054,Species Order,R34052,Salangichthys microdon,"The environmental history of the shirauo, Salangichthys microdon, was examined in terms of strontium (Sr) and calcium (Ca) uptake in the otolith, by means of wavelength dispersive X-ray spectrometry on an electron microprobe. Anadromous and lacustrine type of the shirauo were found to occur sympatric. Otolith Sr concentration or Sr : Ca ratios of anadromous shirauo fluctuated strongly along the life-history transect in accordance with the migration (habitat) pattern from sea to freshwater. In contrast, the Sr concentration or the Sr : Ca ratios of lacustrine shirauo remained at consistently low levels throughout the otolith. The higher ratios in anadromous shirauo, in the otolith region from the core to 90–230 μm, corresponded to the initial sea-going period, probably reflecting the ambient salinity or the seawater–freshwater gradient in Sr concentration. The findings clearly indicated that otolith Sr : Ca ratios reflected individual life histories, enabling these anadromous shirauo to be distinguished from lacustrine shirauo.",TRUE,noun phrase
R11,Science,R34050,Evidence of multiple migrations between freshwater and marine habitats of Salvelinus leucomaenis.,S118107,R34051,Species Order,R34048,Salvelinus leucomaenis,The migratory history of the white-spotted charr Salvelinus leucomaenis was examined using otolith microchemical analysis. The fish migrated between freshwater and marine environments multiple times during their life history. Some white-spotted charr used an estuarine habitat prior to smolting and repeated seaward migration within a year.,TRUE,noun phrase
R11,Science,R26343,Scenario Tree-Based Heuristics for Stochastic Inventory-Routing Problems,S82614,R26344,approach,R26338,scenario tree,"In vendor-managed inventory replenishment, the vendor decides when to make deliveries to customers, how much to deliver, and how to combine shipments using the available vehicles. This gives rise to the inventory-routing problem in which the goal is to coordinate inventory replenishment and transportation to minimize costs. The problem tackled in this paper is the stochastic inventory-routing problem, where stochastic demands are specified through general discrete distributions. The problem is formulated as a discounted infinite-horizon Markov decision problem. Heuristics based on finite scenario trees are developed. Computational results confirm the efficiency of these heuristics.",TRUE,noun phrase
R11,Science,R32656,A sea-land segmentation scheme based on statistical model of sea,S111371,R32657,Main purpose,R32655,Sea-land segmentation,"Sea-land segmentation is a key step for target detection. Due to the complex texture and uneven gray value of the land in optical remote sensing image, traditional sea-land segmentation algorithms often recognize land as sea incorrectly. A new segmentation scheme is presented in this paper to solve this problem. This scheme determines the threshold according to the adaptively established statistical model of the sea area, and removes the incorrectly classified land according to the difference of the variance in the statistical model between land and sea. Experimental results show our segmentation scheme has small computation complexity, and it has better performance and higher robustness compared to the traditional algorithms.",TRUE,noun phrase
R11,Science,R32559,The Potential for Using Very High Spatial Resolution Imagery for Marine Search and Rescue Surveillance,S110770,R32560,Main purpose,R32558,Search and rescue,"Abstract Recreational boating activities represent one of the highest risk populations in the marine environment. Moreover, there is a trend of increased risk exposure by recreational boaters such as those who undertake adventure tourism, sport fishing/hunting, and personal watercraft (PWC) activities. When trying to plan search and rescue activities, there are data deficiencies regarding inventories, activity type, and spatial location of small, recreational boats. This paper examines the current body of research in the application of remote sensing technology in marine search and rescue. The research suggests commercially available very high spatial resolution satellite (VHSR) imagery can be used to detect small recreational vessels using a sub‐pixel detection methodology. The sub‐pixel detection method utilizes local image statistics based on spatio‐spectral considerations. This methodology would have to be adapted for use with VHSR imagery as it was originally used in hyperspectral imaging. Further, the authors examine previous research on ‘target characterization’ which uses a combination of spectral based classification, and context based feature extraction to generate information such as: length, heading, position, and material of construction for target vessels. This technique is based on pixel‐based processing used in generic digital image processing and computer vision. Finally, a preliminary recreational vessel surveillance system ‐ called Marine Recreational Vessel Reconnaissance (MRV Recon) is tested on some modified VHSR imagery.",TRUE,noun phrase
R11,Science,R32573,An Enhanced Spatio-spectral Template for Automatic Small Recreational Vessel Detection,S110861,R32574,Main purpose,R32558,Search and rescue,"This paper examines the performance of a spatiospectral template on Ikonos imagery to automatically detect small recreational boats. The spatiospectral template is utilized and then enhanced through the use of a weighted Euclidean distance metric adapted from the Mahalanobis distance metric. The aim is to assist the Canadian Coast Guard in gathering data on recreational boating for the modeling of search and rescue incidence risk. To test the detection accuracy of the enhanced spatiospectral template, a dataset was created by gathering position and attribute data for 53 recreational vessel targets purposely moored for this research within Cadboro Bay, British Columbia, Canada. The Cadboro Bay study site containing the targets was imaged using Ikonos. Overall detection accuracy was 77%. Targets were broken down into 2 categories: 1) Category A-less than 6 m in length, and Category B-more than 6 m long. The detection rate for Category B targets was 100%, while the detection rate for Category A targets was 61%. It is important to note that some Category A targets were intentionally selected for their small size to test the detection limits of the enhanced spatiospectral template. The smallest target detected was 2.2 m long and 1.1 m wide. The analysis also revealed that the ability to detect targets between 2.2 and 6 m long was diminished if the target was dark in color.",TRUE,noun phrase
R11,Science,R34599,L-diversity: privacy beyond k-anonymity,S120569,R34600,Background information,R34509,Sensitive attributes,"Publishing data about individuals without revealing sensitive information about them is an important problem. In recent years, a new definition of privacy called k-anonymity has gained popularity. In a k-anonymized dataset, each record is indistinguishable from at least k−1 other records with respect to certain ""identifying"" attributes. In this paper we show with two simple attacks that a k-anonymized dataset has some subtle, but severe privacy problems. First, we show that an attacker can discover the values of sensitive attributes when there is little diversity in those sensitive attributes. Second, attackers often have background knowledge, and we show that k-anonymity does not guarantee privacy against attackers using background knowledge. We give a detailed analysis of these two attacks and we propose a novel and powerful privacy definition called ℓ-diversity. In addition to building a formal foundation for ℓ-diversity, we show in an experimental evaluation that ℓ-diversity is practical and can be implemented efficiently.",TRUE,noun phrase
R11,Science,R25549,Model-Driven Serious Game Development Integration of the Gamification Modeling Language GaML with Unity,S77004,R25550,Game Genres,L48151,Serious Games,"The development of gamification within non-game information systems as well as serious games has recently gained an important role in a variety of business fields due to promising behavioral or psychological improvements. However, industries still struggle with the high efforts of implementing gameful affordances in non-game systems. In order to decrease factors such as project costs, development cycles, and resource consumption as well as to improve the quality of products, the gamification modeling language has been proposed in prior research. However, the language is on a descriptive level only, i.e., cannot be used to automatically generate executable software artifacts. In this paper and based on this language, we introduce a model-driven architecture for designing as well as generating building blocks for serious games. Furthermore, we give a validation of our approach by going through the different steps of designing an achievement system in the context of an existing serious game.",TRUE,noun phrase
R11,Science,R25573,Models and mechanisms for implementing playful scenarios,S77147,R25574,Game Genres,L48258,Serious Games,"Serious games are becoming an increasingly used alternative in technical/professional/academic fields. However, scenario development poses a challenging problem since it is an expensive task, only devoted to computer specialists (game developers, programmers…). The ultimate goal of our work is to propose a new scenario-building approach capable of ensuring a high degree of deployment and reusability. Thus, we will define in this paper a new generation mechanism. This mechanism is built upon a model driven architecture (MDA). We have started up by enriching the existing standards, which resulted in defining a new generic meta-model (CIM). The resulting meta-model is capable of describing and standardizing game scenarios. Then, we have laid down a new transformational mechanism in order to integrate the indexed game components into operational platforms (PSM). Finally, the effectiveness of our strategy was assessed under two separate contexts (target platforms) : the claroline-connect platform and the unity 3D environment.",TRUE,noun phrase
R11,Science,R33205,An Exploratory Study of the Success Factors for Extranet Adoption in E-Supply Chain,S115205,R33206,Critical success factors,R33203,service quality,"Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.",TRUE,noun phrase
R11,Science,R33280,Identifying the factors influencing the performance of reverse supply chains (RSC),S115334,R33281,Critical success factors,R33203,service quality,"This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structural equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model. We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web‐based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.",TRUE,noun phrase
R11,Science,R27007,Optimal fleet design in a ship routing problem,S86811,R27008,Method,R26813,set partitioning,"Abstract The problem of deciding an optimal fleet (the type of ships and the number of each type) in a real liner shipping problem is considered. The liner shipping problem is a multi-trip vehicle routing problem, and consists of deciding weekly routes for the selected ships. A solution method consisting of three phases is presented. In phase 1, all feasible single routes are generated for the largest ship available. Some of these routes will use only a small portion of the ship’s capacity and can be performed by smaller ships at less cost. This fact is used when calculating the cost of each route. In phase 2, the single routes generated in phase 1 are combined into multiple routes. By solving a set partitioning problem (phase 3), where the columns are the routes generated in phases 1 and 2, we find both the optimal fleet and the coherent routes for the fleet.",TRUE,noun phrase
R11,Science,R27018,Robust ship scheduling with multiple time windows,S86874,R27019,Method,R26813,set partitioning,"We present a ship scheduling problem concerned with the pickup and delivery of bulk cargoes within given time windows. As the ports are closed for service at night and during weekends, the wide time windows can be regarded as multiple time windows. Another issue is that the loading/discharging times of cargoes may take several days. This means that a ship will stay idle much of the time in port, and the total time at port will depend on the ship's arrival time. Ship scheduling is associated with uncertainty due to bad weather at sea and unpredictable service times in ports. Our objective is to make robust schedules that are less likely to result in ships staying idle in ports during the weekend, and impose penalty costs for arrivals at risky times (i.e., close to weekends). A set partitioning approach is proposed to solve the problem. The columns correspond to feasible ship schedules that are found a priori. They are generated taking the uncertainty and multiple time windows into account. The computational results show that we can increase the robustness of the schedules at the sacrifice of increased transportation costs. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 611–625, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10033",TRUE,noun phrase
R11,Science,R30805,Bi-objective stochastic programming models for determining depot locations in disaster relief operations,S104154,R31098,Features,R31097,Several model variants,"This paper presents two-stage bi-objective stochastic programming models for disaster relief operations. We consider a problem that occurs in the aftermath of a natural disaster: a transportation system for supplying disaster victims with relief goods must be established. We propose bi-objective optimization models with a monetary objective and humanitarian objective. Uncertainty in the accessibility of the road network is modeled by a discrete set of scenarios. The key features of our model are the determination of locations for intermediate depots and acquisition of vehicles. Several model variants are considered. First, the operating budget can be fixed at the first stage for all possible scenarios or determined for each scenario at the second stage. Second, the assignment of vehicles to a depot can be either fixed or free. Third, we compare a heterogeneous vehicle fleet to a homogeneous fleet. We study the impact of the variants on the solutions. The set of Pareto-optimal solutions is computed by applying the adaptive Epsilon-constraint method. We solve the deterministic equivalents of the two-stage stochastic programs using the MIP-solver CPLEX.",TRUE,noun phrase
R11,Science,R33447,Linking Success Factors to Financial Performance,S115638,R33448,Critical success factors,R33446,skilled logistics,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun phrase
R11,Science,R33802,Assessment of algorithms for high throughput detection of genomic copy number variation in oligonucleotide microarray data,S117212,R33803,Platform,R33800,SNP array,"Abstract Background Genomic deletions and duplications are important in the pathogenesis of diseases, such as cancer and mental retardation, and have recently been shown to occur frequently in unaffected individuals as polymorphisms. Affymetrix GeneChip whole genome sampling analysis (WGSA) combined with 100 K single nucleotide polymorphism (SNP) genotyping arrays is one of several microarray-based approaches that are now being used to detect such structural genomic changes. The popularity of this technology and its associated open source data format have resulted in the development of an increasing number of software packages for the analysis of copy number changes using these SNP arrays. Results We evaluated four publicly available software packages for high throughput copy number analysis using synthetic and empirical 100 K SNP array data sets, the latter obtained from 107 mental retardation (MR) patients and their unaffected parents and siblings. We evaluated the software with regards to overall suitability for high-throughput 100 K SNP array data analysis, as well as effectiveness of normalization, scaling with various reference sets and feature extraction, as well as true and false positive rates of genomic copy number variant (CNV) detection. Conclusion We observed considerable variation among the numbers and types of candidate CNVs detected by different analysis approaches, and found that multiple programs were needed to find all real aberrations in our test set. The frequency of false positive deletions was substantial, but could be greatly reduced by using the SNP genotype information to confirm loss of heterozygosity.",TRUE,noun phrase
R11,Science,R33819,The Effect of Algorithms on Copy Number Variant Detection,S117282,R33820,Platform,R33800,SNP array,"Background The detection of copy number variants (CNVs) and the results of CNV-disease association studies rely on how CNVs are defined, and because array-based technologies can only infer CNVs, CNV-calling algorithms can produce vastly different findings. Several authors have noted the large-scale variability between CNV-detection methods, as well as the substantial false positive and false negative rates associated with those methods. In this study, we use variations of four common algorithms for CNV detection (PennCNV, QuantiSNP, HMMSeg, and cnvPartition) and two definitions of overlap (any overlap and an overlap of at least 40% of the smaller CNV) to illustrate the effects of varying algorithms and definitions of overlap on CNV discovery. Methodology and Principal Findings We used a 56 K Illumina genotyping array enriched for CNV regions to generate hybridization intensities and allele frequencies for 48 Caucasian schizophrenia cases and 48 age-, ethnicity-, and gender-matched control subjects. No algorithm found a difference in CNV burden between the two groups. However, the total number of CNVs called ranged from 102 to 3,765 across algorithms. The mean CNV size ranged from 46 kb to 787 kb, and the average number of CNVs per subject ranged from 1 to 39. The number of novel CNVs not previously reported in normal subjects ranged from 0 to 212. Conclusions and Significance Motivated by the availability of multiple publicly available genome-wide SNP arrays, investigators are conducting numerous analyses to identify putative additional CNVs in complex genetic disorders. However, the number of CNVs identified in array-based studies, and whether these CNVs are novel or valid, will depend on the algorithm(s) used. Thus, given the variety of methods used, there will be many false positives and false negatives. Both guidelines for the identification of CNVs inferred from high-density arrays and the establishment of a gold standard for validation of CNVs are needed.",TRUE,noun phrase
R11,Science,R33824,Accuracy of CNV Detection from GWAS Data,S117305,R33825,Platform,R33800,SNP array,"Several computer programs are available for detecting copy number variants (CNVs) using genome-wide SNP arrays. We evaluated the performance of four CNV detection software suites—Birdsuite, Partek, HelixTree, and PennCNV-Affy—in the identification of both rare and common CNVs. Each program's performance was assessed in two ways. The first was its recovery rate, i.e., its ability to call 893 CNVs previously identified in eight HapMap samples by paired-end sequencing of whole-genome fosmid clones, and 51,440 CNVs identified by array Comparative Genome Hybridization (aCGH) followed by validation procedures, in 90 HapMap CEU samples. The second evaluation was program performance calling rare and common CNVs in the Bipolar Genome Study (BiGS) data set (1001 bipolar cases and 1033 controls, all of European ancestry) as measured by the Affymetrix SNP 6.0 array. Accuracy in calling rare CNVs was assessed by positive predictive value, based on the proportion of rare CNVs validated by quantitative real-time PCR (qPCR), while accuracy in calling common CNVs was assessed by false positive/false negative rates based on qPCR validation results from a subset of common CNVs. Birdsuite recovered the highest percentages of known HapMap CNVs containing >20 markers in two reference CNV datasets. The recovery rate increased with decreased CNV frequency. In the tested rare CNV data, Birdsuite and Partek had higher positive predictive values than the other software suites. In a test of three common CNVs in the BiGS dataset, Birdsuite's call was 98.8% consistent with qPCR quantification in one CNV region, but the other two regions showed an unacceptable degree of accuracy. We found relatively poor consistency between the two “gold standards,” the sequence data of Kidd et al., and aCGH data of Conrad et al. Algorithms for calling CNVs especially common ones need substantial improvement, and a “gold standard” for detection of CNVs remains to be established.",TRUE,noun phrase
R11,Science,R151216,All Crises Opportunities? A Comparison of How Corporate and Government Organizations Responded to the 2009 Flu Pandemic.,S626383,R156057,Technology,L431101,social media,"Through a quantitative content analysis, this study applies situational crisis communication theory (SCCT) to investigate how 13 corporate and government organizations responded to the first phase of the 2009 flu pandemic. The results indicate that government organizations emphasized providing instructing information to their primary publics such as guidelines about how to respond to the crisis. On the other hand, organizations representing corporate interests emphasized reputation management in their crisis responses, frequently adopting denial, diminish, and reinforce response strategies. In addition, both government and corporate organizations used social media more often than traditional media in responding to the crisis. Finally, the study expands SCCT's response options.",TRUE,noun phrase
R11,Science,R151232,Synchronizing Crisis Responses after a Transgression: An Analysis of BP's Enacted Crisis Response to the Deepwater Horizon Crisis in 2010.,S626437,R156065,Technology,L431147,social media,"Purpose – With the explosion of the Deepwater Horizon oil well in the Gulf of Mexico on April 20, 2010 and until the well was officially “killed” on September 19, 2010, British Petroleum (BP) did not merely experience a crisis but a five‐month marathon of sustained, multi‐media engagement. Whereas traditional public relations theory teaches us that an organization should synchronize its messages across channels, there are no models to understand how an organization may strategically coordinate public relations messaging across traditional and social media platforms. This is especially important in the new media environment where social media (e.g. Facebook and Twitter) are increasingly being used in concert with traditional public relations tools (e.g. press releases) as a part of an organization's stakeholder engagement strategy. This paper seeks to address these issues.Design/methodology/approach – The present study is a content analysis examining all of BP's press releases (N=126), its Facebook posts (...",TRUE,noun phrase
R11,Science,R151238,Social Media and Emergency Management: Exploring State and Local Tweets,S626463,R156068,Technology,L431170,social media,"Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.",TRUE,noun phrase
R11,Science,R151256,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011",S626531,R156077,Technology,L431229,social media,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,noun phrase
R11,Science,R151258,"Role of Social Media in Social Change: An Analysis of Collective Sense Making During the 2011 Egypt Revolution",S626544,R156078,Technology,L431241,social media,"This study explores the role of social media in social change by analyzing Twitter data collected during the 2011 Egypt Revolution. Particular attention is paid to the notion of collective sense making, which is considered a critical aspect for the emergence of collective action for social change. We suggest that collective sense making through social media can be conceptualized as human-machine collaborative information processing that involves an interplay of signs, Twitter grammar, humans, and social technologies. We focus on the occurrences of hashtags among a high volume of tweets to study the collective sense-making phenomena of milling and keynoting. A quantitative Markov switching analysis is performed to understand how the hashtag frequencies vary over time, suggesting structural changes that depict the two phenomena. We further explore different hashtags through a qualitative content analysis and find that, although many hashtags were used as symbolic anchors to funnel online users' attention to the Egypt Revolution, other hashtags were used as part of tweet sentences to share changing situational information. We suggest that hashtags functioned as a means to collect information and maintain situational awareness during the unstable political situation of the Egypt Revolution.",TRUE,noun phrase
R11,Science,R151276,"Terse Messaging and Public Health in the Midst of Natural Disasters: The Case of the Boulder Floods",S626611,R156087,Technology,L431299,social media,"Social media are quickly becoming the channel of choice for disseminating emergency warning messages. However, relatively little data-driven research exists to inform effective message design when using these media. The present study addresses that void by examining terse health-related warning messages sent by public safety agencies over Twitter during the 2013 Boulder, CO, floods. An examination of 5,100 tweets from 52 Twitter accounts over the course of the 5-day flood period yielded several key conclusions and implications. First, public health messages posted by local emergency management leaders are most frequently retweeted by organizations in our study. Second, emergency public health messages focus primarily on drinking water in this event. Third, terse messages can be designed in ways that include imperative/instructional and declarative/explanatory styles of content, both of which are essential for promoting public health during crises. These findings demonstrate that even terse messages delivered via Twitter ought to provide information about the hazard event, its impact, and actionable instructions for self-protection.",TRUE,noun phrase
R11,Science,R151302,"Digitally enabled disaster response: the emergence of social media as boundary objects in a flooding disaster",S626685,R156098,Technology,L431362,social media,"In recent times, social media has been increasingly playing a critical role in response actions following natural catastrophes. From facilitating the recruitment of volunteers during an earthquake to supporting emotional recovery after a hurricane, social media has demonstrated its power in serving as an effective disaster response platform. Based on a case study of Thailand flooding in 2011 – one of the worst flooding disasters in more than 50 years that left the country severely impaired – this paper provides an in‐depth understanding on the emergent roles of social media in disaster response. Employing the perspective of boundary object, we shed light on how different boundary spanning competences of social media emerged in practice to facilitate cross‐boundary response actions during a disaster, with an aim to promote further research in this area. We conclude this paper with guidelines for response agencies and impacted communities to deploy social media for future disaster response.",TRUE,noun phrase
R11,Science,R152934,All Crises Opportunities? A Comparison of How Corporate and Government Organizations Responded to the 2009 Flu Pandemic.”,S616570,R153865,Technology,L425169,social media,"Through a quantitative content analysis, this study applies situational crisis communication theory (SCCT) to investigate how 13 corporate and government organizations responded to the first phase of the 2009 flu pandemic. The results indicate that government organizations emphasized providing instructing information to their primary publics such as guidelines about how to respond to the crisis. On the other hand, organizations representing corporate interests emphasized reputation management in their crisis responses, frequently adopting denial, diminish, and reinforce response strategies. In addition, both government and corporate organizations used social media more often than traditional media in responding to the crisis. Finally, the study expands SCCT's response options.",TRUE,noun phrase
R11,Science,R152943,Synchronizing Crisis Responses after a Transgression: An Analysis of BP’s Enacted Crisis Response to the Deepwater Horizon Crisis in 2010.”,S613873,R153453,Technology,L423045,social media,"Purpose – With the explosion of the Deepwater Horizon oil well in the Gulf of Mexico on April 20, 2010 and until the well was officially “killed” on September 19, 2010, British Petroleum (BP) did not merely experience a crisis but a five‐month marathon of sustained, multi‐media engagement. Whereas traditional public relations theory teaches us that an organization should synchronize its messages across channels, there are no models to understand how an organization may strategically coordinate public relations messaging across traditional and social media platforms. This is especially important in the new media environment where social media (e.g. Facebook and Twitter) are increasingly being used in concert with traditional public relations tools (e.g. press releases) as a part of an organization's stakeholder engagement strategy. This paper seeks to address these issues.Design/methodology/approach – The present study is a content analysis examining all of BP's press releases (N=126), its Facebook posts (...",TRUE,noun phrase
R11,Science,R153561,"Synchronizing Crisis Responses after a Transgression: An Analysis of BP's Enacted Crisis Response to the Deepwater Horizon Crisis in 2010.""",S616624,R153873,Technology,L425215,social media,"Purpose – With the explosion of the Deepwater Horizon oil well in the Gulf of Mexico on April 20, 2010 and until the well was officially “killed” on September 19, 2010, British Petroleum (BP) did not merely experience a crisis but a five‐month marathon of sustained, multi‐media engagement. Whereas traditional public relations theory teaches us that an organization should synchronize its messages across channels, there are no models to understand how an organization may strategically coordinate public relations messaging across traditional and social media platforms. This is especially important in the new media environment where social media (e.g. Facebook and Twitter) are increasingly being used in concert with traditional public relations tools (e.g. press releases) as a part of an organization's stakeholder engagement strategy. This paper seeks to address these issues.Design/methodology/approach – The present study is a content analysis examining all of BP's press releases (N=126), its Facebook posts (...",TRUE,noun phrase
R11,Science,R153575,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011.",S616718,R153885,Technology,L425297,social media,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,noun phrase
R11,Science,R26519,Effects of natural products on soil organisms and plant health enhancement,S83421,R26520,Applications,R26518,Soil and plant revitalizer,"TerraPy, Magic Wet and Chitosan are soil and plant revitalizers based on natural renewable raw materials. These products stimulate microbial activity in the soil and promote plant growth. Their importance to practical agriculture can be seen in their ability to improve soil health, especially where intensive cultivation has shifted the biological balance in the soil ecosystem to high numbers of plant pathogens. The objective of this study was to investigate the plant beneficial capacities of TerraPy, Magic Wet and Chitosan and to evaluate their effect on bacterial and nematode communities in soils. Tomato seedlings (Lycopersicum esculentum cv. Hellfrucht Frühstamm) were planted into pots containing a sand/soil mixture (1:1, v/v) and were treated with TerraPy, Magic Wet and Chitosan at 200 kg/ha. At 0, 1, 3, 7 and 14 days after inoculation the following soil parameters were evaluated: soil pH, bacterial and fungal population density (cfu/g soil), total number of saprophytic and plant-parasitic nematodes. At the final sampling date tomato shoot and root fresh weight as well as Meloidogyne infestation was recorded. Plant growth was lowest and nematode infestation was highest in the control. Soil bacterial population densities increased within 24 hours after treatment between 4-fold (Magic Wet) and 19-fold (Chitosan). Bacterial richness and diversity were not significantly altered. Dominant bacterial genera were Acinetobacter (41%) and Pseudomonas (22%) for TerraPy, Pseudomonas (30%) and Acinetobacter (13%) for Magic Wet, Acinetobacter (8.9%) and Pseuodomonas (81%) for Chitosan and Bacillus (42%) and Pseudomonas (32%) for the control. Increased microbial activity also was associated with higher numbers of saprophytic nematodes. The results demonstrated the positive effects of natural products in stimulating soil microbial activity and thereby the antagonistic potential in soils leading to a reduction in nematode infestation and improved plant growth.",TRUE,noun phrase
R11,Science,R26192,Deliveries in an inventory/routing problem using stochastic dynamic programming,S81786,R26193,approach,R26191,Stochastic dynamic programming,"An industrial gases tanker vehicle visitsn customers on a tour, with a possible ( n + 1)st customer added at the end. The amount of needed product at each customer is a known random process, typically a Wiener process. The objective is to adjust dynamically the amount of product provided on scene to each customer so as to minimize total expected costs, comprising costs of earliness, lateness, product shortfall, and returning to the depot nonempty. Earliness costs are computed by invocation of an annualized incremental cost argument. Amounts of product delivered to each customer are not known until the driver is on scene at the customer location, at which point the customer is either restocked to capacity or left with some residual empty capacity, the policy determined by stochastic dynamic programming. The methodology has applications beyond industrial gases.",TRUE,noun phrase
R11,Science,R26936,Vehicle fleet planning the road transportation industry,S86536,R26937,Method,R26934,Stochastic programming,"Planning the composition of a vehicle fleet in order to satisfy transportation service demands is an important resource management activity for any trucking company. Its complexity is such, however, that formal fleet management cannot be done adequately without the help of a decision support system. An important part of such a system is the generation of minimal discounted cost plans covering the purchase, replacement, sale, and/or rental of the vehicles necessary to deal with a seasonal stochastic demand. A stochastic programming model is formulated to address this problem. It reduces to a separable program based on information about the service demand, the state of the current fleet, and the cash flows generated by an acquisition/disposal plan. An efficient algorithm for solving the model is also presented. The discussion concerns the operations of a number of Canadian road carriers. >",TRUE,noun phrase
R11,Science,R26295,Heuristics for a One-Warehouse Multiretailer Distribution Problem with Performance Bounds,S82334,R26296,approach,R26294,Submodular approximation,"We investigate the one warehouse multiretailer distribution problem with traveling salesman tour vehicle routing costs. We model the system in the framework of the more general production/distribution system with arbitrary non-negative monotone joint order costs. We develop polynomial time heuristics whose policy costs are provably close to the cost of an optimal policy. In particular, we show that given a submodular function which is close to the true order cost then we can find a power-of-two policy whose cost is only moderately greater than the cost of an optimal policy. Since such submodular approximations exist for traveling salesman tour vehicle routing costs we present a detailed description of heuristics for the one warehouse multiretailer distribution problem. We formulate a nonpolynomial dynamic program that computes optimal power-of-two policies for the one warehouse multiretailer system assuming only that the order costs are non-negative monotone. Finally, we perform computational tests which compare our heuristics to optimal power of two policies for problems of up to sixteen retailers. We also perform computational tests on larger problems; these tests give us insight into what policies one should employ.",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115787,R33522,Critical success factors,R33513,supplier certification,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33375,Critical factors for implementing green supply chain management practice,S115508,R33376,Critical success factors,R33371,Supplier management,"Purpose – The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives.Design/methodology/approach – A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature. The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives. A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, the identified critical factors were performed via factor analysis to establish reliability and validity.Findings – The results show that 20 critical factors were extracted into four dimensions, which denominated supplier management, product recycling, organization involvement and life cycl...",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115793,R33522,Critical success factors,R33519,supplier status,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33261,Supply Base Reduction: An Empirical Study of Critical Success Factors,S115298,R33262,SCM field,R33253,Supply base reduction,"SUMMARY One important factor in the design of an organization's supply chain is the number of suppliers used for a given product or service. Supply base reduction is one option useful in managing the supply base. The current paper reports the results of case studies in 10 organizations that recently implemented supply base reduction activities. Specifically, the paper identifies the key success factors in supply base reduction efforts and prescribes processes to capture the benefits of supply base reduction.",TRUE,noun phrase
R11,Science,R33305,Supply chain management in SMEs: development of constructs and propositions,S115376,R33306,Critical success factors,R33299,supply chain integration,"Purpose – The purpose of this paper is to review the literature on supply chain management (SCM) practices in small and medium scale enterprises (SMEs) and outlines the key insights.Design/methodology/approach – The paper describes a literature‐based research that has sought understand the issues of SCM for SMEs. The methodology is based on critical review of 77 research papers from high‐quality, international refereed journals. Mainly, issues are explored under three categories – supply chain integration, strategy and planning and implementation. This has supported the development of key constructs and propositions.Findings – The research outcomes are three fold. Firstly, paper summarizes the reported literature and classifies it based on their nature of work and contributions. Second, paper demonstrates the overall approach towards the development of constructs, research questions, and investigative questions leading to key proposition for the further research. Lastly, paper outlines the key findings an...",TRUE,noun phrase
R11,Science,R33447,Linking Success Factors to Financial Performance,S115639,R33448,Critical success factors,R33299,supply chain integration,"Problem statement: Based on a literature survey, an attempt has been made in this study to develop a framework for identifying the success factors. In addition, a list of key success factors is presented. The emphasis is on success factors dealing with breadth of services, internationalization of operations, industry focus, customer focus, 3PL experience, relationship with 3PLs, investment in quality assets, investment in information systems, availability of skilled professionals and supply chain integration. In developing the factors an effort has been made to align and relate them to financial performance. Conclusion/Recommendations: We found success factors “relationship with 3PLs and skilled logistics professionals” would substantially improves financial performance metric profit growth. Our findings also contribute to managerial practice by offering a benchmarking tool that can be used by managers in the 3PL service provider industry in India.",TRUE,noun phrase
R11,Science,R33461,Supply chain management: success factors from the Malaysian manufacturer's perspective,S115673,R33462,SCM field,R33457,Supply chain performance,"The purpose of this paper is to shed the light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationship with customer and supplier, information communication and technology (ICT), material flow management, corporate culture and performance measurement. Questionnaire was the main instrument for the study and it was distributed to 84 staff from departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. The findings show that there are relationships exist between relationship with customer and supplier, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not for corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future study to determine additional success factors that are pertinent to firms’ current SCM strategies and directions, competitive advantages and missions. Logic suggests that further study to include more geographical data coverage, other nature of businesses and research instruments. Key words: Supply chain management, critical success factor.",TRUE,noun phrase
R11,Science,R33287,Implementing supply chain quality management,S115353,R33288,SCM field,R33282,Supply chain quality management,"This paper describes a strategic framework for the development of supply chain quality management (SCQM). The framework integrates both vision- and gap-driven change approaches to evaluate not only the implementation gaps but also their potential countermeasures. Based on literature review, drivers of supply chain quality are identified. They are: supply chain competence, critical success factors (CSF), strategic components, and SCQ practices/activities/programmes. Based on SCQM literature, five survey items are also presented in this study for each drive. The Analytic Hierarchy Process (AHP) is used to develop priority indices for these survey items. Knowledge of these critical dimensions and possible implementation discrepancies could help multinational enterprises and their supply chain partners lay out effective and efficient SCQM plans.",TRUE,noun phrase
R11,Science,R33489,Identifying critical enablers and pathways to high performance supply chain quality management,S115749,R33490,SCM field,R33282,Supply chain quality management,"Purpose – The aim of this paper is threefold: first, to examine the content of supply chain quality management (SCQM); second, to identify the structure of SCQM; and third, to show ways for finding improvement opportunities and organizing individual institution's resources/actions into collective performance outcomes.Design/methodology/approach – To meet the goals of this work, the paper uses abductive reasoning and two qualitative methods: content analysis and formal concept analysis (FCA). Primary data were collected from both original design manufacturers (ODMs) and original equipment manufacturers (OEMs) in Taiwan.Findings – According to the qualitative empirical study, modern enterprises need to pay immediate attention to the following two pathways: a compliance approach and a voluntary approach. For the former, three strategic content variables are identified: training programs, ISO, and supplier quality audit programs. As for initiating a voluntary effort, modern lead firms need to instill “motivat...",TRUE,noun phrase
R11,Science,R70608,Automated Detection of Postoperative Surgical Site Infections Using Supervised Methods with Electronic Health Record Data,S336068,R70609,Infection,L242822,Surgical Site Infection,"The National Surgical Quality Improvement Project (NSQIP) is widely recognized as “the best in the nation” surgical quality improvement resource in the United States. In particular, it rigorously defines postoperative morbidity outcomes, including surgical adverse events occurring within 30 days of surgery. Due to its manual yet expensive construction process, the NSQIP registry is of exceptionally high quality, but its high cost remains a significant bottleneck to NSQIP’s wider dissemination. In this work, we propose an automated surgical adverse events detection tool, aimed at accelerating the process of extracting postoperative outcomes from medical charts. As a prototype system, we combined local EHR data with the NSQIP gold standard outcomes and developed machine learned models to retrospectively detect Surgical Site Infections (SSI), a particular family of adverse events that NSQIP extracts. The built models have high specificity (from 0.788 to 0.988) as well as very high negative predictive values (>0.98), reliably eliminating the vast majority of patients without SSI, thereby significantly reducing the NSQIP extractors’ burden.",TRUE,noun phrase
R11,Science,R70614,Maximizing Interpretability and Cost-Effectiveness of Surgical Site Infection (SSI) Predictive Models Using Feature-Specific Regularized Logistic Regression on Preoperative Temporal Data,S336104,R70615,Infection,L242852,Surgical Site Infection,"This study describes a novel approach to solve the surgical site infection (SSI) classification problem. Feature engineering has traditionally been one of the most important steps in solving complex classification problems, especially in cases with temporal data. The described novel approach is based on abstraction of temporal data recorded in three temporal windows. Maximum likelihood L1-norm (lasso) regularization was used in penalized logistic regression to predict the onset of surgical site infection occurrence based on available patient blood testing results up to the day of surgery. Prior knowledge of predictors (blood tests) was integrated in the modelling by introduction of penalty factors depending on blood test prices and an early stopping parameter limiting the maximum number of selected features used in predictive modelling. Finally, solutions resulting in higher interpretability and cost-effectiveness were demonstrated. Using repeated holdout cross-validation, the baseline C-reactive protein (CRP) classifier achieved a mean AUC of 0.801, whereas our best full lasso model achieved a mean AUC of 0.956. Best model testing results were achieved for full lasso model with maximum number of features limited at 20 features with an AUC of 0.967. Presented models showed the potential to not only support domain experts in their decision making but could also prove invaluable for improvement in prediction of SSI occurrence, which may even help setting new guidelines in the field of preoperative SSI prevention and surveillance.",TRUE,noun phrase
R11,Science,R70616,An Unsupervised Multivariate Time Series Kernel Approach for Identifying Patients with Surgical Site Infection from Blood Samples,S336116,R70617,Infection,L242862,Surgical Site Infection,"A large fraction of the electronic health records consists of clinical measurements collected over time, such as blood tests, which provide important information about the health status of a patient. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and the presence of missing data, which complicate analysis. In this work, we propose a surgical site infection detection framework for patients undergoing colorectal cancer surgery that is completely unsupervised, hence alleviating the problem of getting access to labelled training data. The framework is based on powerful kernels for multivariate time series that account for missing data when computing similarities. Our approach show superior performance compared to baselines that have to resort to imputation techniques and performs comparable to a supervised classification baseline.",TRUE,noun phrase
R11,Science,R70618,A diagnostic algorithm for the surveillance of deep surgical site infections after colorectal surgery,S336128,R70619,Infection,L242872,Surgical Site Infection,"Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012–2015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932–0.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.",TRUE,noun phrase
R11,Science,R70622,Improving Prediction of Surgical Site Infection Risk with Multilevel Modeling,S336154,R70623,Infection,L242894,Surgical Site Infection,"Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure. Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10−9), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07]). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10−9), with an area under the ROC curve of 0.84. Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix).",TRUE,noun phrase
R11,Science,R70624,Predictive Modeling of Surgical Site Infections Using Sparse Laboratory Data,S336166,R70625,Infection,L242904,Surgical Site Infection,"As part of a data mining competition, a training and test set of laboratory test data about patients with and without surgical site infection (SSI) were provided. The task was to develop predictive models with training set and identify patients with SSI in the no label test set. Lab test results are vital resources that guide healthcare providers make decisions about all aspects of surgical patient management. Many machine learning models were developed after pre-processing and imputing the lab tests data and only the top performing methods are discussed. Overall, RANDOM FOREST algorithms performed better than Support Vector Machine and Logistic Regression. Using a set of 74 lab tests, with RF, there were only 4 false positives in the training set and predicted 35 out of 50 SSI patients in the test set (Accuracy 0.86, Sensitivity 0.68, and Specificity 0.91). Optimal ways to address healthcare data quality concerns and imputation methods as well as newer generalizable algorithms need to be explored further to decipher new associations and knowledge among laboratory biomarkers and SSI.",TRUE,noun phrase
R11,Science,R70626,Data-driven Temporal Prediction of Surgical Site Infection,S336180,R70627,Infection,L242916,Surgical Site Infection,"Analysis of data from Electronic Health Records (EHR) presents unique challenges, in particular regarding nonuniform temporal resolution of longitudinal variables. A considerable amount of patient information is available in the EHR - including blood tests that are performed routinely during inpatient follow-up. These data are useful for the design of advanced machine learning-based methods and prediction models. Using a matched cohort of patients undergoing gastrointestinal surgery (101 cases and 904 controls), we built a prediction model for post-operative surgical site infections (SSIs) using Gaussian process (GP) regression, time warping and imputation methods to manage the sparsity of the data source, and support vector machines for classification. For most blood tests, wider confidence intervals after imputation were obtained in patients with SSI. Predictive performance with individual blood tests was maintained or improved by joint model prediction, and non-linear classifiers performed consistently better than linear models.",TRUE,noun phrase
R11,Science,R70628,Classification of postoperative surgical site infections from blood measurements with missing data using recurrent neural networks,S336192,R70629,Infection,L242926,Surgical Site Infection,"Clinical measurements that can be represented as time series constitute an important fraction of the electronic health records and are often both uncertain and incomplete. Recurrent neural networks are a special class of neural networks that are particularly suitable to process time series data but, in their original formulation, cannot explicitly deal with missing data. In this paper, we explore imputation strategies for handling missing values in classifiers based on recurrent neural network (RNN) and apply a recently proposed recurrent architecture, the Gated Recurrent Unit with Decay, specifically designed to handle missing data. We focus on the problem of detecting surgical site infection in patients by analyzing time series of their blood sample measurements and we compare the results obtained with different RNN-based classifiers.",TRUE,noun phrase
R11,Science,R33486,Understanding the Success Factors of Sustainable Supply Chain Management: Empirical Evidence from the Electrics and Electronics Industry,S115731,R33487,SCM field,R33484,Sustainable supply chain,"Recent studies have reported that organizations are often unable to identify the key success factors of Sustainable Supply Chain Management (SSCM) and to understand their implications for management practice. For this reason, the implementation of SSCM often does not result in noticeable benefits. So far, research has failed to offer any explanations for this discrepancy. In view of this fact, our study aims at identifying and analyzing the factors that underlie successful SSCM. Success factors are identified by means of a systematic literature review and are then integrated into an explanatory model. Consequently, the proposed success factor model is tested on the basis of an empirical study focusing on recycling networks of the electrics and electronics industry. We found that signaling, information provision and the adoption of standards are crucial preconditions for strategy commitment, mutual learning, the establishment of ecological cycles and hence for the overall success of SSCM. Copyright © 2011 John Wiley & Sons, Ltd and ERP Environment.",TRUE,noun phrase
R11,Science,R33205,An Exploratory Study of the Success Factors for Extranet Adoption in E-Supply Chain,S115203,R33206,Critical success factors,R33201,System quality,"Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.",TRUE,noun phrase
R11,Science,R33348,Critical success factors for B2B e‐commerce use within the UK NHS pharmaceutical supply chain,S115461,R33349,Critical success factors,R33201,System quality,"Purpose – The purpose of this paper is to determine those factors perceived by users to influence the successful on‐going use of e‐commerce systems in business‐to‐business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach – Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e‐commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings – The paper yields five composite factors that are perceived by users to influence successful e‐commerce use. “System quality,” “information quality,” “management and use,” “world wide web – assurance and empathy,” and “trust” are proposed as potentia...",TRUE,noun phrase
R11,Science,R33406,A study of supplier selection factors for high-tech industries in the supply chain,S115561,R33407,Critical success factors,R33400,technological capability,"Amid the intensive competition among global industries, the relationship between manufacturers and suppliers has turned from antagonist to cooperative. Through partnerships, both parties can be mutually benefited, and the key factor that maintains such relationship lies in how manufacturers select proper suppliers. The purpose of this study is to explore the key factors considered by manufacturers in supplier selection and the relationships between these factors. Through a literature review, eight supplier selection factors, comprising price response capability, quality management capability, technological capability, delivery capability, flexible capability, management capability, commercial image, and financial capability are derived. Based on the theoretic foundation proposed by previous researchers, a causal model of supplier selection factors is further constructed. The results of a survey on high-tech industries are used to verify the relationships between the eight factors using structural equation modelling (SEM). Based on the empirical results, conclusions and suggestions are finally proposed as a reference for manufacturers and suppliers.",TRUE,noun phrase
R11,Science,R33933,Automatic Test Data Generation Based on Ant Colony Optimization,S117633,R33934,Fields,R33865,Test Data Generation,Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because of it performs higher error coverage. This paper presents a model of generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm has a better performance than other two algorithms and improve the efficiency of test data generation notably.,TRUE,noun phrase
R11,Science,R29841,"An Econometric Analysis for CO2 Emissions, Energy Consumption, Economic Growth, Foreign Trade and Urbanization of Japan",S99018,R29842,Type of data,R29654,Time series,"This paper examines the dynamic causal relationship between carbon dioxide emissions, energy consumption, economic growth, foreign trade and urbanization using time series data for the period of 1960-2009. Short-run unidirectional causalities are found from energy consumption and trade openness to carbon dioxide emissions, from trade openness to energy consumption, from carbon dioxide emissions to economic growth, and from economic growth to trade openness. The test results also support the evidence of existence of long-run relationship among the variables in the form of Equation (1) which also conform the results of bounds and Johansen conintegration tests. It is found that over time higher energy consumption in Japan gives rise to more carbon dioxide emissions as a result the environment will be polluted more. But in respect of economic growth, trade openness and urbanization the environmental quality is found to be normal good in the long-run.",TRUE,noun phrase
R11,Science,R29843,"An econometric study of carbon dioxide (CO2) emissions, energy consumption, and economic growth of Pakistan",S99033,R29844,Type of data,R29654,Time series,"Purpose – The purpose of this paper is to examine the relationship among environmental pollution, economic growth and energy consumption per capita in the case of Pakistan. The per capital carbon dioxide (CO2) emission is used as the environmental indicator, the commercial energy use per capita as the energy consumption indicator, and the per capita gross domestic product (GDP) as the economic indicator.Design/methodology/approach – The investigation is made on the basis of the environmental Kuznets curve (EKC), using time series data from 1971 to 2006, by applying different econometric tools like ADF Unit Root Johansen Co‐integration VECM and Granger causality tests.Findings – The Granger causality test shows that there is a long term relationship between these three indicators, with bidirectional causality between per capita CO2 emission and per capita energy consumption. A monotonically increasing curve between GDP and CO2 emission has been found for the sample period, rejecting the EKC relationship, i...",TRUE,noun phrase
R11,Science,R30088,Environmental Kuznets curve in an open economy: a bounds testing and causality analysis for Tunisia,S99735,R30089,Type of data,R29654,Time series,"The aim of this paper is to investigate the existence of environmental Kuznets curve (EKC) in an open economy like Tunisia using annual time series data for the period of 1971-2010. The ARDL bounds testing approach to cointegration is applied to test long run relationship in the presence of structural breaks and vector error correction model (VECM) to detect the causality among the variables. The robustness of causality analysis has been tested by applying the innovative accounting approach (IAA). The findings of this paper confirmed the long run relationship between economic growth, energy consumption, trade openness and CO2 emissions in Tunisian Economy. The results also indicated the existence of EKC confirmed by the VECM and IAA approaches. The study has significant contribution for policy implications to curtail energy pollutants by implementing environment friendly regulations to sustain the economic development in Tunisia.",TRUE,noun phrase
R11,Science,R33189,Critical success factors of web-based supply-chain management systems: an exploratory study,S115174,R33190,Critical success factors,R33130,Top management commitment,"This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. The findings of the results provide insights for companies using or planning to use WSCMS.",TRUE,noun phrase
R11,Science,R33521,Evaluating the critical success factors of supplier development: a case study,S115783,R33522,Critical success factors,R33130,Top management commitment,"Purpose – The purpose of this paper is to identify and evaluate the critical success factors (CSFs) responsible for supplier development (SD) in a manufacturing supply chain environment.Design/methodology/approach – In total, 13 CSFs for SD are identified (i.e. long‐term strategic goal; top management commitment; incentives; supplier's supplier condition; proximity to manufacturing base; supplier certification; innovation capability; information sharing; environmental readiness; external environment; project completion experience; supplier status and direct involvement) through extensive literature review and discussion held with managers/engineers in different Indian manufacturing companies. A fuzzy analytic hierarchy process (FAHP) is proposed and developed to evaluate the degree of impact of each CSF on SD.Findings – The degree of impact for each CSF on SD is established for an Indian company. The results are discussed in detail with managerial implications. The long‐term strategic goal is found to be ...",TRUE,noun phrase
R11,Science,R33534,Application of critical success factors in supply chain management,S115820,R33535,Critical success factors,R33099,top management support,"This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.",TRUE,noun phrase
R11,Science,R34519,Measuring Topological Anonymity in Social Networks,S120479,R34570,Anonymistion algorithm/method,R34518,Topological anonymity,"While privacy preservation of data mining approaches has been an important topic for a number of years, privacy of social network data is a relatively new area of interest. Previous research has shown that anonymization alone may not be sufficient for hiding identity information on certain real world data sets. In this paper, we focus on understanding the impact of network topology and node substructure on the level of anonymity present in the network. We present a new measure, topological anonymity, that quantifies the amount of privacy preserved in different topological structures. The measure uses a combination of known social network metrics and attempts to identify when node and edge inference breeches arise in these graphs.",TRUE,noun phrase
R11,Science,R25531,A DSL for rapid prototyping of cross-platform tower defense games,S76888,R25532,Game Genres,L48062,Tower Defense Games,"Because of the increasing expansion of the videogame industry, shorten videogame time to market for diverse platforms (e.g, Mac, android, iOS, BlackBerry) is a quest. This paper presents how a Domain Specific Language (DSL) in conjunction with Model-Driven Engineering (MDE) techniques can automate the development of games, in particular, tower defense games such as Plants vs. Zombies. The DSL allows the expression of structural and behavioral aspects of tower defense games. The MDE techniques allow us to generate code from the game expressed in the DSL. The generated code is written in an existing open source language that leverages the portability of the games. We present our approach using an example so-called Space Attack. The example shows the significant benefits offered by our proposal in terms of productivity and portability.",TRUE,noun phrase
R11,Science,R33189,Critical success factors of web-based supply-chain management systems: an exploratory study,S115175,R33190,Critical success factors,R33186,training and education,"This paper reports the results of a survey on the critical success factors (CSFs) of web-based supply-chain management systems (WSCMS). An empirical study was conducted and an exploratory factor analysis of the survey data revealed five major dimensions of the CSFs for WSCMS implementation, namely (1) communication, (2) top management commitment, (3) data security, (4) training and education, and (5) hardware and software reliability. The findings of the results provide insights for companies using or planning to use WSCMS.",TRUE,noun phrase
R11,Science,R28919,Simulation analysis of dispatching rules for an automated interbay material handling system in wafer fab,S95479,R28920,Objective function(s),R28916,Transport time,"Here, the performance evaluation of a double-loop interbay automated material handling system (AMHS) in wafer fab was analysed by considering the effects of the dispatching rules. Discrete event simulation models based on SIMPLE++ were developed to implement the heuristic dispatching rules in such an AMHS system with a zone control scheme to avoid vehicle collision. The layout of an interbay system is a combination configuration in which the hallway contains double loops and the vehicles have double capacity. The results show that the dispatching rule has a significant impact on average transport time, waiting time, throughput and vehicle utilization. The combination of the shortest distance with nearest vehicle and the first encounter first served rule outperformed the other rules. Furthermore, the relationship between vehicle number and material flow rate by experimenting with a simulation model was investigated. The optimum combination of these two factors can be obtained by response surface methodology.",TRUE,noun phrase
R11,Science,R26929,A Decision Support System for Fleet Management: A Linear Programming Approach,S86511,R26930,Industry,R26928,Van lines,"This paper describes a successful implementation of a decision support system that is used by the fleet management division at North American Van Lines to plan fleet configuration. At the heart of the system is a large linear programming (LP) model that helps management decide what type of tractors to sell to owner/operators or to trade in each week. The system is used to answer a wide variety of “What if” questions, many of which have significant financial impact.",TRUE,noun phrase
R11,Science,R25981,Document image segmentation and text area ordering,S80420,R26009,Application Domain,L50805,various documents,"A system for document image segmentation and ordering text areas is described and applied to both Japanese and English complex printed page layouts. There is no need to make any assumption about the shape of blocks, hence the segmentation technique can handle not only skewed images without skew-correction but also documents where columns are not rectangular. In this technique, on the bottom-up strategy, the connected components are extracted from the reduced image, and classified according to their local information. The connected components are merged into lines, and lines are merged into areas. Extracted text areas are classified as body, caption, header, and footer. A tree graph of the layout of body texts is made, and we get the order of texts by preorder traversal on the graph. The authors introduce the influence range of each node, a procedure for the title part, and extraction of the white horizontal separator. Making it possible to get good results on various documents. The total system is fast and compact.",TRUE,noun phrase
R11,Science,R34581,Edge Anonymity in Social Network Graphs,S120510,R34582,Background information,R34500,Vertex degree,"Edges in social network graphs may represent sensitive relationships. In this paper, we consider the problem of edges anonymity in graphs. We propose a probabilistic notion of edge anonymity, called graph confidence, which is general enough to capture the privacy breach made by an adversary who can pinpoint target persons in a graph partition based on any given set of topological features of vertexes. We consider a special type of edge anonymity problem which uses vertex degree to partition a graph. We analyze edge disclosure in real-world social networks and show that although some graphs can preserve vertex anonymity, they may still not preserve edge anonymity. We present three heuristic algorithms that protect edge anonymity using edge swap or edge deletion. Our experimental results, based on three real-world social networks and several utility measures, show that these algorithms can effectively preserve edge anonymity yet obtain anonymous graphs of acceptable utility.",TRUE,noun phrase
R11,Science,R32687,Maritime situation awareness capabilities from satellite and terrestrial sensor systems,S111579,R32688,Main purpose,R32545,Vessel detection,"Maritime situation awareness is supported by a combination of satellite, airborne, and terrestrial sensor systems. This paper presents several solutions to process that sensor data into information that supports operator decisions. Examples are vessel detection algorithms based on multispectral image techniques in combination with background subtraction, feature extraction techniques that estimate the vessel length to support vessel classification, and data fusion techniques to combine image based information, detections from coastal radar, and reports from cooperative systems such as (satellite) AIS. Other processing solutions include persistent tracking techniques that go beyond kinematic tracking, and include environmental information from navigation charts, and if available, ELINT reports. And finally rule-based and statistical solutions for the behavioural analysis of anomalous vessels. With that, trends and future work will be presented.",TRUE,noun phrase
R11,Science,R32694,NEAR REAL-TIME AUTOMATIC MARINE VESSEL DETECTION ON OPTICAL SATELLITE IMAGES,S111627,R32695,Main purpose,R32545,Vessel detection,"Vessel monitoring and surveillance is important for maritime safety and security, environment protection and border control. Ship monitoring systems based on Synthetic-aperture Radar (SAR) satellite images are operational. On SAR images the ships made of metal with sharp edges appear as bright dots and edges, therefore they can be well distinguished from the water. Since the radar is independent from the sun light and can acquire images also by cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend the SAR based systems by providing more frequent revisit times and overcoming some drawbacks of the SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution thus enabling the detection of smaller vessels and enhancing the vessel type classification. The human interpretation of an optical image is also easier than as of SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods, originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For the detection the classifier is slided along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages which leads to faster execution. The detections are grouped together to avoid multiple detections. Finally the position, size (i.e. length and width) and heading of the vessels is extracted from the contours of the vessel. The presented method is parallelized, thus it runs fast (in minutes for 16000 × 16000 pixels image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.",TRUE,noun phrase
R11,Science,R32726,Texture-based vessel classifier for electro-optical satellite imagery,S111853,R32727,Main purpose,R32545,Vessel detection,"Satellite imagery provides a valuable source of information for maritime surveillance. The vast majority of the research regarding satellite imagery for maritime surveillance focuses on vessel detection and image enhancement, whilst vessel classification remains a largely unexplored research topic. This paper presents a vessel classifier for spaceborne electro-optical imagery based on a feature representative across all satellite imagery, texture. Local Binary Patterns were selected to represent vessels for their high distinctivity and low computational complexity. Considering vessels characteristic super-structure, the extracted vessel signatures are sub-divided in three sections bow, middle and stern. A hierarchical decision-level classification is proposed, analysing first each vessel section individually and then combining the results in the second stage. The proposed approach is evaluated with the electro-optical satellite image dataset presented in [1]. Experimental results reveal an accuracy of 85.64% across four vessel categories.",TRUE,noun phrase
R11,Science,R25555,Virtual worlds on demand? Model-driven development of javascript-based virtual world UI components for mobile apps,S77037,R25556,Game Genres,L48175,Virtual Worlds,"Virtual worlds and avatar-based interactive computer games are a hype among consumers and researchers for many years now. In recent years, such games on mobile devices also became increasingly important. However, most virtual worlds require the use of proprietary clients and authoring environments and lack portability, which limits their usefulness for targeting wider audiences like e.g. in consumer marketing or sales. Using mobile devices and client-side web technologies like i.e. JavaScript in combination with a more automatic generation of customer-specific virtual worlds could help to overcome these limitations. Here, model-driven software development (MDD) provides a promising approach for automating the creation of user interface (UI) components for games on mobile devices. Therefore, in this paper an approach is proposed for the model-driven generation of UI components for virtual worlds using JavaScript and the upcoming Famo.us framework. The feasibilty of the approach is evaluated by implementing a proof-of-concept scenario.",TRUE,noun phrase
R11,Science,R28919,Simulation analysis of dispatching rules for an automated interbay material handling system in wafer fab,S95480,R28920,Objective function(s),R28917,waiting time,"Here, the performance evaluation of a double-loop interbay automated material handling system (AMHS) in wafer fab was analysed by considering the effects of the dispatching rules. Discrete event simulation models based on SIMPLE++ were developed to implement the heuristic dispatching rules in such an AMHS system with a zone control scheme to avoid vehicle collision. The layout of an interbay system is a combination configuration in which the hallway contains double loops and the vehicles have double capacity. The results show that the dispatching rule has a significant impact on average transport time, waiting time, throughput and vehicle utilization. The combination of the shortest distance with nearest vehicle and the first encounter first served rule outperformed the other rules. Furthermore, the relationship between vehicle number and material flow rate by experimenting with a simulation model was investigated. The optimum combination of these two factors can be obtained by response surface methodology.",TRUE,noun phrase
R11,Science,R27347,Influence of the shot peening temperature on the relaxation behaviour of residual stresses during cyclic bending,S88243,R27348,Special Notes,R27345,Warm peening,"Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is not only caused by the higher compressive residual stresses induced but also due to an enlarged stability of these residual stresses during cyclic bending. This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures.",TRUE,noun phrase
R11,Science,R27359,"Residual Stress Relaxation and Fatigue Strength of AISI 4140 under Torsional Loading after Conventional Shot Peening, Stress Peening and Warm Peening",S88312,R27360,Special Notes,R27345,Warm peening,"Cylindrical rods of 450°C quenched and tempered AISI 4140 were conventionally shot peened, stress peened and warm peened while rotating in the peening device. Warm peening at Tpeen = 310°C was conducted using a modified air blast shot peening machine with an electric air flow heater system. To perform stress peening using a torsional pre-stress, a device was conceived which allowed rotating pre-stressed samples without having material of the pre-loading gadget between the shot and the samples. Thus, same peening conditions for all peening procedures were ensured. The residual stress distributions present after the different peening procedures were evaluated and compared with results obtained after peening of flat material of the same steel. The differently peened samples were subjected to torsional pulsating stresses (R = 0) at different loadings to investigate their residual stress relaxation behavior. Additionally, the pulsating torsional strengths for the differently peened samples were determined.",TRUE,noun phrase
R11,Science,R27362,Influence of Optimized Warm Peening on Residual Stress Stability and Fatigue Strength of AISI 4140 in Different Material States,S88326,R27363,Special Notes,R27345,Warm peening,"Using a modified air blasting machine warm peening at 20 °C < T ≤ 410 °C was feasible. An optimized peening temperature of about 310 °C was identified for a 450 °C quenched and tempered steel AISI 4140. Warm peening was also investigated for a normalized, a 650 °C quenched and tempered, and a martensitically hardened material state. The quasi static surface compressive yield strengths as well as the cyclic surface yield strengths were determined from residual stress relaxation tests conducted at different stress amplitudes and numbers of loading cycles. Dynamic and static strain aging effects acting during and after warm peening clearly increased the residual stress stability and the alternating bending strength for all material states.",TRUE,noun phrase
R11,Science,R151238,Social Media and Emergency Management: Exploring State and Local Tweets,S626459,R156068,Emergency Type,L431166,weather-related events,"Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.",TRUE,noun phrase
R11,Science,R25404,A Reliability Evaluation Framework on Composite Web Service,S76116,R25405,Area of use,R25402,web service,The composition of web-based services is a process that usually requires advanced programming skills and vast knowledge about specific technologies. How to carry out web service composition according to functional sufficiency and performance is widely studied. Non-functional characteristics like reliability and security play an important role in the selection of web services composition process. This paper provides a web service reliability model for atomic web service without structural information and the composite web service consisting of atomic web service and its redundant services. It outlines a framework based on client feedback to gather trustworthiness attributes to service registry for reliability evaluation.,TRUE,noun phrase
R11,Science,R25426,An Enhance Approach For Web Services Discovery with QoS,S76210,R25427,Area of use,R25402,web service,"The Quality of Service for web services here mainly refers to the quality aspect of a web service. The QoS for web services is becoming increasingly important to service providers and service requesters due to increasing use of web services. Web services providing similar functionalities, more emphasis is being placed on how to find the service that best fits the consumer's requirements. In order to find services that best meet their QoS requirements, the service consumers and/or discovery agents need to know both the QoS information for the services and the reliability of this information. In this paper first of all we implement Reputation-Enhanced Web Services Discovery protocol. And after implementation we enhance the protocol over memory used, time to discovery and response time of given web service.",TRUE,noun phrase
R11,Science,R31212,MLGA: a multilevel cooperative genetic algorithm,S104676,R31213,Recombination Within island,L62596,Within group,"This paper incorporate the multilevel selection (MLS) theory into the genetic algorithm. Based on this theory, a Multilevel Cooperative Genetic Algorithm (MLGA) is presented. In MLGA, a species is subdivided in a set of populations, each population is subdivided in groups, and evolution occurs at two levels so called individual and group level. A fast population dynamics occurs at individual level. At this level, selection occurs between individuals of the same group. The popular genetic operators such as mutation and crossover are applied within groups. A slow population dynamics occurs at group level. At this level, selection occurs between groups of a population. A group level operator so called colonization is applied between groups in which a group is selected as extinct, and replaced by offspring of a colonist group. We used a set of well known numerical functions in order to evaluate performance of the proposed algorithm. The results showed that the MLGA is robust, and provides an efficient way for numerical function optimization.",TRUE,noun phrase
R11,Science,R33205,An Exploratory Study of the Success Factors for Extranet Adoption in E-Supply Chain,S115206,R33206,Critical success factors,R33204,work performance quality,"Extranet is an enabler/system that enriches the information service quality in e-supply chain. This paper uses factor analysis to determine four extranet success factors: system quality, information quality, service quality, and work performance quality. A critical analysis of areas that require improvement is also conducted.",TRUE,noun phrase
R11,Science,R30733,Oral health status of workers exposed to acid fumes in phosphate and battery industries in Jordan,S102612,R30734,Study population,L61610,Workers exposed to acid fumes,"OBJECTIVES To investigate the prevalence and nature of oral health problems among workers exposed to acid fumes in two industries in Jordan. SETTING Jordan's Phosphate Mining Company and a main private battery factory. DESIGN Comparison of general and oral health conditions between workers exposed to acid fumes and control group from the same workplace. SUBJECTS AND METHODS The sample consisted of 68 subjects from the phosphate industry (37 acid workers and 31 controls) drawn as a sample of convenience and 39 subjects from a battery factory (24 acid workers and 15 controls). Structured questionnaires on medical and dental histories were completed by interview. Clinical examinations were carried out to assess dental erosion, oral hygiene, and gingival health using the appropriate indices. Data were statistically analysed using Wilcoxon rank-sum test to assess the significance of differences between results attained by acid workers and control groups for the investigated parameters. RESULTS Differences in the erosion scores between acid workers in both industries and their controls were highly significant (P<0.05). In both industries, acid workers showed significantly higher oral hygiene scores, obtained by adding the debris and calculus scores, and gingival index scores than their controls (P<0.05). The single most common complaint was tooth hypersensitivity (80%) followed by dry mouth (77%) on average. CONCLUSION Exposure to acid fumes in the work place was significantly associated with dental erosion and deteriorated oral health status. Such exposure was also detrimental to general health. Findings pointed to the need of establishing appropriate educational, preventive and treatment measures coupled with efficient surveillance and environmental monitoring for detection of acid fumes in the workplace atmosphere.",TRUE,noun phrase
R11,Science,R25728,A fast high utility itemsets mining algorithm,S78109,R25729,Algorithm name,L48906,Two-Phase,"Association rule mining (ARM) identifies frequent itemsets from databases and generates association rules by considering each item in equal value. However, items are actually different in many aspects in a number of real applications, such as retail marketing, network log, etc. The difference between items makes a strong impact on the decision making in these applications. Therefore, traditional ARM cannot meet the demands arising from these applications. By considering the different values of individual items as utilities, utility mining focuses on identifying the itemsets with high utilities. As ""downward closure property"" doesn't apply to utility mining, the generation of candidate itemsets is the most costly in terms of time and memory space. In this paper, we present a Two-Phase algorithm to efficiently prune down the number of candidates and can precisely obtain the complete set of high utility itemsets. In the first phase, we propose a model that applies the ""transaction-weighted downward closure property"" on the search space to expedite the identification of candidates. In the second phase, one extra database scan is performed to identify the high utility itemsets. We also parallelize our algorithm on shared memory multi-process architecture using Common Count Partitioned Database (CCPD) strategy. We verify our algorithm by applying it to both synthetic and real databases. It performs very efficiently in terms of speed and memory cost, and shows good scalability on multiple processors, even on large databases that are difficult for existing algorithms to handle.",TRUE,noun phrase
R11,Science,R26272,On the Effectiveness of Direct Shipping Strategy for the One-Warehouse Multi-Retailer R-Systems,S82217,R26273,approach,R26267,Lower bound,"We consider the problem of integrating inventory control and vehicle routing into a cost-effective strategy for a distribution system consisting of one depot and many geographically dispersed retailers. All stock enters the system through the depot and is distributed to the retailers by vehicles of limited constant capacity. We assume that each one of the retailers faces a constant, retailer specific, demand rate and that inventory is charged only at the retailers but not at the depot. We provide a lower bound on the long run average cost over all inventory-routing strategies. We use this lower bound to show that the effectiveness of direct shipping over all inventory-routing strategies is at least 94% whenever the Economic Lot Size of each of the retailers is at least 71% of vehicle capacity. The effectiveness deteriorates as the Economic Lot Sizes become smaller. These results are important because they provide useful guidelines as to when to embark into the much more difficult task of finding cost-effective routes. Additional advantages of direct shipping are lower in-transit inventory and ease of coordination.",TRUE,noun phrase
R11,Science,R26274,Two-echelon distribution systems with vehicle routing costs and central inventory,S82232,R26275,approach,R26267,Lower bound,"We consider distribution systems with a single depot and many retailers each of which faces external demands for a single item that occurs at a specific deterministic demand rate. All stock enters the systems through the depot where it can be stored and then picked up and distributed to the retailers by a fleet of vehicles, combining deliveries into efficient routes. We extend earlier methods for obtaining low complexity lower bounds and heuristics for systems without central stock. We show under mild probabilistic assumptions that the generated solutions and bounds come asymptotically within a few percentage points of optimality (within the considered class of strategies). A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5784,R5230,Material,R5250,a multidimensional statistical model,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5767,R5230,Data,R5233,a wide variety of disciplines,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5772,R5230,Data,R5238,"affiliation, ethnicity, collaboration size","It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5771,R5230,Data,R5237,an extensive set of features,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5776,R5230,Data,R5242,any gender,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5779,R5230,Data,R5245,"Author-ity, a version of PubMed","It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5780,R5230,Data,R5246,byline position,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5769,R5230,Data,R5235,computationally disambiguated author names,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5787,R5230,Material,R5253,papers by authors,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5770,R5230,Data,R5236,prior publication count,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5785,R5230,Material,R5251,productive authors,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5775,R5230,Data,R5241,publication type,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5774,R5230,Data,R5240,reference/citation counts,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5786,R5230,Material,R5252,similar venues,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5773,R5230,Data,R5239,subject-matter novelty,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5783,R5230,Material,R5249,the bibliographic database JSTOR,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5777,R5230,Data,R5243,their novel journal publications,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,noun phrase
R141823,Semantic Web,R182024,DistSim - Scalable Distributed in-Memory Semantic Similarity Estimation for RDF Knowledge Graphs,S704150,R182026,Scalability Framework,L475105,Apache Spark,"In this paper, we present DistSim, a Scalable Distributed in-Memory Semantic Similarity Estimation framework for Knowledge Graphs. DistSim provides a multitude of state-of-the-art similarity estimators. We have developed the Similarity Estimation Pipeline by combining generic software modules. For large scale RDF data, DistSim proposes MinHash with locality sensitivity hashing to achieve better scalability over all-pair similarity estimations. The modules of DistSim can be set up using a multitude of (hyper)-parameters allowing to adjust the tradeoff between information taken into account, and processing time. Furthermore, the output of the Similarity Estimation Pipeline is native RDF. DistSim is integrated into the SANSA stack, documented in scala-docs, and covered by unit tests. Additionally, the variables and provided methods follow the Apache Spark MLlib name-space conventions. The performance of DistSim was tested over a distributed cluster, for the dimensions of data set size and processing power versus processing time, which shows the scalability of DistSim w.r.t. increasing data set sizes and processing power. DistSim is already in use for solving several RDF data analytics related use cases. Additionally, DistSim is available and integrated into the open-source GitHub project SANSA.",TRUE,noun phrase
R141823,Semantic Web,R142323,Mapping ER Schemas to OWL Ontologies,S572049,R142325,Learning method,R142369,Automatic mapping,"As the Semantic Web initiative gains momentum, a fundamental problem of integrating existing data-intensive WWW applications into the Semantic Web emerges. In order for today’s relational database supported Web applications to transparently participate in the Semantic Web, their associated database schemas need to be converted into semantically equivalent ontologies. In this paper we present a solution to an important special case of the automatic mapping problem with wide applicability: mapping well-formed Entity-Relationship (ER) schemas to semantically equivalent OWL Lite ontologies. We present a set of mapping rules that fully capture the ER schema semantics, along with an overview of an implementation of the complete mapping algorithm integrated into the current SFSU ER Design Tools software.",TRUE,noun phrase
R141823,Semantic Web,R185271,Multimedia ontology learning for automatic annotation and video browsing,S709711,R185273,Learning method,R185283,Bayesian network,"In this work, we offer an approach to combine standard multimedia analysis techniques with knowledge drawn from conceptual metadata provided by domain experts of a specialized scholarly domain, to learn a domain-specific multimedia ontology from a set of annotated examples. A standard Bayesian network learning algorithm that learns structure and parameters of a Bayesian network is extended to include media observables in the learning. An expert group provides domain knowledge to construct a basic ontology of the domain as well as to annotate a set of training videos. These annotations help derive the associations between high-level semantic concepts of the domain and low-level MPEG-7 based features representing audio-visual content of the videos. We construct a more robust and refined version of this ontology by learning from this set of conceptually annotated videos. To encode this knowledge, we use MOWL, a multimedia extension of Web Ontology Language (OWL) which is capable of describing domain concepts in terms of their media properties and of capturing the inherent uncertainties involved. We use the ontology specified knowledge for recognizing concepts relevant to a video to annotate fresh addition to the video database with relevant concepts in the ontology. These conceptual annotations are used to create hyperlinks in the video collection, to provide an effective video browsing interface to the user.",TRUE,noun phrase
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601098,R149949,Terms learning,R149953,Conceptual entities,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun phrase
R141823,Semantic Web,R180001,A Deep Learning based Approach for Precise Video Tagging,S702045,R180014,Learning method,R180015,Convolutional Neural Network,"With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1- score of 96.2%.",TRUE,noun phrase
R141823,Semantic Web,R185335,Ontology Learning Process as a Bottom-up Strategy for Building Domain-specific Ontology from Legal Texts,S709844,R185337,Application Domain,R185340,Criminal system,"The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",TRUE,noun phrase
R141823,Semantic Web,R142443,From Glossaries to Ontologies: Extracting Semantic Structure from Textual Definitions,S572298,R142445,Application Domain,R139987,cultural heritage,"Learning ontologies requires the acquisition of relevant domain concepts and taxonomic, as well as non-taxonomic, relations. In this chapter, we present a methodology for automatic ontology enrichment and document annotation with concepts and relations of an existing domain core ontology. Natural language definitions from available glossaries in a given domain are processed and regular expressions are applied to identify general-purpose and domain-specific relations. We evaluate the methodology performance in extracting hypernymy and non-taxonomic relations. To this end, we annotated and formalized a relevant fragment of the glossary of Art and Architecture (AAT) with a set of 10 relations (plus the hypernymy relation) defined in the CRM CIDOC cultural heritage core ontology, a recent W3C standard. Finally, we assessed the generality of the approach on a set of web pages from the domains of history and biography.",TRUE,noun phrase
R141823,Semantic Web,R162600,DBpedia Archivo - A Web-Scale Interface for Ontology Archiving under Consumer-oriented Aspects,S648821,R162602,Has result,R162612,DBpedia Archivo,"Abstract While thousands of ontologies exist on the web, a unified system for handling online ontologies – in particular with respect to discovery, versioning, access, quality-control, mappings – has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given .",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572396,R142489,output,R142501,Disease ontology,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R141823,Semantic Web,R142380,A Method for Building Domain Ontologies based on the Transformation of UML Models,S572098,R142382,output,R28769,Domain ontology,"Ontologies are used in the integration of information resources by describing the semantics of the information sources with machine understandable terms and definitions. But, creating an ontology is a difficult and time-consuming process, especially in the early stage of extracting key concepts and relations. This paper proposes a method for domain ontology building by extracting ontological knowledge from UML models of existing systems. We compare the UML model elements with the OWL ones and derive transformation rules between the corresponding model elements. Based on these rules, we define an XSLT document which implements the transformation processes. We expect that the proposed method reduce the cost and time for building domain ontologies with the reuse of existing UML models",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572386,R142489,data source,R142498,Drug glossaries,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572397,R142489,output,R142502,Drug ontology,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R141823,Semantic Web,R143899,An Integrated Approach to Drive Ontological Structure from Folksonomie,S576085,R143901,Learning method,R143903,Fuzzy clustering,"Web 2.0 is an evolution toward a more social, interactive and collaborative web, where user is at the center of service in terms of publications and reactions. This transforms the user from his old status as a consumer to a new one as a producer. Folksonomies are one of the technologies of Web 2.0 that permit users to annotate resources on the Web. This is done by allowing users to use any keyword or tag that they find relevant. Although folksonomies require a context-independent and inter-subjective definition of meaning, many researchers have proven the existence of an implicit semantics in these unstructured data. In this paper, we propose an improvement of our previous approach to extract ontological structures from folksonomies. The major contributions of this paper are a Normalized Co-occurrences in Distinct Users (NCDU) similarity measure, and a new algorithm to define context of tags and detect ambiguous ones. We compared our similarity measure to a widely used method for identifying similar tags based on the cosine measure. We also compared the new algorithm with the Fuzzy Clustering Algorithm (FCM) used in our original approach. The evaluation shows promising results and emphasizes the advantage of our approach.",TRUE,noun phrase
R141823,Semantic Web,R180001,A Deep Learning based Approach for Precise Video Tagging,S702029,R180003,Learning purpose,R180011,Improved video tagging,"With the increase in smart devices and abundance of video contents, efficient techniques for the indexing, analysis and retrieval of videos are becoming more and more desirable. Improved indexing and automated analysis of millions of videos could be accomplished by getting videos tagged automatically. A lot of existing methods fail to precisely tag videos because of their lack of ability to capture the video context. The context in a video represents the interactions of objects in a scene and their overall meaning. In this work, we propose a novel approach that integrates the video scene ontology with CNN (Convolutional Neural Network) for improved video tagging. Our method captures the content of a video by extracting the information from individual key frames. The key frames are then fed to a CNN based deep learning model to train its parameters. The trained parameters are used to generate the most frequent tags. Highly frequent tags are used to summarize the input video. The proposed technique is benchmarked on the most widely used dataset of video activities, namely, UCF-101. Our method managed to achieve an overall accuracy of 99.8% with an F1- score of 96.2%.",TRUE,noun phrase
R141823,Semantic Web,R165795,Automatic Subject Indexing with Knowledge Graphs,S660830,R165797,Method,R165799,KINDEX approach,"Automatic subject indexing has been a longstanding goal of digital curators to facilitate effective retrieval access to large collections of both online and offline information resources. Controlled vocabularies are often used for this purpose, as they standardise annotation practices and help users to navigate online resources through following interlinked topical concepts. However, to this date, the assignment of suitable text annotations from a controlled vocabulary is still largely done manually, or at most (semi-)automatically, even though effective machine learning tools are already in place. This is because existing procedures require a sufficient amount of training data and they have to be adapted to each vocabulary, language and application domain anew. In this paper, we argue that there is a third solution to subject indexing which harnesses cross-domain knowledge graphs. Our KINDEX approach fuses distributed knowledge graph information from different sources. Experimental evaluation shows that the approach achieves good accuracy scores by exploiting correspondence links of publicly available knowledge graphs.",TRUE,noun phrase
R141823,Semantic Web,R182024,DistSim - Scalable Distributed in-Memory Semantic Similarity Estimation for RDF Knowledge Graphs,S704146,R182026,Data types,L475102,Knowledge Graph,"In this paper, we present DistSim, a Scalable Distributed in-Memory Semantic Similarity Estimation framework for Knowledge Graphs. DistSim provides a multitude of state-of-the-art similarity estimators. We have developed the Similarity Estimation Pipeline by combining generic software modules. For large scale RDF data, DistSim proposes MinHash with locality sensitivity hashing to achieve better scalability over all-pair similarity estimations. The modules of DistSim can be set up using a multitude of (hyper)-parameters allowing to adjust the tradeoff between information taken into account, and processing time. Furthermore, the output of the Similarity Estimation Pipeline is native RDF. DistSim is integrated into the SANSA stack, documented in scala-docs, and covered by unit tests. Additionally, the variables and provided methods follow the Apache Spark MLlib name-space conventions. The performance of DistSim was tested over a distributed cluster, for the dimensions of data set size and processing power versus processing time, which shows the scalability of DistSim w.r.t. increasing data set sizes and processing power. DistSim is already in use for solving several RDF data analytics related use cases. Additionally, DistSim is available and integrated into the open-source GitHub project SANSA.",TRUE,noun phrase
R141823,Semantic Web,R149916,Image domain ontology fusion approach using multi-level inference mechanism,S601295,R149918,Learning method,R150002,Latent semantic analysis,"One of the main challenges in content-based or semantic image retrieval is still to bridge the gap between low-level features and semantic information. In this paper, An approach is presented using integrated multi-level image features in ontology fusion construction by a fusion framework, which based on the latent semantic analysis. The proposed method promotes images ontology fusion efficiently and broadens the application fields of image ontology retrieval system. The relevant experiment shows that this method ameliorates the problem, such as too many redundant data and relations, in the traditional ontology system construction, as well as improves the performance of semantic images retrieval.",TRUE,noun phrase
R141823,Semantic Web,R185335,Ontology Learning Process as a Bottom-up Strategy for Building Domain-specific Ontology from Legal Texts,S709843,R185337,Application Domain,R185339,Lebanese criminal system,"The objective of this paper is to present the role of Ontology Learning Process in supporting an ontology engineer for creating and maintaining ontologies from textual resources. The knowledge structures that interest us are legal domain-specific ontologies. We will use these ontologies to build legal domain ontology for a Lebanese legal knowledge based system. The domain application of this work is the Lebanese criminal system. Ontologies can be learnt from various sources, such as databases, structured and unstructured documents. Here, the focus is on the acquisition of ontologies from unstructured text, provided as input. In this work, the Ontology Learning Process represents a knowledge extraction phase using Natural Language Processing techniques. The resulted ontology is considered as inexpressive ontology. There is a need to reengineer it in order to build a complete, correct and more expressive domain-specific ontology.",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572381,R142489,Learning method,R142493,Linguistic patterns,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R141823,Semantic Web,R142323,Mapping ER Schemas to OWL Ontologies,S572050,R142325,Learning method,R10019,mapping rules,"As the Semantic Web initiative gains momentum, a fundamental problem of integrating existing data-intensive WWW applications into the Semantic Web emerges. In order for today’s relational database supported Web applications to transparently participate in the Semantic Web, their associated database schemas need to be converted into semantically equivalent ontologies. In this paper we present a solution to an important special case of the automatic mapping problem with wide applicability: mapping well-formed Entity-Relationship (ER) schemas to semantically equivalent OWL Lite ontologies. We present a set of mapping rules that fully capture the ER schema semantics, along with an overview of an implementation of the complete mapping algorithm integrated into the current SFSU ER Design Tools software.",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572383,R142489,data source,R142495,Medical glossaries,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R141823,Semantic Web,R149947,Image based mammographie ontology learning,S601103,R149949,Learning purpose,R149632,Ontology construction,"Understanding the content of an image is one of the challenges in the image processing field. Recently, the Content Based Image Retrieval (CBIR) and especially Semantic Content Based Image Retrieval (SCBIR) are the main goal of many research works. In medical field, understanding the content of an image is very helpful in the automatic decision making. In fact, analyzing the semantic information in an image support can assist the doctor to make the adequate diagnosis. This paper presents a new method for mammographic ontology learning from a set of mammographic images. The approach is based on four main modules: (1) the mammography segmentation, (2) the features extraction (3) the local ontology modeling and (4) the global ontology construction basing on merging the local ones. The first module allows detecting the pathological regions in the represented breast. The second module consists on extracting the most important features from the pathological zones. The third module allows modeling a local ontology by representing the pertinent entities (conceptual entities) as well as their correspondent features (shape, size, form, etc.) discovered in the previous step. The last module consists on merging the local ontologies extracted from a set of mammographies in order to obtain a global and exhaustive one. Our approach attempts to fully describe the semantic content of mammographic images in order to perform the domain knowledge modeling.",TRUE,noun phrase
R141823,Semantic Web,R185349,The Ontology Extraction & Maintenance Framework Text-To-Onto,S709894,R185350,Learning purpose,R149632,Ontology construction,"Ontologies play an increasingly important role in Knowledge Management. One of the main problems associated with ontologies is that they need to be constructed and maintained. Manual construction of larger ontologies is usually not feasible within companies because of the effort and costs required. Therefore, a semi-automatic approach to ontology construction and maintenance is what everybody is wishing for. The paper presents a framework for semi-automatically learning ontologies from domainspecific texts by applying machine learning techniques. The TEXT-TO-ONTO framework integrates manual engineering facilities to follow a balanced cooperative modelling paradigm.",TRUE,noun phrase
R141823,Semantic Web,R142443,From Glossaries to Ontologies: Extracting Semantic Structure from Textual Definitions,S572308,R142445,Learning purpose,R139412,Ontology enrichment,"Learning ontologies requires the acquisition of relevant domain concepts and taxonomic, as well as non-taxonomic, relations. In this chapter, we present a methodology for automatic ontology enrichment and document annotation with concepts and relations of an existing domain core ontology. Natural language definitions from available glossaries in a given domain are processed and regular expressions are applied to identify general-purpose and domain-specific relations. We evaluate the methodology performance in extracting hypernymy and non-taxonomic relations. To this end, we annotated and formalized a relevant fragment of the glossary of Art and Architecture (AAT) with a set of 10 relations (plus the hypernymy relation) defined in the CRM CIDOC cultural heritage core ontology, a recent W3C standard. Finally, we assessed the generality of the approach on a set of web pages from the domains of history and biography.",TRUE,noun phrase
R141823,Semantic Web,R149916,Image domain ontology fusion approach using multi-level inference mechanism,S601296,R149918,Learning method,R150003,Ontology fusion,"One of the main challenges in content-based or semantic image retrieval is still to bridge the gap between low-level features and semantic information. In this paper, An approach is presented using integrated multi-level image features in ontology fusion construction by a fusion framework, which based on the latent semantic analysis. The proposed method promotes images ontology fusion efficiently and broadens the application fields of image ontology retrieval system. The relevant experiment shows that this method ameliorates the problem, such as too many redundant data and relations, in the traditional ontology system construction, as well as improves the performance of semantic images retrieval.",TRUE,noun phrase
R141823,Semantic Web,R142443,From Glossaries to Ontologies: Extracting Semantic Structure from Textual Definitions,S572294,R142445,Learning method,R142451,Regular expression,"Learning ontologies requires the acquisition of relevant domain concepts and taxonomic, as well as non-taxonomic, relations. In this chapter, we present a methodology for automatic ontology enrichment and document annotation with concepts and relations of an existing domain core ontology. Natural language definitions from available glossaries in a given domain are processed and regular expressions are applied to identify general-purpose and domain-specific relations. We evaluate the methodology performance in extracting hypernymy and non-taxonomic relations. To this end, we annotated and formalized a relevant fragment of the glossary of Art and Architecture (AAT) with a set of 10 relations (plus the hypernymy relation) defined in the CRM CIDOC cultural heritage core ontology, a recent W3C standard. Finally, we assessed the generality of the approach on a set of web pages from the domains of history and biography.",TRUE,noun phrase
R141823,Semantic Web,R187232,Explainable cyber-physical energy systems based on knowledge graph,S716212,R187234,Application Domain,R45062,Smart Grids,"Explainability can help cyber-physical systems alleviating risk in automating decisions that are affecting our life. Building an explainable cyber-physical system requires deriving explanations from system events and causality between the system elements. Cyber-physical energy systems such as smart grids involve cyber and physical aspects of energy systems and other elements, namely social and economic. Moreover, a smart-grid scale can range from a small village to a large region across countries. Therefore, integrating these varieties of data and knowledge is a fundamental challenge to build an explainable cyber-physical energy system. This paper aims to use knowledge graph based framework to solve this challenge. The framework consists of an ontology to model and link data from various sources and graph-based algorithm to derive explanations from the events. A simulated demand response scenario covering the above aspects further demonstrates the applicability of this framework.",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572380,R142489,Relationship learning,R142492,Taxonomic relations,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R141823,Semantic Web,R142380,A Method for Building Domain Ontologies based on the Transformation of UML Models,S572099,R142382,Learning method,R10018,transformation rules,"Ontologies are used in the integration of information resources by describing the semantics of the information sources with machine understandable terms and definitions. But, creating an ontology is a difficult and time-consuming process, especially in the early stage of extracting key concepts and relations. This paper proposes a method for domain ontology building by extracting ontological knowledge from UML models of existing systems. We compare the UML model elements with the OWL ones and derive transformation rules between the corresponding model elements. Based on these rules, we define an XSLT document which implements the transformation processes. We expect that the proposed method reduce the cost and time for building domain ontologies with the reuse of existing UML models",TRUE,noun phrase
R141823,Semantic Web,R142705,Ontology Learning from Thesauri: An Experience in the Urban Domain,S573349,R142707,Application Domain,R142708,Urban domain,"Ontology learning is the term used to encompass methods and techniques employed for the (semi-)automatic processing of knowledge resources that facilitate the acquisition of knowledge during ontology construction. This chapter focuses on ontology learning techniques using thesauri as input sources. Thesauri are one of the most promising sources for the creation of domain ontologies thanks to the richness of term definitions, the existence of a priori relationships between terms, and the consensus provided by their extensive use in the library context. Apart from reviewing the state of the art, this chapter shows how ontology learning techniques can be applied in the urban domain for the development of domain ontologies.",TRUE,noun phrase
R141823,Semantic Web,R142487,Medical Ontology Learning Based on Web Resources,S572384,R142489,data source,R142496,Web resources,"In order to deal with heterogeneous knowledge in the medical field, this paper proposes a method which can learn a heavy-weighted medical ontology based on medical glossaries and Web resources. Firstly, terms and taxonomic relations are extracted based on disease and drug glossaries and a light-weighted ontology is constructed, Secondly, non-taxonomic relations are automatically learned from Web resources with linguistic patterns, and the two ontologies (disease and drug) are expanded from light-weighted level towards heavy-weighted level, At last, the disease ontology and drug ontology are integrated to create a practical medical ontology. Experiment shows that this method can integrate and expand medical terms with taxonomic and different kinds of non-taxonomic relations. Our experiments show that the performance is promising.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338104,R71590,Material,R71592,"[A = Cs+, CH3NH3","Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338105,R71590,Material,R71593,+ (methylammonium or MA+) or,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R135948,Application of ALD-Al2O3 in CdS/CdTe Thin-Film Solar Cells,S538231,R135950,keywords,L379315,atomic layer deposition,"The application of thinner cadmium sulfide (CdS) window layer is a feasible approach to improve the performance of cadmium telluride (CdTe) thin film solar cells. However, the reduction of compactness and continuity of thinner CdS always deteriorates the device performance. In this work, transparent Al2O3 films with different thicknesses, deposited by using atomic layer deposition (ALD), were utilized as buffer layers between the front electrode transparent conductive oxide (TCO) and CdS layers to solve this problem, and then, thin-film solar cells with a structure of TCO/Al2O3/CdS/CdTe/BC/Ni were fabricated. The characteristics of the ALD-Al2O3 films were studied by UV–visible transmittance spectrum, Raman spectroscopy, and atomic force microscopy (AFM). The light and dark J–V performances of solar cells were also measured by specific instrumentations. The transmittance measurement conducted on the TCO/Al2O3 films verified that the transmittance of TCO/Al2O3 were comparable to that of single TCO layer, meaning that no extra absorption loss occurred when Al2O3 buffer layers were introduced into cells. Furthermore, due to the advantages of the ALD method, the ALD-Al2O3 buffer layers formed an extremely continuous and uniform coverage on the substrates to effectively fill and block the tiny leakage channels in CdS/CdTe polycrystalline films and improve the characteristics of the interface between TCO and CdS. However, as the thickness of alumina increased, the negative effects of cells were gradually exposed, especially the increase of the series resistance (Rs) and the more serious “roll-over” phenomenon. Finally, the cell conversion efficiency (η) of more than 13.0% accompanied by optimized uniformity performances was successfully achieved corresponding to the 10 nm thick ALD-Al2O3 thin film.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338122,R71590,Material,R71610,colloidal state,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338072,R71567,Material,R71576,Compact TiO2 and Spiro-OMeTAD,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338119,R71590,Material,R71607,CsPbBr3 NCs,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338117,R71590,Material,R71605,cubic crystal structure,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71658,Interfacial Control Toward Efficient and Low-Voltage Perovskite Light-Emitting Diodes,S338235,R71659,Material,R71662,efficient near-infrared devices,"High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338073,R71567,Material,R71577,electron and hole injecting layers,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338124,R71590,Material,R71612,FA0.1Cs0.9PbI3 and FAPbI3 NCs,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338118,R71590,Material,R71606,FA0.1Cs0.9PbI3 NCs,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338114,R71590,Material,R71602,FAPbI3 and FA-doped CsPbI3 NCs,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338116,R71590,Material,R71604,FAPbI3 NCs,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R137466,Comparison of heterojunction device parameters for pure and doped ZnO thin films with IIIA (Al or In) elements grown on silicon at room ambient,S544059,R137468,keywords,L383145,Heterojunction parameters,"In this work, pure and IIIA element doped ZnO thin films were grown on p type silicon (Si) with (100) orientated surface by sol-gel method, and were characterized for comparing their electrical characteristics. The heterojunction parameters were obtained from the current-voltage (I-V) and capacitance-voltage (C-V) characteristics at room temperature. The ideality factor (n), saturation current (Io) and junction resistance of ZnO/p-Si heterojunction for both pure and doped (with Al or In) cases were determined by using different methods at room ambient. Other electrical parameters such as Fermi energy level (EF), barrier height (ΦB), acceptor concentration (Na), built-in potential (Φi) and voltage dependence of surface states (Nss) profile were obtained from the C-V measurements. The results reveal that doping ZnO with IIIA (Al or In) elements to fabricate n-ZnO/p-Si heterojunction can result in high performance diode characteristics.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71658,Interfacial Control Toward Efficient and Low-Voltage Perovskite Light-Emitting Diodes,S338233,R71659,Data,R71660,high brightness conditions,"High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338107,R71590,Material,R71595,highly versatile photonic sources,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71658,Interfacial Control Toward Efficient and Low-Voltage Perovskite Light-Emitting Diodes,S338234,R71659,Material,R71661,High-performance perovskite light-emitting diodes,"High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338074,R71567,Material,R71578,injecting layer,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338071,R71567,Material,R71575,injecting layers,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338111,R71590,Material,R71599,iodide-based compositions,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338108,R71590,Material,R71596,light-emitting diodes,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338115,R71590,Material,R71603,MA- or Cs-only cousins,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338066,R71567,Data,R71570,maximum EQE value,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338126,R71590,Data,R71614,nearly cubic in shape,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338076,R71567,Material,R71580,non-radiative recombination pathways,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338131,R71590,Data,R71619,peak PL wavelengths,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338077,R71567,Material,R71581,perovskite material,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338110,R71590,Material,R71598,red and infrared spectral regions,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338112,R71590,Material,R71600,red-emissive CsPbI3 NCs,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338127,R71590,Data,R71615,similar sizes and morphologies,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R135926,Thin-Film Solar Cells with 19% Efficiency by Thermal Evaporation of CdSe and CdTe,S538096,R135931,substrate,L379204,SnO2-coated glass,CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...,TRUE,noun phrase
R259,Semiconductor and Optical Materials,R135926,Thin-Film Solar Cells with 19% Efficiency by Thermal Evaporation of CdSe and CdTe,S538085,R135931,keywords,L379193,Solar cells,CdTe-based solar cells exhibiting 19% power conversion efficiency were produced using widely available thermal evaporation deposition of the absorber layers on SnO2-coated glass with or without a t...,TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71658,Interfacial Control Toward Efficient and Low-Voltage Perovskite Light-Emitting Diodes,S338236,R71659,Material,R71663,solution-processed emitters,"High-performance perovskite light-emitting diodes are achieved by an interfacial engineering approach, leading to the most efficient near-infrared devices produced using solution-processed emitters and efficient green devices at high brightness conditions.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338125,R71590,Data,R71613,uniform in size (10–15 nm),"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338075,R71567,Material,R71579,unsealed Pe-LEDs,"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,noun phrase
R259,Semiconductor and Optical Materials,R137466,Comparison of heterojunction device parameters for pure and doped ZnO thin films with IIIA (Al or In) elements grown on silicon at room ambient,S544057,R137468,keywords,L383143,ZnO Thin film,"In this work, pure and IIIA element doped ZnO thin films were grown on p type silicon (Si) with (100) orientated surface by sol-gel method, and were characterized for comparing their electrical characteristics. The heterojunction parameters were obtained from the current-voltage (I-V) and capacitance-voltage (C-V) characteristics at room temperature. The ideality factor (n), saturation current (Io) and junction resistance of ZnO/p-Si heterojunction for both pure and doped (with Al or In) cases were determined by using different methods at room ambient. Other electrical parameters such as Fermi energy level (EF), barrier height (ΦB), acceptor concentration (Na), built-in potential (Φi) and voltage dependence of surface states (Nss) profile were obtained from the C-V measurements. The results reveal that doping ZnO with IIIA (Al or In) elements to fabricate n-ZnO/p-Si heterojunction can result in high performance diode characteristics.",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70733,Everything in moderation: ICT and reading performance of Dutch 15-year-olds,S337481,R70736,includes,R71028,ICT autonomy,"Abstract Previous research on the relationship between students’ home and school Information and Communication Technology (ICT) resources and academic performance has shown ambiguous results. The availability of ICT resources at school has been found to be unrelated or negatively related to academic performance, whereas the availability of ICT resources at home has been found to be both positively and negatively related to academic performance. In addition, the frequency of use of ICT is related to students’ academic achievement. This relationship has been found to be negative for ICT use at school, however, for ICT use at home the literature on the relationship with academic performance is again ambiguous. In addition to ICT availability and ICT use, students’ attitudes towards ICT have also been found to play a role in student performance. In the present study, we examine how availability of ICT resources, students’ use of those resources (at school, outside school for schoolwork, outside school for leisure), and students’ attitudes toward ICT (interest in ICT, perceived ICT competence, perceived ICT autonomy) relate to individual differences in performance on a digital assessment of reading in one comprehensive model using the Dutch PISA 2015 sample of 5183 15-year-olds (49.2% male). Student gender and students’ economic, social, and cultural status accounted for a substantial part of the variation in digitally assessed reading performance. Controlling for these relationships, results indicated that students with moderate access to ICT resources, moderate use of ICT at school or outside school for schoolwork, and moderate interest in ICT had the highest digitally assessed reading performance. In contrast, students who reported moderate competence in ICT had the lowest digitally assessed reading performance. In addition, frequent use of ICT outside school for leisure was negatively related to digitally assessed reading performance, whereas perceived autonomy was positively related. Taken together, the findings suggest that excessive access to ICT resources, excessive use of ICT, and excessive interest in ICT is associated with lower digitally assessed reading performance.",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70733,Everything in moderation: ICT and reading performance of Dutch 15-year-olds,S337498,R70736,includes,R71027,ICT competence,"Abstract Previous research on the relationship between students’ home and school Information and Communication Technology (ICT) resources and academic performance has shown ambiguous results. The availability of ICT resources at school has been found to be unrelated or negatively related to academic performance, whereas the availability of ICT resources at home has been found to be both positively and negatively related to academic performance. In addition, the frequency of use of ICT is related to students’ academic achievement. This relationship has been found to be negative for ICT use at school, however, for ICT use at home the literature on the relationship with academic performance is again ambiguous. In addition to ICT availability and ICT use, students’ attitudes towards ICT have also been found to play a role in student performance. In the present study, we examine how availability of ICT resources, students’ use of those resources (at school, outside school for schoolwork, outside school for leisure), and students’ attitudes toward ICT (interest in ICT, perceived ICT competence, perceived ICT autonomy) relate to individual differences in performance on a digital assessment of reading in one comprehensive model using the Dutch PISA 2015 sample of 5183 15-year-olds (49.2% male). Student gender and students’ economic, social, and cultural status accounted for a substantial part of the variation in digitally assessed reading performance. Controlling for these relationships, results indicated that students with moderate access to ICT resources, moderate use of ICT at school or outside school for schoolwork, and moderate interest in ICT had the highest digitally assessed reading performance. In contrast, students who reported moderate competence in ICT had the lowest digitally assessed reading performance. In addition, frequent use of ICT outside school for leisure was negatively related to digitally assessed reading performance, whereas perceived autonomy was positively related. Taken together, the findings suggest that excessive access to ICT resources, excessive use of ICT, and excessive interest in ICT is associated with lower digitally assessed reading performance.",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70740,ICT Engagement: a new construct and its assessment in PISA 2015,S337505,R70741,includes,R71027,ICT competence,"Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale “ICT as a topic in social interaction” and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70742,"A PISA-2015 Comparative Meta-Analysis between Singapore and Finland: Relations of Students’ Interest in Science, Perceived ICT Competence, and Environmental Awareness and Optimism",S337499,R70745,includes,R71027,ICT competence,"The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents’ interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students’ interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students’ environmental awareness negatively predicted environmental optimism.",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70746,Measurement invariance of the ICT engagement construct and its association with students’ performance in China and Germany: Evidence from PISA 2015 data,S337500,R70748,includes,R71027,ICT competence,"The present study investigated the factor structure of and measurement invariance in the information and communication technology (ICT) engagement construct, and the relationship between ICT engagement and students' performance on science, mathematics and reading in China and Germany. Samples were derived from the Programme for International Student Assessment (PISA) 2015 survey. Configural, metric and scalar equivalence were found in a multigroup exploratory structural equation model. In the regression model, a significantly positive association between interest in ICT and student achievement was found in China, in contrast to a significantly negative association in Germany. All achievement scores were negatively and significantly correlated with perceived ICT competence scores in China, whereas science and mathematics achievement scores were not predicted by scores on ICT competence in Germany. Similar patterns were found in China and Germany in terms of perceived autonomy in using ICT and social relatedness in using ICT to predict students' achievement. The implications of all the findings were discussed. [ABSTRACT FROM AUTHOR]",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70740,ICT Engagement: a new construct and its assessment in PISA 2015,S337506,R70741,includes,R71029,ICT interest,"Abstract As a relevant cognitive-motivational aspect of ICT literacy, a new construct ICT Engagement is theoretically based on self-determination theory and involves the factors ICT interest, Perceived ICT competence, Perceived autonomy related to ICT use, and ICT as a topic in social interaction. In this manuscript, we present different sources of validity supporting the construct interpretation of test scores in the ICT Engagement scale, which was used in PISA 2015. Specifically, we investigated the internal structure by dimensional analyses and investigated the relation of ICT Engagement aspects to other variables. The analyses are based on public data from PISA 2015 main study from Switzerland ( n = 5860) and Germany ( n = 6504). First, we could confirm the four-dimensional structure of ICT Engagement for the Swiss sample using a structural equation modelling approach. Second, ICT Engagement scales explained the highest amount of variance in ICT Use for Entertainment, followed by Practical use. Third, we found significantly lower values for girls in all ICT Engagement scales except ICT Interest. Fourth, we found a small negative correlation between the scores in the subscale “ICT as a topic in social interaction” and reading performance in PISA 2015. We could replicate most results for the German sample. Overall, the obtained results support the construct interpretation of the four ICT Engagement subscales.",TRUE,noun phrase
R281,Social and Behavioral Sciences,R70742,"A PISA-2015 Comparative Meta-Analysis between Singapore and Finland: Relations of Students’ Interest in Science, Perceived ICT Competence, and Environmental Awareness and Optimism",S337471,R70745,Has method,R55185,Linear Regression,"The aim of the present study is twofold: (1) to identify a factor structure between variables-interest in broad science topics, perceived information and communications technology (ICT) competence, environmental awareness and optimism; and (2) to explore the relations between these variables at the country level. The first part of the aim is addressed using exploratory factor analysis with data from the Program for International Student Assessment (PISA) for 15-year-old students from Singapore and Finland. The results show that a comparable structure with four factors was verified in both countries. Correlation analyses and linear regression were used to address the second part of the aim. The results show that adolescents’ interest in broad science topics can predict perceived ICT competence. Their interest in broad science topics and perceived ICT competence can predict environmental awareness in both countries. However, there is difference in predicting environmental optimism. Singaporean students’ interest in broad science topics and their perceived ICT competences are positive predictors, whereas environmental awareness is a negative predictor. Finnish students’ environmental awareness negatively predicted environmental optimism.",TRUE,noun phrase
R353,Social Psychology,R75828,Decision-making at the sharp end: a survey of literature related to decision-making in humanitarian contexts,S346748,R75830,Domain,R75819,Humanitarian response,"Abstract In a humanitarian response, leaders are often tasked with making large numbers of decisions, many of which have significant consequences, in situations of urgency and uncertainty. These conditions have an impact on the decision-maker (causing stress, for example) and subsequently on how decisions get made. Evaluations of humanitarian action suggest that decision-making is an area of weakness in many operations. There are examples of important decisions being missed and of decision-making processes that are slow and ad hoc. As part of a research process to address these challenges, this article considers literature from the humanitarian and emergency management sectors that relates to decision-making. It outlines what the literature tells us about the nature of the decisions that leaders at the country level are taking during humanitarian operations, and the circumstances under which these decisions are taken. It then considers the potential application of two different types of decision-making process in these contexts: rational/analytical decision-making and naturalistic decision-making. The article concludes with broad hypotheses that can be drawn from the literature and with the recommendation that these be further tested by academics with an interest in the topic.",TRUE,noun phrase
R354,Sociology,R44689,Problem solving treatment and group psychoeducation for depression: multicentre randomised controlled trial. Outcomes of Depression International Network (ODIN) Group,S136581,R44690,Depression outcomes (sources),L83478,Beck Depression Inventory,"Abstract Objectives: To determine the acceptability of two psychological interventions for depressed adults in the community and their effect on caseness, symptoms, and subjective function. Design: A pragmatic multicentre randomised controlled trial, stratified by centre. Setting: Nine urban and rural communities in Finland, Republic of Ireland, Norway, Spain, and the United Kingdom. Participants: 452 participants aged 18 to 65, identified through a community survey with depressive or adjustment disorders according to the international classification of diseases, 10th revision or Diagnostic and Statistical Manual of Mental Disorders, fourth edition. Interventions: Six individual sessions of problem solving treatment (n=128), eight group sessions of the course on prevention of depression (n=108), and controls (n=189). Main outcome measures: Completion rates for each intervention, diagnosis of depression, and depressive symptoms and subjective function. Results: 63% of participants assigned to problem solving and 44% assigned to prevention of depression completed their intervention. The proportion of problem solving participants depressed at six months was 17% less than that for controls, giving a number needed to treat of 6; the mean difference in Beck depression inventory score was −2.63 (95% confidence interval −4.95 to −0.32), and there were significant improvements in SF-36 scores. For depression prevention, the difference in proportions of depressed participants was 14% (number needed to treat of 7); the mean difference in Beck depression inventory score was −1.50 (−4.16 to 1.17), and there were significant improvements in SF-36 scores. Such differences were not observed at 12 months. Neither specific diagnosis nor treatment with antidepressants affected outcome. Conclusions: When offered to adults with depressive disorders in the community, problem solving treatment was more acceptable than the course on prevention of depression. Both interventions reduced caseness and improved subjective function.",TRUE,noun phrase
R354,Sociology,R44709,Acute and one-year outcome of a randomised controlled trial of brief cognitive therapy for major depressive disorder in primary care,S136731,R44710,Depressive disorder,R44691,Major depressive disorder,"Background The consensus statement on the treatment of depression (Paykel & Priest, 1992) advocates the use of cognitive therapy techniques as an adjunct to medication. Method This paper describes a randomised controlled trial of brief cognitive therapy (BCT) plus ‘treatment as usual’ versus treatment as usual in the management of 48 patients with major depressive disorder presenting in primary care. Results At the end of the acute phase, significantly more subjects (P < 0.05) met recovery criteria in the intervention group (n=15) compared with the control group (n=8). When initial neuroticism scores were controlled for, reductions in Beck Depression Inventory and Hamilton Rating Scale for Depression scores favoured the BCT group throughout the 12 months of follow-up. Conclusions BCT may be beneficial, but given the time constraints, therapists need to be more rather than less skilled in cognitive therapy. This, plus methodological limitations, leads us to advise caution before applying this approach more widely in primary care.",TRUE,noun phrase
R354,Sociology,R44685,Treatment of dysthymia and minor depression in primary care: a randomized trial in patients aged 18 to 59 years,S136549,R44686,Depressive disorder,R44682,Minor depression or dysthymia,"OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. 
CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option.",TRUE,noun phrase
R354,Sociology,R44685,Treatment of dysthymia and minor depression in primary care: a randomized trial in patients aged 18 to 59 years,S136556,R44686,Setting,R44683,Primary care,"OBJECTIVE The researchers evaluated the effectiveness of paroxetine and Problem-Solving Treatment for Primary Care (PST-PC) for patients with minor depression or dysthymia. STUDY DESIGN This was an 11-week randomized placebo-controlled trial conducted in primary care practices in 2 communities (Lebanon, NH, and Seattle, Wash). Paroxetine (n=80) or placebo (n=81) therapy was started at 10 mg per day and increased to a maximum 40 mg per day, or PST-PC was provided (n=80). There were 6 scheduled visits for all treatment conditions. POPULATION A total of 241 primary care patients with minor depression (n=114) or dysthymia (n=127) were included. Of these, 191 patients (79.3%) completed all treatment visits. OUTCOMES Depressive symptoms were measured using the 20-item Hopkins Depression Scale (HSCL-D-20). Remission was scored on the Hamilton Depression Rating Scale (HDRS) as less than or equal to 6 at 11 weeks. Functional status was measured with the physical health component (PHC) and mental health component (MHC) of the 36-item Medical Outcomes Study Short Form. RESULTS All treatment conditions showed a significant decline in depressive symptoms over the 11-week period. There were no significant differences between the interventions or by diagnosis. For dysthymia the remission rate for paroxetine (80%) and PST-PC (57%) was significantly higher than for placebo (44%, P=.008). The remission rate was high for minor depression (64%) and similar for each treatment group. For the MHC there were significant outcome differences related to baseline level for paroxetine compared with placebo. For the PHC there were no significant differences between the treatment groups. CONCLUSIONS For dysthymia, paroxetine and PST-PC improved remission compared with placebo plus nonspecific clinical management. 
Results varied for the other outcomes measured. For minor depression, the 3 interventions were equally effective; general clinical management (watchful waiting) is an appropriate treatment option.",TRUE,noun phrase
R354,Sociology,R44693,A randomised controlled trial of cognitive behaviour therapy vs treatment as usual in the treatment of mild to moderate late life depression,S136612,R44694,Setting,R44683,Primary care,This study provides an empirical evaluation of Cognitive Behaviour Therapy (CBT) alone vs Treatment as usual (TAU) alone (generally pharmacotherapy) for late life depression in a UK primary care setting.,TRUE,noun phrase
R354,Sociology,R44702,Randomised controlled trial comparing problem solving treatment with amitriptyline and placebo for major depression in primary care,S136677,R44703,Setting,R44683,Primary care,"Abstract Objective: To determine whether, in the treatment of major depression in primary care, a brief psychological treatment (problem solving) was (a) as effective as antidepressant drugs and more effective than placebo; (b) feasible in practice; and (c) acceptable to patients. Design: Randomised controlled trial of problem solving treatment, amitriptyline plus standard clinical management, and drug placebo plus standard clinical management. Each treatment was delivered in six sessions over 12 weeks. Setting: Primary care in Oxfordshire. Subjects: 91 patients in primary care who had major depression. Main outcome measures: Observer and self reported measures of severity of depression, self reported measure of social outcome, and observer measure of psychological symptoms at six and 12 weeks; self reported measure of patient satisfaction at 12 weeks. Numbers of patients recovered at six and 12 weeks. Results: At six and 12 weeks the difference in score on the Hamilton rating scale for depression between problem solving and placebo treatments was significant (5.3 (95% confidence interval 1.6 to 9.0) and 4.7 (0.4 to 9.0) respectively), but the difference between problem solving and amitriptyline was not significant (1.8 (−1.8 to 5.5) and 0.9 (−3.3 to 5.2) respectively). At 12 weeks 60% (18/30) of patients given problem solving treatment had recovered on the Hamilton scale compared with 52% (16/31) given amitriptyline and 27% (8/30) given placebo. Patients were satisfied with problem solving treatment; all patients who completed treatment (28/30) rated the treatment as helpful or very helpful. The six sessions of problem solving treatment totalled a mean therapy time of 3 1/2 hours. 
Conclusions: As a treatment for major depression in primary care, problem solving treatment is effective, feasible, and acceptable to patients. Key messages Key messages Patient compliance with antidepressant treatment is often poor, so there is a need for a psychological treatment This study found that problem solving is an effective psychological treatment for major depression in primary care—as effective as amitriptyline and more effective than placebo Problem solving is a feasible treatment in primary care, being effective when given over six sessions by a general practitioner Problem solving treatment is acceptable to patients",TRUE,noun phrase
R354,Sociology,R44704,"Randomised controlled trial of problem solving treatment, antidepressant medication, and combined treatment for major depression in primary care",S136699,R44705,Setting,R44683,Primary care,"Abstract Objectives: To determine whether problem solving treatment combined with antidepressant medication is more effective than either treatment alone in the management of major depression in primary care. To assess the effectiveness of problem solving treatment when given by practice nurses compared with general practitioners when both have been trained in the technique. Design: Randomised controlled trial with four treatment groups. Setting: Primary care in Oxfordshire. Participants: Patients aged 18-65 years with major depression on the research diagnostic criteria—a score of 13 or more on the 17 item Hamilton rating scale for depression and a minimum duration of illness of four weeks. Interventions: Problem solving treatment by research general practitioner or research practice nurse or antidepressant medication or a combination of problem solving treatment and antidepressant medication. Main outcome measures: Hamilton rating scale for depression, Beck depression inventory, clinical interview schedule (revised), and the modified social adjustment schedule assessed at 6, 12, and 52 weeks. Results: Patients in all groups showed a clear improvement over 12 weeks. The combination of problem solving treatment and antidepressant medication was no more effective than either treatment alone. There was no difference in outcome irrespective of who delivered the problem solving treatment. Conclusions: Problem solving treatment is an effective treatment for depressive disorders in primary care. The treatment can be delivered by suitably trained practice nurses or general practitioners. The combination of this treatment with antidepressant medication is no more effective than either treatment alone. 
Key messages Problem solving treatment is an effective treatment for depressive disorders in primary care Problem solving treatment can be delivered by suitably trained practice nurses as effectively as by general practitioners The combination of problem solving treatment and antidepressant medication is no more effective than either treatment alone Problem solving treatment is most likely to benefit patients who have a depressive disorder of moderate severity and who wish to participate in an active psychological treatment",TRUE,noun phrase
R354,Sociology,R44709,Acute and one-year outcome of a randomised controlled trial of brief cognitive therapy for major depressive disorder in primary care,S136738,R44710,Setting,R44683,Primary care,"Background The consensus statement on the treatment of depression (Paykel & Priest, 1992) advocates the use of cognitive therapy techniques as an adjunct to medication. Method This paper describes a randomised controlled trial of brief cognitive therapy (BCT) plus ‘treatment as usual’ versus treatment as usual in the management of 48 patients with major depressive disorder presenting in primary care. Results At the end of the acute phase, significantly more subjects (P < 0.05) met recovery criteria in the intervention group (n=15) compared with the control group (n=8). When initial neuroticism scores were controlled for, reductions in Beck Depression Inventory and Hamilton Rating Scale for Depression scores favoured the BCT group throughout the 12 months of follow-up. Conclusions BCT may be beneficial, but given the time constraints, therapists need to be more rather than less skilled in cognitive therapy. This, plus methodological limitations, leads us to advise caution before applying this approach more widely in primary care.",TRUE,noun phrase
R354,Sociology,R44713,Telephone psychotherapy and telephone care management for primary care patients starting antidepressant treatment: a randomized controlled trial,S136761,R44714,Setting,R44683,Primary care,"CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. 
Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was ""much improved"" (80% vs 55%, P<.001), and a higher proportion of patients ""very satisfied"" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.",TRUE,noun phrase
R354,Sociology,R44719,Treatment of dysthymia and minor depression in primary care: A randomized controlled trial in older adults,S136814,R44720,Setting,R44683,Primary care,"CONTEXT Insufficient evidence exists for recommendation of specific effective treatments for older primary care patients with minor depression or dysthymia. OBJECTIVE To compare the effectiveness of pharmacotherapy and psychotherapy in primary care settings among older persons with minor depression or dysthymia. DESIGN Randomized, placebo-controlled trial (November 1995-August 1998). SETTING Four geographically and clinically diverse primary care practices. PARTICIPANTS A total of 415 primary care patients (mean age, 71 years) with minor depression (n = 204) or dysthymia (n = 211) and a Hamilton Depression Rating Scale (HDRS) score of at least 10 were randomized; 311 (74.9%) completed all study visits. INTERVENTIONS Patients were randomly assigned to receive paroxetine (n = 137) or placebo (n = 140), starting at 10 mg/d and titrated to a maximum of 40 mg/d, or problem-solving treatment-primary care (PST-PC; n = 138). For the paroxetine and placebo groups, the 6 visits over 11 weeks included general support and symptom and adverse effects monitoring; for the PST-PC group, visits were for psychotherapy. MAIN OUTCOME MEASURES Depressive symptoms, by the 20-item Hopkins Symptom Checklist Depression Scale (HSCL-D-20) and the HDRS; and functional status, by the Medical Outcomes Study Short-Form 36 (SF-36) physical and mental components. RESULTS Paroxetine patients showed greater (difference in mean [SE] 11-week change in HSCL-D-20 scores, 0.21 [0. 07]; P =.004) symptom resolution than placebo patients. Patients treated with PST-PC did not show more improvement than placebo (difference in mean [SE] change in HSCL-D-20 scores, 0.11 [0.13]; P =.13), but their symptoms improved more rapidly than those of placebo patients during the latter treatment weeks (P =.01). 
For dysthymia, paroxetine improved mental health functioning vs placebo among patients whose baseline functioning was high (difference in mean [SE] change in SF-36 mental component scores, 5.8 [2.02]; P =. 01) or intermediate (difference in mean [SE] change in SF-36 mental component scores, 4.4 [1.74]; P =.03). Mental health functioning in dysthymia patients was not significantly improved by PST-PC compared with placebo (P>/=.12 for low-, intermediate-, and high-functioning groups). For minor depression, both paroxetine and PST-PC improved mental health functioning in patients in the lowest tertile of baseline functioning (difference vs placebo in mean [SE] change in SF-36 mental component scores, 4.7 [2.03] for those taking paroxetine; 4.7 [1.96] for the PST-PC treatment; P =.02 vs placebo). CONCLUSIONS Paroxetine showed moderate benefit for depressive symptoms and mental health function in elderly patients with dysthymia and more severely impaired elderly patients with minor depression. The benefits of PST-PC were smaller, had slower onset, and were more subject to site differences than those of paroxetine.",TRUE,noun phrase
R354,Sociology,R44713,Telephone psychotherapy and telephone care management for primary care patients starting antidepressant treatment: a randomized controlled trial,S136758,R44714,Depression outcomes (sources),L83591,Symptom Checklist,"CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. 
Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was ""much improved"" (80% vs 55%, P<.001), and a higher proportion of patients ""very satisfied"" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.",TRUE,noun phrase
R140,Software Engineering,R76792,Mining Twitter Feeds for Software User Requirements,S491029,R76800,Machine learning algorithms,R76801,Naive bayes,"Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.",TRUE,noun phrase
R140,Software Engineering,R78371,Automatic Classification of Non-Functional Requirements from Augmented App User Reviews,S491193,R78373,Machine learning algorithms,R78375,Naive bayes,"Context: The leading App distribution platforms, Apple App Store, Google Play, and Windows Phone Store, have over 4 million Apps. Research shows that user reviews contain abundant useful information which may help developers to improve their Apps. Extracting and considering Non-Functional Requirements (NFRs), which describe a set of quality attributes wanted for an App and are hidden in user reviews, can help developers to deliver a product which meets users' expectations. Objective: Developers need to be aware of the NFRs from massive user reviews during software maintenance and evolution. Automatic user reviews classification based on an NFR standard provides a feasible way to achieve this goal. Method: In this paper, user reviews were automatically classified into four types of NFRs (reliability, usability, portability, and performance), Functional Requirements (FRs), and Others. We combined four classification techniques BoW, TF-IDF, CHI2, and AUR-BoW (proposed in this work) with three machine learning algorithms Naive Bayes, J48, and Bagging to classify user reviews. We conducted experiments to compare the F-measures of the classification results through all the combinations of the techniques and algorithms. Results: We found that the combination of AUR-BoW with Bagging achieves the best result (a precision of 71.4%, a recall of 72.3%, and an F-measure of 71.8%) among all the combinations. Conclusion: Our finding shows that augmented user reviews can lead to better classification results, and the machine learning algorithm Bagging is more suitable for NFRs classification from user reviews than Naïve Bayes and J48.",TRUE,noun phrase
R140,Software Engineering,R152014,From scenario modeling to scenario programming for reactive systems with dynamic topology,S608758,R152016,model,R152018,Scenario Modeling,"Software-intensive systems often consist of cooperating reactive components. In mobile and reconfigurable systems, their topology changes at run-time, which influences how the components must cooperate. The Scenario Modeling Language (SML) offers a formal approach for specifying the reactive behavior such systems that aligns with how humans conceive and communicate behavioral requirements. Simulation and formal checks can find specification flaws early. We present a framework for the Scenario-based Programming (SBP) that reflects the concepts of SML in Java and makes the scenario modeling approach available for programming. SBP code can also be generated from SML and extended with platform-specific code, thus streamlining the transition from design to implementation. As an example serves a car-to-x communication system. Demo video and artifact: http://scenariotools.org/esecfse-2017-tool-demo/",TRUE,noun phrase
R140,Software Engineering,R152020,A Scenario-based MDE Process for Developing Reactive Systems: A Cleaning Robot Example,S608770,R152022,model,R152018,Scenario Modeling,"This paper presents the SCENARIOTOOLS solution for developing a cleaning robot system, an instance of the rover problem of the MDE Tools Challenge 2017. We present an MDE process that consists of (1) the modeling of the system behavior as a scenario-based assume-guarantee specification with SML (Scenario Modeling Language), (2) the formal realizabilitychecking and verification of the specification, (3) the generation of SBP (Scenario-Based Programming) Java code from the SML specification, and, finally, (4) adding platform-specific code to connect specification-level events with platform-level sensorand actuator-events. The resulting code can be executed on a RaspberryPi-based robot. The approach is suited for developing reactive systems with multiple cooperating components. Its strength is that the scenario-based modeling corresponds closely to how humans conceive and communicate behavioral requirements. SML in particular supports the modeling of environment assumptions and dynamic component structures. The formal checks ensure that the system satisfies its specification.",TRUE,noun phrase
R140,Software Engineering,R152014,From scenario modeling to scenario programming for reactive systems with dynamic topology,S608759,R152016,programming language,R152019,Scenario Modeling Language,"Software-intensive systems often consist of cooperating reactive components. In mobile and reconfigurable systems, their topology changes at run-time, which influences how the components must cooperate. The Scenario Modeling Language (SML) offers a formal approach for specifying the reactive behavior such systems that aligns with how humans conceive and communicate behavioral requirements. Simulation and formal checks can find specification flaws early. We present a framework for the Scenario-based Programming (SBP) that reflects the concepts of SML in Java and makes the scenario modeling approach available for programming. SBP code can also be generated from SML and extended with platform-specific code, thus streamlining the transition from design to implementation. As an example serves a car-to-x communication system. Demo video and artifact: http://scenariotools.org/esecfse-2017-tool-demo/",TRUE,noun phrase
R140,Software Engineering,R152020,A Scenario-based MDE Process for Developing Reactive Systems: A Cleaning Robot Example,S608771,R152022,programming language,R152019,Scenario Modeling Language,"This paper presents the SCENARIOTOOLS solution for developing a cleaning robot system, an instance of the rover problem of the MDE Tools Challenge 2017. We present an MDE process that consists of (1) the modeling of the system behavior as a scenario-based assume-guarantee specification with SML (Scenario Modeling Language), (2) the formal realizabilitychecking and verification of the specification, (3) the generation of SBP (Scenario-Based Programming) Java code from the SML specification, and, finally, (4) adding platform-specific code to connect specification-level events with platform-level sensorand actuator-events. The resulting code can be executed on a RaspberryPi-based robot. The approach is suited for developing reactive systems with multiple cooperating components. Its strength is that the scenario-based modeling corresponds closely to how humans conceive and communicate behavioral requirements. SML in particular supports the modeling of environment assumptions and dynamic component structures. The formal checks ensure that the system satisfies its specification.",TRUE,noun phrase
R140,Software Engineering,R74516,Measuring human values in software engineering,S342530,R74518,Subjects,L246624,software practitioners,"Background: Human values, such as prestige, social justice, and financial success, influence software production decision-making processes. While their subjectivity makes some values difficult to measure, their impact on software motivates our research. Aim: To contribute to the scientific understanding and the empirical investigation of human values in Software Engineering (SE). Approach: Drawing from social psychology, we consider values as mental representations to be investigated on three levels: at a system (L1), personal (L2), and instantiation level (L3). Method: We design and develop a selection of tools for the investigation of values at each level, and focus on the design, development, and use of the Values Q-Sort. Results: From our study with 12 software practitioners, it is possible to extract three values `prototypes' indicative of an emergent typology of values considerations in SE. Conclusions: The Values Q-Sort generates quantitative values prototypes indicating values relations (L1) as well as rich personal narratives (L2) that reflect specific software practices (L3). It thus offers a systematic, empirical approach to capturing values in SE.",TRUE,noun phrase
R140,Software Engineering,R76792,Mining Twitter Feeds for Software User Requirements,S491028,R76800,Machine learning algorithms,R76802,Support vector machines,"Twitter enables large populations of end-users of software to publicly share their experiences and concerns about software systems in the form of micro-blogs. Such data can be collected and classified to help software developers infer users' needs, detect bugs in their code, and plan for future releases of their systems. However, automatically capturing, classifying, and presenting useful tweets is not a trivial task. Challenges stem from the scale of the data available, its unique format, diverse nature, and high percentage of irrelevant information and spam. Motivated by these challenges, this paper reports on a three-fold study that is aimed at leveraging Twitter as a main source of software user requirements. The main objective is to enable a responsive, interactive, and adaptive data-driven requirements engineering process. Our analysis is conducted using 4,000 tweets collected from the Twitter feeds of 10 software systems sampled from a broad range of application domains. The results reveal that around 50% of collected tweets contain useful technical information. The results also show that text classifiers such as Support Vector Machines and Naive Bayes can be very effective in capturing and categorizing technically informative tweets. Additionally, the paper describes and evaluates multiple summarization strategies for generating meaningful summaries of informative software-relevant tweets.",TRUE,noun phrase
R140,Software Engineering,R49480,Software Architecture Optimization Methods: A Systematic Literature Review,S147718,R49482,Has method,R41033,Systematic Literature Review,"Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.",TRUE,noun phrase
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5403,R4928,Material,R4933,a page of learning materials,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,noun phrase
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5402,R4928,Material,R4932,difficult pages,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,noun phrase
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5400,R4928,Material,R4930,e-learning materials,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,noun phrase
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5407,R4928,Data,R4937,learners' mental states,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,noun phrase
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5401,R4928,Material,R4931,which word or figure a learner,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,noun phrase
R106,Systems Biology,R49453,MetaboMAPS: Pathway sharing and multi-omics data visualization in metabolic context,S147510,R49455,Scope,L90696,Pathway sharing,"Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.",TRUE,noun phrase
R30,Terrestrial and Aquatic Ecology,R171893,Alien plants can be associated with a decrease in local and regional native richness even when at low abundance,S686372,R171897,Has metric,R171901,critical abundance,"The impacts of alien plants on native richness are usually assessed at small spatial scales and in locations where the alien is at high abundance. But this raises two questions: to what extent do impacts occur where alien species are at low abundance, and do local impacts translate to effects at the landscape scale? In an analysis of 47 widespread alien plant species occurring across a 1,000 km2 landscape, we examined the relationship between their local abundance and native plant species richness in 594 grassland plots. We first defined the critical abundance at which these focal alien species were associated with a decline in native α‐richness (plot‐scale species numbers), and then assessed how this local decline was translated into declines in native species γ‐richness (landscape‐scale species numbers). After controlling for sampling biases and environmental gradients that might lead to spurious relationships, we found that eight out of 47 focal alien species were associated with a significant decline in native α‐richness as their local abundance increased. Most of these significant declines started at low to intermediate classes of abundance. For these eight species, declines in native γ‐richness were, on average, an order of magnitude (32.0 vs. 2.2 species) greater than those found for native α‐richness, mostly due to spatial homogenization of native communities. The magnitude of the decrease at the landscape scale was best explained by the number of plots where an alien species was found above its critical abundance. Synthesis. Even at low abundance, alien plants may impact native plant richness at both local and landscape scales. Local impacts may result in much greater declines in native richness at larger spatial scales. Quantifying impact at the landscape scale requires consideration of not only the prevalence of an alien plant, but also its critical abundance and its effect on native community homogenization. This suggests that management approaches targeting only those locations dominated by alien plants might not mitigate impacts effectively. Our integrated approach will improve the ranking of alien species risks at a spatial scale appropriate for prioritizing management and designing conservation policies.",TRUE,noun phrase
R369,"Theory, Knowledge and Science",R76758,Relational Representation Learning for Dynamic (Knowledge) Graphs: A Survey,S350445,R76760,Has approach,L250116,encoder-decoder perspective,"Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research.",TRUE,noun phrase
R369,"Theory, Knowledge and Science",R76762,Virtual Knowledge Graphs: An Overview of Systems and Use Cases.,S350553,R76764,Has approach,L250185,ontology-based data access,"In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions.",TRUE,noun phrase
R369,"Theory, Knowledge and Science",R75675,Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods,S350523,R75677,Has method,L250170,refinement methods,"In the recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term ""Knowledge Graph"" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.",TRUE,noun phrase
R141,Theory/Algorithms,R108835,A Parallelization Scheme for New DPD-B Thermostats,S495772,R108840,Has implementation,R108848,a new algorithm,"This paper presents the MPI parallelization of a new algorithm—DPD-B thermostat—for molecular dynamics simulations. The presented results are using Martini Coarse Grained Water System. It should be taken into account that molecular dynamics simulations are time consuming. In some cases the running time varies from days to weeks and even months. Therefore, parallelization is one solution for reducing the execution time. The paper describes the new algorithm, the main characteristics of the MPI parallelization of the new algorithm, and the simulation performances.",TRUE,noun phrase
R141,Theory/Algorithms,R8210,Analyzing Emergency Evacuation Strategies for Mass Gatherings using Crowd Simulation And Analysis framework: Hajj Scenario,S146039,R8211,Has implementation,R8238,Crowd Simulation and Analysis Framework,"Hajj is one of the largest mass gatherings where Muslims from all over the world gather in Makah each year for pilgrimage. A mass assembly of such scale bears a huge risk of disaster either natural or man-made. In the past few years, thousands of casualties have occurred while performing different Hajj rituals, especially during the Circumambulation of Kaba (Tawaf) due to stampede or chaos. During such calamitous situations, an appropriate evacuation strategy can help resolve the problem and mitigate further risk of causalities. It is however a daunting research problem to identify an optimal course of action based on several constraints. Modeling and analyzing such a problem of real-time and spatially explicit complexity requires a microscale crowd simulation and analysis framework. Which not only allows the modeler to express the spatial dimensions and features of the environment in real scale, but also provides modalities to capture complex crowd behaviors. In this paper, we propose an Agent-based Crowd Simulation & Analysis framework that incorporates the use of Anylogic Pedestrian library and integrates/interoperate Anylogic Simulation environment with the external modules for optimization and analysis. Hence provides a runtime environment for analyzing complex situations, e.g., emergency evacuation strategies. The key features of the proposed framework include: (i) Ability to model large crowd in a spatially explicit environment at real-scale; (ii) Simulation of complex crowd behavior such as emergency evacuation; (iii) Interoperability of optimization and analysis modules with simulation runtime for evaluating evacuation strategies. We present a case study of Hajj scenario as a proof of concept and a test bed for identifying and evaluating optimal strategies for crowd evacuation",TRUE,noun phrase
R141,Theory/Algorithms,R2008,Algorithm and Hardware for a Merge Sort Using Multiple Processors,S2020,R2012,Algorithm,R2014,Merge sort,"An algorithm is described that allows log (n) processors to sort n records in just over 2n write cycles, together with suitable hardware to support the algorithm. The algorithm is a parallel version of the straight merge sort. The passes of the merge sort are run overlapped, with each pass supported by a separate processor. The intermediate files of a serial merge sort are replaced by first-in first-out queues. The processors and queues may be implemented in conventional solid logic technology or in bubble technology. A hybrid technology is also appropriate.",TRUE,noun phrase
R141,Theory/Algorithms,R44108,Time Series Data Cleaning: A Survey,S134411,R44111,Definition,R44119,time series data,"Errors are prevalent in time series data, which is particularly common in the industrial field. Data with errors could not be stored in the database, which results in the loss of data assets. At present, to deal with these time series containing errors, besides keeping original erroneous data, discarding erroneous data and manually checking erroneous data, we can also use the cleaning algorithm widely used in the database to automatically clean the time series data. This survey provides a classification of time series data cleaning techniques and comprehensively reviews the state-of-the-art methods of each type. Besides we summarize data cleaning tools, systems and evaluation criteria from research and industry. Finally, we highlight possible directions time series data cleaning.",TRUE,noun phrase
R342,Urban Studies,R138635,Smart City Ontologies: Improving the effectiveness of smart city applications,S630596,R138637,Technology level,R157259,Digital Space,"This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Protégé 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities.",TRUE,noun phrase
R342,Urban Studies,R108909,Conundrum or paradox: deconstructing the spurious case of water scarcity in the Himalayan Region through an institutional economics narrative,S496083,R108911,Region of data collection,R108914,Eastern Himalaya,"Water scarcity in mountain regions such as the Himalaya has been studied with a pre-existing notion of scarcity justified by decades of communities' suffering from physical water shortages combined by difficulties of access. The Eastern Himalayan Region (EHR) of India receives significantly high amounts of annual precipitation. Studies have nonetheless shown that this region faces a strange dissonance: an acute water scarcity in a supposedly ‘water-rich’ region. The main objective of this paper is to decipher various drivers of water scarcity by locating the contemporary history of water institutions within the development trajectory of the Darjeeling region, particularly Darjeeling Municipal Town in West Bengal, India. A key feature of the region's urban water governance that defines the water scarcity narrative is the multiplicity of water institutions and the intertwining of formal and informal institutions at various scales. These factors affect the availability of and basic access to domestic water by communities in various ways resulting in the creation of a preferred water bundle consisting of informal water markets over and above traditional sourcing from springs and the formal water supply from the town municipality.",TRUE,noun phrase
R374,Urban Studies and Planning,R146416,Collaborating Filtering Community Image Recommendation System Based on Scene,S586216,R146418,uses Recommendation Method,R138244,Collaborative filtering,"With the advancement of smart city, the development of intelligent mobile terminal and wireless network, the traditional text information service no longer meet the needs of the community residents, community image service appeared as a new media service. “There are pictures of the truth” has become a community residents to understand and master the new dynamic community, image information service has become a new information service. However, there are two major problems in image information service. Firstly, the underlying eigenvalues extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user’s understanding; secondly, in community life of the image data increasing quickly, it is difficult to find their own interested image data. Aiming at the two problems, this paper proposes a unified image semantic scene model to express the image content. On this basis, a collaborative filtering recommendation model of fusion scene semantics is proposed. In the recommendation model, a comprehensiveness and accuracy user interest model is proposed to improve the recommendation quality. The results of the present study have achieved good results in the pilot cities of Wenzhou and Yan'an, and it is applied normally.",TRUE,noun phrase
R374,Urban Studies and Planning,R146070,"Smart city initiatives in the context of digital transformation: scope, services and technologies",S584982,R146074,"Issue(s) Addressed ",R146076,Empowering social and collaboration interactions,"Digital transformation is an emerging trend in developing the way how the work is being done, and it is present in the private and public sector, in all industries and fields of work. Smart cities, as one of the concepts related to digital transformation, is usually seen as a matter of local governments, as it is their responsibility to ensure a better quality of life for the citizens. Some cities have already taken advantages of possibilities offered by the concept of smart cities, creating new values to all stakeholders interacting in the living city ecosystems, thus serving as examples of good practice, while others are still developing and growing on their intentions to become smart. This paper provides a structured literature analysis and investigates key scope, services and technologies related to smart cities and digital transformation as concepts of empowering social and collaboration interactions, in order to identify leading factors in most smart city initiatives.",TRUE,noun phrase
R374,Urban Studies and Planning,R146032,Digital transformation of existing cities,S584896,R146034,Components ,R146036,Information hub,"The article focuses on the range of problems arising on the way of innovative technologies implementation in the structure of existing cities. The concept of intellectualization of historic cities, as illustrated by Samara, is offered, which was chosen for the realization of a large Russian project “Smart City. Successful Region” in 2018. One of the problems was to study the experience of information hubs projecting with the purpose of determination of their priority functional directions. The following typology of information hubs was made: scientific and research ones, scientific and technical ones, innovative and cultural ones, cultural and informational ones, scientific and informational ones, technological ones, centres for data processing, scientific centres with experimental and production laboratories. As a result of the conducted research, a suggestion on smart city’s infrastructure is developed, the final levels of innovative technologies implementation in the structure of historic territories are determined. A model suggestion on the formation of a scientific and project centre with experimental and production laboratories branded as named “Park-plant” is developed. Smart (as well as real) city technologies, which are supposed to be placed on the territory of “Park-plant”, are systematized. The organizational structure of the promotion of model projects is offered according to the concept of “triad of development agents”, in which the flagship university – urban community – park-plant interact within the project programme. The effects of the development of the being renovated territory of the historic city centre are enumerated.",TRUE,noun phrase
R374,Urban Studies and Planning,R146039,The Evolving Enterprise Architecture: A Digital Transformation Perspective,S584914,R146041,Technologies Deployed,R109367,Internet of Things,"The advancement of technology has influenced all the enterprises. Enterprises should come up with the evolving approaches to face the challenges. With an evolving approach, the enterprise will be able to adapt to successive changes. Enterprise architecture is introduced as an approach to confront these challenges. The main issue is the generalization of this evolving approach to enterprise architecture. In an evolving approach, all aspects of the enterprise, as well as the ecosystem of the enterprise are considered. In this study, the notion of Internet of Things is considered as a transition factor in enterprise and enterprise architecture. Industry 4.0 and digital transformation have also been explored in the enterprise. Common challenges are extracted and defined.",TRUE,noun phrase
R374,Urban Studies and Planning,R146090,"Internet of Things, legal and regulatory framework in digital transformation from smart to intelligent cities",S585023,R146092,Technologies Deployed,R109367,Internet of Things,"Digital transformation from “Smart” to “Intelligent city” is based on new information technologies and knowledge, as well as on organizational and security processes. The authors of this paper will present the legal and regulatory framework and challenges of Internet of things in development of smart cities on the way to become intelligent cities. The special contribution of the paper will be an overview of new legal and regulatory framework General Data Protection Regulation (GDPR) which is of great importance for European union legal and regulation framework and bringing novelties in citizen's privacy and protection of personal data.",TRUE,noun phrase
R374,Urban Studies and Planning,R146122,Evolution of Enterprise Architecture for Digital Transformation,S585124,R146124,Technologies Deployed,R109367,Internet of Things,"The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.",TRUE,noun phrase
R374,Urban Studies and Planning,R74326,A mixed-methods analysis of mobility behavior changes in the COVID-19 era in a rural case study,S580748,R74329,Area,L405846,rural area,"Abstract Background As a reaction to the novel coronavirus disease (COVID-19), countries around the globe have implemented various measures to reduce the spread of the virus. The transportation sector is particularly affected by the pandemic situation. The current study aims to contribute to the empirical knowledge regarding the effects of the coronavirus situation on the mobility of people by (1) broadening the perspective to the mobility rural area’s residents and (2) providing subjective data concerning the perceived changes of affected persons’ mobility practices, as these two aspects have scarcely been considered in research so far. Methods To address these research gaps, a mixed-methods study was conducted that integrates a qualitative telephone interview study ( N = 15) and a quantitative household survey ( N = 301). The rural district of Altmarkkreis Salzwedel in Northern Germany was chosen as a model region. Results The results provide in-depth insights into the changing mobility practices of residents of a rural area during the legal restrictions to stem the spread of the virus. A high share of respondents (62.6%) experienced no changes in their mobility behavior due to the COVID-19 pandemic situation. However, nearly one third of trips were also cancelled overall. A modal shift was observed towards the reduction of trips by car and bus, and an increase of trips by bike. The share of trips by foot was unchanged. The majority of respondents did not predict strong long-term effects of the corona pandemic on their mobility behavior.",TRUE,noun phrase
R374,Urban Studies and Planning,R146122,Evolution of Enterprise Architecture for Digital Transformation,S585129,R146124,Components ,R146129,Service perspective,"The digital transformation of our life changes the way we work, learn, communicate, and collaborate. Enterprises are presently transforming their strategy, culture, processes, and their information systems to become digital. The digital transformation deeply disrupts existing enterprises and economies. Digitization fosters the development of IT systems with many rather small and distributed structures, like Internet of Things, Microservices and mobile services. Since years a lot of new business opportunities appear using the potential of services computing, Internet of Things, mobile systems, big data with analytics, cloud computing, collaboration networks, and decision support. Biological metaphors of living and adaptable ecosystems provide the logical foundation for self-optimizing and resilient run-time environments for intelligent business services and adaptable distributed information systems with service-oriented enterprise architectures. This has a strong impact for architecting digital services and products following both a value-oriented and a service perspective. The change from a closed-world modeling world to a more flexible open-world composition and evolution of enterprise architectures defines the moving context for adaptable and high distributed systems, which are essential to enable the digital transformation. The present research paper investigates the evolution of Enterprise Architecture considering new defined value-oriented mappings between digital strategies, digital business models and an improved digital enterprise architecture.",TRUE,noun phrase
R374,Urban Studies and Planning,R142756,Smart City Ontologies: Improving the effectiveness of smart city applications,S579616,R144786,Linked Ontology,R144745,Smart City Ontology,"This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Protégé 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and or-chestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities.",TRUE,noun phrase
R374,Urban Studies and Planning,R146434,Skunkworks finder: unlocking the diversity advantage of urban innovation ecosystems,S586278,R146436,has Data Source,R139825,social media,"Entrepreneurs and start-up founders using innovation spaces and hubs often find themselves inside a filter bubble or echo chamber, where like-minded people tend to come up with similar ideas and recommend similar approaches to innovation. This trend towards homophily and a polarisation of like-mindedness is aggravated by algorithmic filtering and recommender systems embedded in mobile technology and social media platforms. Yet, genuine innovation thrives on social inclusion fostering a diversity of ideas. To escape these echo chambers, we designed and tested the Skunkworks Finder - an exploratory tool that employs social network analysis to help users discover spaces of difference and otherness in their local urban innovation ecosystem.",TRUE,noun phrase
R374,Urban Studies and Planning,R146443,Encouraging civic participation through local news aggregation,S586302,R146445,has Data Source,R139825,social media,"Traditional sources of information for small and rural communities have been disappearing over the past decade. A lot of the information and discussion related to such local geographic areas is now scattered across websites of numerous local organizations, individual blogs, social media and other user-generated media (YouTube, Flickr). It is important to capture this information and make it easily accessible to local citizens to facilitate citizen engagement and social interaction. Furthermore, a system that has location-based support can provide local citizens with an engaging way to interact with this information and identify the local issues most relevant to them. A location-based interface for a local geographic area enables people to identify and discuss local issues related to specific locations such as a particular street or a road construction site. We created an information aggregator, called the Virtual Town Square (VTS), to support and facilitate local discussion and interaction. We created a location-based interface for users to access the information collected by VTS. In this paper, we discuss focus group interviews with local citizens that motivated our design of a local news and information aggregator to facilitate civic participation. We then discuss the unique design challenges in creating such a local news aggregator and our design approach to create a local information ecosystem. We describe VTS and the initial evaluation and feedback we received from local users and through weekly meetings with community partners.",TRUE,noun phrase
R374,Urban Studies and Planning,R149031,Developing E-Government Coursework through the NASPAA Competencies Framework,S597411,R149033,has business competence,R149103,strategic planning," Information technology (IT) is often less emphasized in coursework related to public administration education, despite the growing need for technological capabilities in those joining the public sector workforce. This coupled with a lesser emphasis on e-government/IT skills by accreditation standards adds to the widening gap between theory and practice in the field. This study examines the emphasis placed on e-government/IT concepts in Master of Public Administration (MPA) and Master of Public Policy (MPP) programs, either through complete course offerings or through related courses such as public management, strategic planning, performance measurement and organization theory. Based on a content analysis of their syllabi, the paper analyzes the extent to which the IT/e-government courses in MPA/Master of Public Policy programs address the Network of Schools of Public Policy, Affairs, and Administration competency standards, and further discuss the orientation of the courses with two of the competencies: management and policy. Specifically, are e-government/IT courses more management-oriented or policy-oriented? Do public management, strategic planning, performance measurement, and organization theory courses address IT concerns? ",TRUE,noun phrase
R374,Urban Studies and Planning,R146060,Tools of quality economics: sustainable development of a ‘smart city’ under conditions of digital transformation of the economy,S584963,R146062,Components ,R146068,Sustainable development,"The article covers the issues of ensuring sustainable city development based on the achievements of digitalization. Attention is also paid to the use of quality economy tools in managing 'smart' cities under conditions of the digital transformation of the national economy. The current state of 'smart' cities and the main factors contributing to their sustainable development, including the digitalization requirements is analyzed. Based on the analysis of statistical material, the main prospects to form the 'smart city' concept, the possibility to assess such parameters as 'life quality', 'comfort', 'rational organization', 'opportunities', 'sustainable development', 'city environment accessibility', 'use of communication technologies'. The role of tools for quality economics is revealed in ensuring the big city life under conditions of digital economy. The concept of 'life quality' is considered, which currently is becoming one of the fundamental vectors of the human civilization development, a criterion that is increasingly used to compare countries and territories. Special attention is paid to such tools and methods of quality economics as standardization, metrology and quality management. It is proposed to consider these tools as a mechanism for solving the most important problems in the national economy development under conditions of digital transformation.",TRUE,noun phrase
R57,Virology,R36146,COVID-19 outbreak in Algeria: A mathematical model to predict the incidence,S123882,R36147,Methods,L74604,Alg-COVID-19 Model,"Abstract Introduction Since December 29, 2019 a pandemic of new novel coronavirus-infected pneumonia named COVID-19 has started from Wuhan, China, has led to 254 996 confirmed cases until midday March 20, 2020. Sporadic cases have been imported worldwide, in Algeria, the first case reported on February 25, 2020 was imported from Italy, and then the epidemic has spread to other parts of the country very quickly with 139 confirmed cases until March 21, 2020. Methods It is crucial to estimate the cases number growth in the early stages of the outbreak, to this end, we have implemented the Alg-COVID-19 Model which allows to predict the incidence and the reproduction number R0 in the coming months in order to help decision makers. The Alg-COVID-19 Model initial equation 1, estimates the cumulative cases at t prediction time using two parameters: the reproduction number R0 and the serial interval SI. Results We found R0=2.55 based on actual incidence at the first 25 days, using the serial interval SI= 4,4 and the prediction time t=26. The herd immunity HI estimated is HI=61%. Also, The Covid-19 incidence predicted with the Alg-COVID-19 Model fits closely the actual incidence during the first 26 days of the epidemic in Algeria Fig. 1.A. which allows us to use it. According to Alg-COVID-19 Model, the number of cases will exceed 5000 on the 42 th day (April 7 th ) and it will double to 10000 on 46th day of the epidemic (April 11 th ), thus, exponential phase will begin (Table 1; Fig.1.B) and increases continuously until reaching a herd immunity of 61% unless serious preventive measures are considered. Discussion This model is valid only when the majority of the population is vulnerable to COVID-19 infection, however, it can be updated to fit the new parameters values.",TRUE,noun phrase
R57,Virology,R51373,Identification of antiviral drug candidates against SARS-CoV-2 from FDA-approved drugs,S157291,R51399,Has participant,R51251,antiviral drug,"Drug repositioning is the only feasible option to immediately address the COVID-19 global challenge. We screened a panel of 48 FDA-approved drugs against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) which were preselected by an assay of SARS-CoV. We identified 24 potential antiviral drug candidates against SARS-CoV-2 infection. Some drug candidates showed very low 50% inhibitory concentrations (IC 50 s), and in particular, two FDA-approved drugs—niclosamide and ciclesonide—were notable in some respects.",TRUE,noun phrase
R57,Virology,R41016,Unique epidemiological and clinical features of the emerging 2019 novel coronavirus pneumonia (COVID-19) implicate special control measures,S130153,R41025,data source,L79109,China CDC,"By 27 February 2020, the outbreak of coronavirus disease 2019 (COVID‐19) caused 82 623 confirmed cases and 2858 deaths globally, more than severe acute respiratory syndrome (SARS) (8273 cases, 775 deaths) and Middle East respiratory syndrome (MERS) (1139 cases, 431 deaths) caused in 2003 and 2013, respectively. COVID‐19 has spread to 46 countries internationally. Total fatality rate of COVID‐19 is estimated at 3.46% by far based on published data from the Chinese Center for Disease Control and Prevention (China CDC). Average incubation period of COVID‐19 is around 6.4 days, ranges from 0 to 24 days. The basic reproductive number (R0) of COVID‐19 ranges from 2 to 3.5 at the early phase regardless of different prediction models, which is higher than SARS and MERS. A study from China CDC showed majority of patients (80.9%) were considered asymptomatic or mild pneumonia but released large amounts of viruses at the early phase of infection, which posed enormous challenges for containing the spread of COVID‐19. Nosocomial transmission was another severe problem. A total of 3019 health workers were infected by 12 February 2020, which accounted for 3.83% of total number of infections, and extremely burdened the health system, especially in Wuhan. Limited epidemiological and clinical data suggest that the disease spectrum of COVID‐19 may differ from SARS or MERS. We summarize latest literatures on genetic, epidemiological, and clinical features of COVID‐19 in comparison to SARS and MERS and emphasize special measures on diagnosis and potential interventions. This review will improve our understanding of the unique features of COVID‐19 and enhance our control measures in the future.",TRUE,noun phrase
R57,Virology,R51231,Broad anti-coronaviral activity of FDA approved drugs against SARS-CoV-2 in vitro and SARS-CoV in vivo,S156798,R51233,Has participant,R51243,chloroquine and chlorpromazine,"Abstract SARS-CoV-2 emerged in China at the end of 2019 and has rapidly become a pandemic with roughly 2.7 million recorded COVID-19 cases and greater than 189,000 recorded deaths by April 23rd, 2020 (www.WHO.org). There are no FDA approved antivirals or vaccines for any coronavirus, including SARS-CoV-2. Current treatments for COVID-19 are limited to supportive therapies and off-label use of FDA approved drugs. Rapid development and human testing of potential antivirals is greatly needed. A quick way to test compounds with potential antiviral activity is through drug repurposing. Numerous drugs are already approved for human use and subsequently there is a good understanding of their safety profiles and potential side effects, making them easier to fast-track to clinical studies in COVID-19 patients. Here, we present data on the antiviral activity of 20 FDA approved drugs against SARS-CoV-2 that also inhibit SARS-CoV and MERS-CoV. We found that 17 of these inhibit SARS-CoV-2 at a range of IC50 values at non-cytotoxic concentrations. We directly follow up with seven of these to demonstrate all are capable of inhibiting infectious SARS-CoV-2 production. Moreover, we have evaluated two of these, chloroquine and chlorpromazine, in vivo using a mouse-adapted SARS-CoV model and found both drugs protect mice from clinical disease.",TRUE,noun phrase
R57,Virology,R41250,The impact of social distancing and epicenter lockdown on the COVID-19 epidemic in mainland China: A data-driven SEIQR model study,S134087,R44026,Method,R41244,data-driven susceptible-exposed-infectious-quarantine-recovered (SEIQR) models,"The outbreak of coronavirus disease 2019 (COVID-19) which originated in Wuhan, China, constitutes a public health emergency of international concern with a very high risk of spread and impact at the global level. We developed data-driven susceptible-exposed-infectious-quarantine-recovered (SEIQR) models to simulate the epidemic with the interventions of social distancing and epicenter lockdown. Population migration data combined with officially reported data were used to estimate model parameters, and then calculated the daily exported infected individuals by estimating the daily infected ratio and daily susceptible population size. As of Jan 01, 2020, the estimated initial number of latently infected individuals was 380.1 (95%-CI: 379.8~381.0). With 30 days of substantial social distancing, the reproductive number in Wuhan and Hubei was reduced from 2.2 (95%-CI: 1.4~3.9) to 1.58 (95%-CI: 1.34~2.07), and in other provinces from 2.56 (95%-CI: 2.43~2.63) to 1.65 (95%-CI: 1.56~1.76). We found that earlier intervention of social distancing could significantly limit the epidemic in mainland China. The number of infections could be reduced up to 98.9%, and the number of deaths could be reduced by up to 99.3% as of Feb 23, 2020. However, earlier epicenter lockdown would partially neutralize this favorable effect. Because it would cause in situ deteriorating, which overwhelms the improvement out of the epicenter. To minimize the epidemic size and death, stepwise implementation of social distancing in the epicenter city first, then in the province, and later the whole nation without the epicenter lockdown would be practical and cost-effective.",TRUE,noun phrase
R57,Virology,R44759,Transmission potential of COVID-19 in Iran,S137026,R44766,Method,L83758,generalized growth model,"Abstract We estimated the reproduction number of 2020 Iranian COVID-19 epidemic using two different methods: R 0 was estimated at 4.4 (95% CI, 3.9, 4.9) (generalized growth model) and 3.50 (1.28, 8.14) (epidemic doubling time) (February 19 - March 1) while the effective R was estimated at 1.55 (1.06, 2.57) (March 6-19).",TRUE,noun phrase
R57,Virology,R44776,Estimating the generation interval for COVID-19 based on symptom onset data,S137086,R44785,Method,L83792,generation interval,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,noun phrase
R57,Virology,R36138,Estimating the generation interval for COVID-19 based on symptom onset data,S123840,R36141,Methods,L74573,generation interval,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,noun phrase
R57,Virology,R44087,Modelling the Potential Health Impact of the COVID-19 Pandemic on a Hypothetical European Country,S134317,R44098,location,R44083,Hypothetical European Country,"A SEIR simulation model for the COVID-19 pandemic was developed (http://covidsim.eu) and applied to a hypothetical European country of 10 million population. Our results show which interventions potentially push the epidemic peak into the subsequent year (when vaccinations may be available) or which fail. Different levels of control (via contact reduction) resulted in 22% to 63% of the population sick, 0.2% to 0.6% hospitalised, and 0.07% to 0.28% dead (n=6,450 to 28,228).",TRUE,noun phrase
R57,Virology,R36132,Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study,S123798,R36137,location,R36136,mainland China,"We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.",TRUE,noun phrase
R57,Virology,R37003,Real-Time Estimation of the Risk of Death from Novel Coronavirus (COVID-19) Infection: Inference Using Exported Cases,S124034,R37005,location,R36136,mainland China,"The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number—the average number of secondary cases generated by a single primary case in a naïve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data.",TRUE,noun phrase
R57,Virology,R37006,Estimating the Unreported Number of Novel Coronavirus (2019-nCoV) Cases in China in the First Half of January 2020: A Data-Driven Modelling Analysis of the Early Outbreak,S124054,R37007,location,R36136,mainland China,"Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases, in mainland China from 1 December 2019 to 24 January 2020 through the exponential growth. The number of unreported cases was determined by the maximum likelihood estimation. We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403–540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18–25) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49–2.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation.",TRUE,noun phrase
R57,Virology,R44901,Real-Time Estimation of the Risk of Death from Novel Coronavirus (COVID-19) Infection: Inference Using Exported Cases,S137558,R44906,location,R44895,mainland China,"The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number—the average number of secondary cases generated by a single primary case in a naïve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data.",TRUE,noun phrase
R57,Virology,R44910,Estimating the Unreported Number of Novel Coronavirus (2019-nCoV) Cases in China in the First Half of January 2020: A Data-Driven Modelling Analysis of the Early Outbreak,S137589,R44914,location,R44895,mainland China,"Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases, in mainland China from 1 December 2019 to 24 January 2020 through the exponential growth. The number of unreported cases was determined by the maximum likelihood estimation. We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403–540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18–25) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49–2.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation.",TRUE,noun phrase
R57,Virology,R36128,Risk estimation and prediction by modeling the transmission of the novel coronavirus (COVID-19) in mainland China excluding Hubei province,S123750,R36129,location,R36127,mainland China excluding Hubei province,"Background: In December 2019, an outbreak of coronavirus disease (COVID-19)was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods: A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number Rc, as well as the effective daily reproduction ratio Re(t), of the disease transmission in mainland China excluding Hubei province. Results: The estimation outcomes indicate that the control reproduction number is 3.36 (95% CI 3.20-3.64) and Re(t) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. By calculating the effective reproduction ratio, we proved that the contact rate should be kept at least less than 30% of the normal level by April, 2020. 
Conclusions: To ensure the epidemic ending rapidly, it is necessary to maintain the current integrated restrict interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. If all the above conditions are met, the outbreak is expected to be ended by April in mainland China apart from Hubei province.",TRUE,noun phrase
R57,Virology,R36106,Characterizing the transmission and identifying the control strategy for COVID-19 through epidemiological modeling,S123601,R36107,Methods,L74399,Monte carlo simulation,"The outbreak of the novel coronavirus disease, COVID-19, originating from Wuhan, China in early December, has infected more than 70,000 people in China and other countries and has caused more than 2,000 deaths. As the disease continues to spread, the biomedical society urgently began identifying effective approaches to prevent further outbreaks. Through rigorous epidemiological analysis, we characterized the fast transmission of COVID-19 with a basic reproductive number 5.6 and proved a sole zoonotic source to originate in Wuhan. No changes in transmission have been noted across generations. By evaluating different control strategies through predictive modeling and Monte carlo simulations, a comprehensive quarantine in hospitals and quarantine stations has been found to be the most effective approach. Government action to immediately enforce this quarantine is highly recommended.",TRUE,noun phrase
R57,Virology,R175260,Alphacoronavirus in a Daubenton’s Myotis Bat (Myotis daubentonii) in Sweden.,S694134,R175262,Has Host,L466710,Myotis daubentonii,"The ongoing COVID-19 pandemic has stimulated a search for reservoirs and species potentially involved in back and forth transmission. Studies have postulated bats as one of the key reservoirs of coronaviruses (CoVs), and different CoVs have been detected in bats. So far, CoVs have not been found in bats in Sweden and we therefore tested whether they carry CoVs. In summer 2020, we sampled a total of 77 adult bats comprising 74 Myotis daubentonii, 2 Pipistrellus pygmaeus, and 1 M. mystacinus bats in southern Sweden. Blood, saliva and feces were sampled, processed and subjected to a virus next-generation sequencing target enrichment protocol. An Alphacoronavirus was detected and sequenced from feces of a M. daubentonii adult female bat. Phylogenetic analysis of the almost complete virus genome revealed a close relationship with Finnish and Danish strains. This was the first finding of a CoV in bats in Sweden, and bats may play a role in the transmission cycle of CoVs in Sweden. Focused and targeted surveillance of CoVs in bats is warranted, with consideration of potential conflicts between public health and nature conservation required as many bat species in Europe are threatened and protected.",TRUE,noun phrase
R57,Virology,R41252,Spread of SARS-CoV-2 in the Icelandic Population,S130887,R41255,has study design,R41265,population screening,"Abstract Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, the virus that causes Covid-19, enters and spreads in a population. Methods We targeted testing to persons living in Iceland who were at high risk for infection (mainly those who were symptomatic, had recently traveled to high-risk countries, or had contact with infected persons). We also carried out population screening using two strategies: issuing an open invitation to 10,797 persons and sending random invitations to 2283 persons. We sequenced SARS-CoV-2 from 643 samples. Results As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. 
The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening. Conclusions In a population-based study in Iceland, children under 10 years of age and females had a lower incidence of SARS-CoV-2 infection than adolescents or adults and males. The proportion of infected persons identified through population screening did not change substantially during the screening period, which was consistent with a beneficial effect of containment efforts. (Funded by deCODE Genetics–Amgen.)",TRUE,noun phrase
R57,Virology,R178482,"Viral load dynamics and disease severity in patients infected with SARS-CoV-2 in Zhejiang province, China, January-March 2020: retrospective cohort study",S700066,R178485,Material,R178490,respiratory sample,"Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). 
The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.",TRUE,noun phrase
R57,Virology,R110684,Enterovirus inhibiting activities of two lupane triterpenoids and anthraquinones from senna siamea stem bark against three serotypes of echovirus,S504422,R110686,Source name,L364339,Senna siamea,"Echovirus 7, 13 and 19 are part of the diseases-causing enteroviruses identified in Nigeria. Presently, no treatment modality is clinically available against these enteric viruses. Herein, we investigated the ability of two anthraquinones (physcion and chrysophanol) and two lupane triterpenoids (betulinic acid and lupeol), isolated from the stem bark of Senna siamea, to reduce the viral-induced cytopathic effect on rhabdomyosarcoma cells using MTT (3-[4,5-dimethylthiazol–2-yl]-2,5diphenyltetrazolium bromide) colorimetric method. Viral-induced CPE by E7 and E19 was inhibited in the presence of all tested compounds, E13 was resistant to all the compounds except betulinic acid. Physcion was the most active with IC50 of 0.42 and 0.33 μg/mL on E7 and E19, respectively. We concluded that these compounds from Senna siamea possess anti-enteroviral activities and betulinic acid may represent a potential therapeutic agent to control E7, E13, and E19 infections, especially due its ability to inhibit CPE caused by the impervious E13.",TRUE,noun phrase
R57,Virology,R44776,Estimating the generation interval for COVID-19 based on symptom onset data,S137097,R44789,Method,L83798,serial interval,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,noun phrase
R57,Virology,R36138,Estimating the generation interval for COVID-19 based on symptom onset data,S123851,R36142,Methods,L74581,serial interval,"Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,noun phrase
R57,Virology,R178482,"Viral load dynamics and disease severity in patients infected with SARS-CoV-2 in Zhejiang province, China, January-March 2020: retrospective cohort study",S700073,R178491,Material,R178497,serum sample,"Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). 
The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.",TRUE,noun phrase
R57,Virology,R41169,Statistics based predictions of coronavirus 2019-nCoV spreading in mainland China,S130688,R41172,Method,R41166,SIR model,"Background. The epidemic outbreak caused by coronavirus 2019-nCoV is of great interest to researchers because of the high rate of spread of the infection and the significant number of fatalities. A detailed scientific analysis of the phenomenon is yet to come, but the public is already interested in the questions of the duration of the epidemic, the expected number of patients and deaths. For long time predictions, the complicated mathematical models are necessary which need many efforts for unknown parameters identification and calculations. In this article, some preliminary estimates will be presented. Objective. Since the reliable long time data are available only for mainland China, we will try to predict the epidemic characteristics only in this area. We will estimate some of the epidemic characteristics and present the most reliable dependences for victim numbers, infected and removed persons versus time. Methods. In this study we use the known SIR model for the dynamics of an epidemic, the known exact solution of the linear equations and statistical approach developed before for investigation of the children disease, which occurred in Chernivtsi (Ukraine) in 1988-1989. Results. The optimal values of the SIR model parameters were identified with the use of statistical approach. The numbers of infected, susceptible and removed persons versus time were predicted. Conclusions. Simple mathematical model was used to predict the characteristics of the epidemic caused by coronavirus 2019-nCoV in mainland China. The further research should focus on updating the predictions with the use of fresh data and using more complicated mathematical models.",TRUE,noun phrase
R57,Virology,R70125,"Structure-Based Design, Synthesis, and Biological Evaluation of a Series of Novel and Reversible Inhibitors for the Severe Acute Respiratory Syndrome−Coronavirus Papain-Like Protease",S333212,R70126,has role,R48169,small molecule,"We describe here the design, synthesis, molecular modeling, and biological evaluation of a series of small molecule, nonpeptide inhibitors of SARS-CoV PLpro. Our initial lead compound was identified via high-throughput screening of a diverse chemical library. We subsequently carried out structure-activity relationship studies and optimized the lead structure to potent inhibitors that have shown antiviral activity against SARS-CoV infected Vero E6 cells. Upon the basis of the X-ray crystal structure of inhibitor 24-bound to SARS-CoV PLpro, a drug design template was created. Our structure-based modification led to the design of a more potent inhibitor, 2 (enzyme IC(50) = 0.46 microM; antiviral EC(50) = 6 microM). Interestingly, its methylamine derivative, 49, displayed good enzyme inhibitory potency (IC(50) = 1.3 microM) and the most potent SARS antiviral activity (EC(50) = 5.2 microM) in the series. We have carried out computational docking studies and generated a predictive 3D-QSAR model for SARS-CoV PLpro inhibitors.",TRUE,noun phrase
R57,Virology,R36132,Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study,S123782,R36133,location,R30061,South Korea,"We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.",TRUE,noun phrase
R57,Virology,R44812,Pattern of early human-to-human transmission of Wuhan 2019-nCoV,S137230,R44815,Method,L83894,Stochastic simulations of early outbreak trajectories,"ABSTRACT On December 31, 2019, the World Health Organization was notified about a cluster of pneumonia of unknown aetiology in the city of Wuhan, China. Chinese authorities later identified a new coronavirus (2019-nCoV) as the causative agent of the outbreak. As of January 23, 2020, 655 cases have been confirmed in China and several other countries. Understanding the transmission characteristics and the potential for sustained human-to-human transmission of 2019-nCoV is critically important for coordinating current screening and containment strategies, and determining whether the outbreak constitutes a public health emergency of international concern (PHEIC). We performed stochastic simulations of early outbreak trajectories that are consistent with the epidemiological findings to date. We found the basic reproduction number, R 0 , to be around 2.2 (90% high density interval 1.4—3.8), indicating the potential for sustained human-to-human transmission. Transmission characteristics appear to be of a similar magnitude to severe acute respiratory syndrome-related coronavirus (SARS-CoV) and the 1918 pandemic influenza. These findings underline the importance of heightened screening, surveillance and control efforts, particularly at airports and other travel hubs, in order to prevent further international spread of 2019-nCoV.",TRUE,noun phrase
R57,Virology,R12243,Pattern of early human-to-human transmission of Wuhan 2019-nCoV,S18696,R12244,Methods,L12306,Stochastic simulations of early outbreak trajectories,"ABSTRACT On December 31, 2019, the World Health Organization was notified about a cluster of pneumonia of unknown aetiology in the city of Wuhan, China. Chinese authorities later identified a new coronavirus (2019-nCoV) as the causative agent of the outbreak. As of January 23, 2020, 655 cases have been confirmed in China and several other countries. Understanding the transmission characteristics and the potential for sustained human-to-human transmission of 2019-nCoV is critically important for coordinating current screening and containment strategies, and determining whether the outbreak constitutes a public health emergency of international concern (PHEIC). We performed stochastic simulations of early outbreak trajectories that are consistent with the epidemiological findings to date. We found the basic reproduction number, R 0 , to be around 2.2 (90% high density interval 1.4—3.8), indicating the potential for sustained human-to-human transmission. Transmission characteristics appear to be of a similar magnitude to severe acute respiratory syndrome-related coronavirus (SARS-CoV) and the 1918 pandemic influenza. These findings underline the importance of heightened screening, surveillance and control efforts, particularly at airports and other travel hubs, in order to prevent further international spread of 2019-nCoV.",TRUE,noun phrase
R57,Virology,R178482,"Viral load dynamics and disease severity in patients infected with SARS-CoV-2 in Zhejiang province, China, January-March 2020: retrospective cohort study",S700080,R178498,Material,R178503,stool sample,"Abstract Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P<0.001). 
The median duration of virus in the respiratory samples of patients with severe disease (21 days, 14-30 days) was significantly longer than in patients with mild disease (14 days, 10-21 days; P=0.04). In the mild group, the viral loads peaked in respiratory samples in the second week from disease onset, whereas viral load continued to be high during the third week in the severe group. Virus duration was longer in patients older than 60 years and in male patients. Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.",TRUE,noun phrase
R57,Virology,R41252,Spread of SARS-CoV-2 in the Icelandic Population,S130884,R41255,has study design,R41262,targeted testing,"Abstract Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, the virus that causes Covid-19, enters and spreads in a population. Methods We targeted testing to persons living in Iceland who were at high risk for infection (mainly those who were symptomatic, had recently traveled to high-risk countries, or had contact with infected persons). We also carried out population screening using two strategies: issuing an open invitation to 10,797 persons and sending random invitations to 2283 persons. We sequenced SARS-CoV-2 from 643 samples. Results As of April 4, a total of 1221 of 9199 persons (13.3%) who were recruited for targeted testing had positive results for infection with SARS-CoV-2. Of those tested in the general population, 87 (0.8%) in the open-invitation screening and 13 (0.6%) in the random-population screening tested positive for the virus. In total, 6% of the population was screened. Most persons in the targeted-testing group who received positive tests early in the study had recently traveled internationally, in contrast to those who tested positive later in the study. Children under 10 years of age were less likely to receive a positive result than were persons 10 years of age or older, with percentages of 6.7% and 13.7%, respectively, for targeted testing; in the population screening, no child under 10 years of age had a positive result, as compared with 0.8% of those 10 years of age or older. Fewer females than males received positive results both in targeted testing (11.0% vs. 16.7%) and in population screening (0.6% vs. 0.9%). The haplotypes of the sequenced SARS-CoV-2 viruses were diverse and changed over time. 
The percentage of infected participants that was determined through population screening remained stable for the 20-day duration of screening. Conclusions In a population-based study in Iceland, children under 10 years of age and females had a lower incidence of SARS-CoV-2 infection than adolescents or adults and males. The proportion of infected persons identified through population screening did not change substantially during the screening period, which was consistent with a beneficial effect of containment efforts. (Funded by deCODE Genetics–Amgen.)",TRUE,noun phrase
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123630,R36112,location,R36111,"Tianjin, China","Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,noun phrase
R57,Virology,R36138,Estimating the generation interval for COVID-19 based on symptom onset data,S123849,R36142,location,R36111,"Tianjin, China","Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,noun phrase
R57,Virology,R44731,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S136934,R44738,location,R36111,"Tianjin, China","Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,noun phrase
R57,Virology,R44776,Estimating the generation interval for COVID-19 based on symptom onset data,S137096,R44789,location,R43057,"Tianjin, China","Background: Estimating key infectious disease parameters from the COVID-19 outbreak is quintessential for modelling studies and guiding intervention strategies. Whereas different estimates for the incubation period distribution and the serial interval distribution have been reported, estimates of the generation interval for COVID-19 have not been provided. Methods: We used outbreak data from clusters in Singapore and Tianjin, China to estimate the generation interval from symptom onset data while acknowledging uncertainty about the incubation period distribution and the underlying transmission network. From those estimates we obtained the proportions pre-symptomatic transmission and reproduction numbers. Results: The mean generation interval was 5.20 (95%CI 3.78-6.78) days for Singapore and 3.95 (95%CI 3.01-4.91) days for Tianjin, China when relying on a previously reported incubation period with mean 5.2 and SD 2.8 days. The proportion of pre-symptomatic transmission was 48% (95%CI 32-67%) for Singapore and 62% (95%CI 50-76%) for Tianjin, China. Estimates of the reproduction number based on the generation interval distribution were slightly higher than those based on the serial interval distribution. Conclusions: Estimating generation and serial interval distributions from outbreak data requires careful investigation of the underlying transmission network. Detailed contact tracing information is essential for correctly estimating these quantities.",TRUE,noun phrase
R57,Virology,R70125,"Structure-Based Design, Synthesis, and Biological Evaluation of a Series of Novel and Reversible Inhibitors for the Severe Acute Respiratory Syndrome−Coronavirus Papain-Like Protease",S333204,R70126,Has participant,R70130,Vero E6 cells,"We describe here the design, synthesis, molecular modeling, and biological evaluation of a series of small molecule, nonpeptide inhibitors of SARS-CoV PLpro. Our initial lead compound was identified via high-throughput screening of a diverse chemical library. We subsequently carried out structure-activity relationship studies and optimized the lead structure to potent inhibitors that have shown antiviral activity against SARS-CoV infected Vero E6 cells. Upon the basis of the X-ray crystal structure of inhibitor 24-bound to SARS-CoV PLpro, a drug design template was created. Our structure-based modification led to the design of a more potent inhibitor, 2 (enzyme IC(50) = 0.46 microM; antiviral EC(50) = 6 microM). Interestingly, its methylamine derivative, 49, displayed good enzyme inhibitory potency (IC(50) = 1.3 microM) and the most potent SARS antiviral activity (EC(50) = 5.2 microM) in the series. We have carried out computational docking studies and generated a predictive 3D-QSAR model for SARS-CoV PLpro inhibitors.",TRUE,noun phrase
R57,Virology,R41250,The impact of social distancing and epicenter lockdown on the COVID-19 epidemic in mainland China: A data-driven SEIQR model study,S134095,R41251,location,R44040,"Wuhan, China","The outbreak of coronavirus disease 2019 (COVID-19) which originated in Wuhan, China, constitutes a public health emergency of international concern with a very high risk of spread and impact at the global level. We developed data-driven susceptible-exposed-infectious-quarantine-recovered (SEIQR) models to simulate the epidemic with the interventions of social distancing and epicenter lockdown. Population migration data combined with officially reported data were used to estimate model parameters, and then calculated the daily exported infected individuals by estimating the daily infected ratio and daily susceptible population size. As of Jan 01, 2020, the estimated initial number of latently infected individuals was 380.1 (95%-CI: 379.8~381.0). With 30 days of substantial social distancing, the reproductive number in Wuhan and Hubei was reduced from 2.2 (95%-CI: 1.4~3.9) to 1.58 (95%-CI: 1.34~2.07), and in other provinces from 2.56 (95%-CI: 2.43~2.63) to 1.65 (95%-CI: 1.56~1.76). We found that earlier intervention of social distancing could significantly limit the epidemic in mainland China. The number of infections could be reduced up to 98.9%, and the number of deaths could be reduced by up to 99.3% as of Feb 23, 2020. However, earlier epicenter lockdown would partially neutralize this favorable effect. Because it would cause in situ deteriorating, which overwhelms the improvement out of the epicenter. To minimize the epidemic size and death, stepwise implementation of social distancing in the epicenter city first, then in the province, and later the whole nation without the epicenter lockdown would be practical and cost-effective.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4208,You will be…: a study of job advertisements to determine employers' requirements for LIS professionals in the UK in 2007,S4315,R4212,method,R4215,content analysis,"Purpose – The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach – A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006‐2007.Findings – The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4234,An investigation of skill requirements for business and data analytics positions: A content analysis of job advertisements,S4346,R4241,method,R4244,content analysis,"Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4337,You will be…: a study of job advertisements to determine employers' requirements for LIS professionals in the UK in 2007,S4502,R4341,method,L3149,content analysis,"Purpose – The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach – A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006‐2007.Findings – The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4347,An investigation of skill requirements for business and data analytics positions: A content analysis of job advertisements,S4526,R4354,method,L3158,content analysis,"Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4511,Is knowledge and skills sought by employers: A content analysis of Australian is early career online job advertisements,S4820,R4518,method,R4520,content analysis,"The purpose of this paper is to develop an understanding of the knowledge, skills and competencies demanded of early career information systems (IS) graduates in Australia. Online job advertisements from 2006 were collected and investigated using content analysis software to determine the frequencies and patterns of occurrence of specific requirements. This analysis reveals a dominant cluster of core IS knowledge and competency skills that revolves around IS Development as the most frequently required category of knowledge (78% of ads) and is strongly associated with: Business Analysis, Systems Analysis; Management; Operations, Maintenance & Support; Communication Skills; Personal Characteristics; Computer Languages; Data & Information Management; Internet, Intranet, Web Applications; and Software Packages. Identification of the core cluster of IS knowledge and skills - in demand across a wide variety of jobs - is important to better understand employers' needs for and expectations from IS graduates and the implications for education programs. Much less prevalent is the second cluster that includes knowledge and skills at a more technical side of IS (Architecture and Infrastructure, Operating Systems, Networks, and Security). Issues raised include the nature of entry level positions and their role in the preparation of their incumbents for future more senior positions. The findings add an Australian perspective to the literature on information systems job ads and should be of value to educators, employers, as well as current and future IS professionals.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4534,Knowledge and Skill Requirements for Marketing Jobs in the 21st Century,S4855,R4539,method,L3300,content analysis,"This study examines the skills and conceptual knowledge that employers require for marketing positions at different levels ranging from entry- or lower-level jobs to middle- and senior-level positions. The data for this research are based on a content analysis of 500 marketing jobs posted on Monster.com for Atlanta, Chicago, Los Angeles, New York City, and Seattle. There were notable differences between the skills and conceptual knowledge required for entry-, lower-, middle-, and upper-level marketing jobs. Technical skills appear to be much more important at all levels than what was documented in earlier research. This study discusses the implications of these research findings for the professional school pedagogical model of marketing education.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4258,An Ontology-Based Approach for the Semantic Representation of Job Knowledge,S4386,R4268,Process,R4278,creation of a job knowledge (Job-Know) ontology,"The essential and significant components of one's job performance, such as facts, principles, and concepts are considered as job knowledge. This paper provides a framework for forging links between the knowledge, skills, and abilities taught in vocational education and training (VET) and competence prerequisites of jobs. Specifically, the study is aimed at creating an ontology for the semantic representation of that which is taught in the VET, that which is required on the job, and how the two are related. In particular, the creation of a job knowledge (Job-Know) ontology, which represents task and knowledge domains, and the relation between these two domains is discussed. Deploying the Job-Know ontology facilitates bridging job and knowledge elements collected from various sources (such as job descriptions), the identification of knowledge shortages and the determination of mismatches between the task and the knowledge domains that, in a broader perspective, facilitate the bridging requirements of labor market and education systems.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4290,Analyzing Computer Programming Job Trend Using Web Data Mining,S4413,R4294,method,R4297,data mining,"Today’s rapid changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collect using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4329,Analyzing Computer Programming Job Trend Using Web Data Mining,S4483,R4333,method,R4336,data mining,"Today’s rapid changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collect using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4542,Mining for Computing Jobs,S4876,R4549,method,L3310,data mining,"A Web content mining approach identified 20 job categories and the associated skills needs prevalent in the computing professions. Using a Web content data mining application, we extracted almost a quarter million unique IT job descriptions from various job search engines and distilled each to its required skill sets. We statistically examined these, revealing 20 clusters of similar skill sets that map to specific job definitions. The results allow software engineering professionals to tune their skills portfolio to match those in demand from real computing jobs across the US to attain more lucrative salaries and more mobility in a chaotic environment.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4372,Challenge: Processing web texts for classifying job offers,S4578,R4383,method,L3182,LDA-based algorithms,"Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4308,Challenge: Processing web texts for classifying job offers,S4463,R4319,method,L3130,LDA-based algorithms,"Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4308,Challenge: Processing web texts for classifying job offers,S4464,R4319,method,R4322,machine learning,"Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4372,Challenge: Processing web texts for classifying job offers,S4579,R4383,method,R4386,machine learning,"Today the Web represents a rich source of labour market data for both public and private operators, as a growing number of job offers are advertised through Web portals and services. In this paper we apply and compare several techniques, namely explicit-rules, machine learning, and LDA-based algorithms to classify a real dataset of Web job offers collected from 12 heterogeneous sources against a standard classification system of occupations.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4404,An Open and Data-driven Taxonomy of Skills Extracted from Online Job Adverts,S4621,R4409,method,R4413,machine learning methods,"In this work we offer an open and data-driven skills taxonomy, which is independent of ESCO and O*NET, two popular available taxonomies that are expert-derived. Since the taxonomy is created in an algorithmic way without expert elicitation, it can be quickly updated to reflect changes in labour demand and provide timely insights to support labour market decision-making. Our proposed taxonomy also captures links between skills, aggregated job titles, and the salaries mentioned in the millions of UK job adverts used in this analysis. To generate the taxonomy, we employ machine learning methods, such as word embeddings, network community detection algorithms and consensus clustering. We model skills as a graph with individual skills as vertices and their co-occurrences in job adverts as edges. The strength of the relationships between the skills is measured using both the frequency of actual co-occurrences of skills in the same advert as well as their shared context, based on a trained word embeddings model. Once skills are represented as a network, we hierarchically group them into clusters. To ensure the stability of the resulting clusters, we introduce bootstrapping and consensus clustering stages into the methodology. While we share initial results and describe the skill clusters, the main purpose of this paper is to outline the methodology for building the taxonomy.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4208,You will be…: a study of job advertisements to determine employers' requirements for LIS professionals in the UK in 2007,S4314,R4212,Process,R4214,recruiting library and information professionals,"Purpose – The purpose of this paper is to investigate what employers seek when recruiting library and information professionals in the UK and whether professional skills, generic skills or personal qualities are most in demand.Design/methodology/approach – A content analysis of a sample of 180 advertisements requiring a professional library or information qualification from Chartered Institute of Library and Information Professional's Library + Information Gazette over the period May 2006‐2007.Findings – The findings reveal that a multitude of skills and qualities are required in the profession. When the results were compared with Information National Training Organisation and Library and Information Management Employability Skills research, customer service, interpersonal and communication skills, and general computing skills emerged as the requirements most frequently sought by employers. Overall, requirements from the generic skills area were most important to employers, but the research also demonstra...",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4347,An investigation of skill requirements for business and data analytics positions: A content analysis of job advertisements,S4521,R4354,Data,R4356,sample of online job postings,"Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4234,An investigation of skill requirements for business and data analytics positions: A content analysis of job advertisements,S4345,R4241,Material,R4243,sample of online job postings,"Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.",TRUE,noun phrase
R370,"Work, Economy and Organizations",R4404,An Open and Data-driven Taxonomy of Skills Extracted from Online Job Adverts,S4622,R4409,Has result,R4414,skills taxonomy,"In this work we offer an open and data-driven skills taxonomy, which is independent of ESCO and O*NET, two popular available taxonomies that are expert-derived. Since the taxonomy is created in an algorithmic way without expert elicitation, it can be quickly updated to reflect changes in labour demand and provide timely insights to support labour market decision-making. Our proposed taxonomy also captures links between skills, aggregated job titles, and the salaries mentioned in the millions of UK job adverts used in this analysis. To generate the taxonomy, we employ machine learning methods, such as word embeddings, network community detection algorithms and consensus clustering. We model skills as a graph with individual skills as vertices and their co-occurrences in job adverts as edges. The strength of the relationships between the skills is measured using both the frequency of actual co-occurrences of skills in the same advert as well as their shared context, based on a trained word embeddings model. Once skills are represented as a network, we hierarchically group them into clusters. To ensure the stability of the resulting clusters, we introduce bootstrapping and consensus clustering stages into the methodology. While we share initial results and describe the skill clusters, the main purpose of this paper is to outline the methodology for building the taxonomy.",TRUE,noun phrase
,Occupational psychology,R172966,Acceptance of Workplace Bullying Behaviors and Job Satisfaction: Moderated Mediation Analysis With Coping Self-Efficacy and Exposure to Bullying,S691092,R173008,Variables,L464047,Coping self-efficacy beliefs,"Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire – Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire – Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.",TRUE,noun phrase
,Occupational psychology,R172966,Acceptance of Workplace Bullying Behaviors and Job Satisfaction: Moderated Mediation Analysis With Coping Self-Efficacy and Exposure to Bullying,S691090,R173008,Variables,L464045,Job satisfaction,"Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire – Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire – Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.",TRUE,noun phrase
,electrical engineering,R145522,Remarkable Improvement in Foldability of Poly‐Si Thin‐Film Transistor on Polyimide Substrate Using Blue Laser Crystallization of Amorphous Si and Comparison with Conventional Poly‐Si Thin‐Film Transistor Used for Foldable Displays,S582730,R145526,substrate,L406995,Polyimide (PI),"Highly robust poly‐Si thin‐film transistor (TFT) on polyimide (PI) substrate using blue laser annealing (BLA) of amorphous silicon (a‐Si) for lateral crystallization is demonstrated. Its foldability is compared with the conventional excimer laser annealing (ELA) poly‐Si TFT on PI used for foldable displays exhibiting field‐effect mobility of 85 cm2 (V s)−1. The BLA poly‐Si TFT on PI exhibits the field‐effect mobility, threshold voltage (VTH), and subthreshold swing of 153 cm2 (V s)−1, −2.7 V, and 0.2 V dec−1, respectively. Most important finding is the excellent foldability of BLA TFT compared with the ELA poly‐Si TFTs on PI substrates. The VTH shift of BLA poly‐Si TFT is ≈0.1 V, which is much smaller than that (≈2 V) of ELA TFT on PI upon 30 000 cycle folding. The defects are generated at the grain boundary region of ELA poly‐Si during folding. However, BLA poly‐Si has no protrusion in the poly‐Si channel and thus no defect generation during folding. This leads to excellent foldability of BLA poly‐Si on PI substrate.",TRUE,noun phrase
,chemical engineering,R178365,Disorder–Order Transition—Improving the Moisture Sensitivity of Waterborne Nanocomposite Barriers,S699630,R178366,Fabrication method,L470910,Slot die,"Systematic studies on the influence of crystalline vs disordered nanocomposite structures on barrier properties and water vapor sensitivity are scarce as it is difficult to switch between the two morphologies without changing other critical parameters. By combining water-soluble poly(vinyl alcohol) (PVOH) and ultrahigh aspect ratio synthetic sodium fluorohectorite (Hec) as filler, we were able to fabricate nanocomposites from a single nematic aqueous suspension by slot die coating that, depending on the drying temperature, forms different desired morphologies. Increasing the drying temperature from 20 to 50 °C for the same formulation triggers phase segregation and disordered nanocomposites are obtained, while at room temperature, one-dimensional (1D) crystalline, intercalated hybrid Bragg Stacks form. The onset of swelling of the crystalline morphology is pushed to significantly higher relative humidity (RH). This disorder-order transition renders PVOH/Hec a promising barrier material at RH of up to 65%, which is relevant for food packaging. The oxygen permeability (OP) of the 1D crystalline PVOH/Hec is an order of magnitude lower compared to the OP of the disordered nanocomposite at this elevated RH (OP = 0.007 cm3 μm m-2 day-1 bar-1 cf. OP = 0.047 cm3 μm m-2 day-1 bar-1 at 23 °C and 65% RH).",TRUE,noun phrase
,electrical engineering,R145520,"Extremely Stable, High Performance Gd and Li Alloyed ZnO Thin Film Transistor by Spray Pyrolysis",S582949,R145521,keywords,L407157,Spray Pyrolysis,"The simultaneous doping effect of Gadolinium (Gd) and Lithium (Li) on zinc oxide (ZnO) thin‐film transistor (TFT) by spray pyrolysis using a ZrOx gate insulator is reported. Li doping in ZnO increases mobility significantly, whereas the presence of Gd improves the stability of the device. The Gd ratio in ZnO is varied from 0% to 20% and the Li ratio from 0% to 10%. The optimized ZnO TFT with codoping of 5% Li and 10% Gd exhibits the linear mobility of 25.87 cm2 V−1 s−1, the subthreshold swing of 204 mV dec−1, on/off current ratio of ≈108, and zero hysteresis voltage. The enhancement of both mobility and stability is due to an increase in grain size by Li incorporation and decrease of defect states by Gd doping. The negligible threshold voltage shift (∆VTH) under gate bias and zero hysteresis are due to the reduced defects in an oxide semiconductor and decreased traps at the LiGdZnO/ZrOx interface. Li doping can balance the reduction of the carrier concentration by Gd doping, which improves the mobility and stability of the ZnO TFT. Therefore, LiGdZnO TFT shows excellent electrical performance with high stability.",TRUE,noun phrase
,Occupational psychology,R172966,Acceptance of Workplace Bullying Behaviors and Job Satisfaction: Moderated Mediation Analysis With Coping Self-Efficacy and Exposure to Bullying,S691129,R173008,Variables,L464074,Workplace bullying,"Previous research explored workplace climate as a factor of workplace bullying and coping with workplace bullying, but these concepts were not closely related to workplace bullying behaviors (WBBs). To examine whether the perceived exposure to bullying mediates the relationship between the climate of accepting WBBs and job satisfaction under the condition of different levels of WBBs coping self-efficacy beliefs, we performed moderated mediation analysis. The Negative Acts Questionnaire – Revised was given to 329 employees from Serbia for assessing perceived exposure to bullying. Leaving the original scale items, the instruction of the original Negative Acts Questionnaire – Revised was modified for assessing (1) the climate of accepting WBBs and (2) WBBs coping self-efficacy beliefs. There was a significant negative relationship between exposure to bullying and job satisfaction. WBB acceptance climate was positively related to exposure to workplace bullying and negatively related to job satisfaction. WBB acceptance climate had an indirect relationship with job satisfaction through bullying exposure, and the relationship between WBB acceptance and exposure to bullying was weaker among those who believed that they were more efficient in coping with workplace bullying. Workplace bullying could be sustained by WBB acceptance climate which threatens the job-related outcomes. WBBs coping self-efficacy beliefs have some buffering effects.",TRUE,noun phrase
,electrical engineering,R145520,"Extremely Stable, High Performance Gd and Li Alloyed ZnO Thin Film Transistor by Spray Pyrolysis",S582946,R145521,keywords,L407154,Zinc Oxide,"The simultaneous doping effect of Gadolinium (Gd) and Lithium (Li) on zinc oxide (ZnO) thin‐film transistor (TFT) by spray pyrolysis using a ZrOx gate insulator is reported. Li doping in ZnO increases mobility significantly, whereas the presence of Gd improves the stability of the device. The Gd ratio in ZnO is varied from 0% to 20% and the Li ratio from 0% to 10%. The optimized ZnO TFT with codoping of 5% Li and 10% Gd exhibits the linear mobility of 25.87 cm2 V−1 s−1, the subthreshold swing of 204 mV dec−1, on/off current ratio of ≈108, and zero hysteresis voltage. The enhancement of both mobility and stability is due to an increase in grain size by Li incorporation and decrease of defect states by Gd doping. The negligible threshold voltage shift (∆VTH) under gate bias and zero hysteresis are due to the reduced defects in an oxide semiconductor and decreased traps at the LiGdZnO/ZrOx interface. Li doping can balance the reduction of the carrier concentration by Gd doping, which improves the mobility and stability of the ZnO TFT. Therefore, LiGdZnO TFT shows excellent electrical performance with high stability.",TRUE,noun phrase
R123,Analytical Chemistry,R140743,Flower-like Palladium Nanoclusters Decorated Graphene Electrodes for Ultrasensitive and Flexible Hydrogen Gas Sensing,S562335,R140745,Limit of detection (ppm),L394726,0.1,"Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.",TRUE,number
R123,Analytical Chemistry,R140743,Flower-like Palladium Nanoclusters Decorated Graphene Electrodes for Ultrasensitive and Flexible Hydrogen Gas Sensing,S562337,R140745,Minimum experimental range (ppm),L394728,0.1,"Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.",TRUE,number
R123,Analytical Chemistry,R139374,CuO Nanosheets for Sensitive and Selective Determination of H2S with High Recovery Ability,S555709,R139376,Maximum experimental range (ppm),L390841,1.2,"In this article, cupric oxide (CuO) leafletlike nanosheets have been synthesized by a facile, low-cost, and surfactant-free method, and they have further been successfully developed for sensitive and selective determination of hydrogen sulfide (H2S) with high recovery ability. The experimental results have revealed that the sensitivity and recovery time of the present H2S gas sensor are strongly dependent on the working temperature. The best H2S sensing performance has been achieved with a low detection limit of 2 ppb and broad linear range from 30 ppb to 1.2 ppm. The gas sensor is reversible, with a quick response time of 4 s and a short recovery time of 9 s. In addition, negligible responses can be observed exposed to 100-fold concentrations of other gases which may exist in the atmosphere such as nitrogen (N2), oxygen (O2), nitric oxide (NO), cabon monoxide (CO), nitrogen dioxide (NO2), hydrogen (H2), and so on, indicating relatively high selectivity of the present H2S sensor. The H2S sensor based on t...",TRUE,number
R20,Anatomy,R110614,Hypercholesterolemia in pregnant mice increases the susceptibility to atherosclerosis in adult life,S504058,R110616,p-value of atherosclerotic lesion sizes,L364141,0.01,"Purpose To determine the effects of hypercholesterolemia in pregnant mice on the susceptibility to atherosclerosis in adult life through a new animal modeling approach. Methods Male offspring from apoE−/− mice fed with regular (R) or high (H) cholesterol chow during pregnancy were randomly subjected to regular (Groups R–R and H–R, n = 10) or high cholesterol diet (Groups R–H and H–H, n = 10) for 14 weeks. Plasma lipid profiles were determined in all rats. The abdominal aorta was examined for the severity of atherosclerotic lesions in offspring. Results Lipids significantly increased while high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol decreased in mothers fed high cholesterol chow after delivery compared with before pregnancy (p < 0.01). Groups R–H and H–R indicated dyslipidemia and significant atherosclerotic lesions. Group H–H demonstrated the highest lipids, lowest high-density lipoprotein-cholesterol/low-density lipoprotein-cholesterol, highest incidence (90%), plaque area to luminal area ratio (0.78 ± 0.02) and intima to media ratio (1.57 ± 0.05). Conclusion Hypercholesterolemia in pregnant mice may increase susceptibility to atherosclerosis in their adult offspring.",TRUE,number
R133,Artificial Intelligence,R142185,Artificial Intelligence Applied to Breast MRI for Improved Diagnosis,S571314,R142188,AUC without AI ,L400980,0.71,"Background Recognition of salient MRI morphologic and kinetic features of various malignant tumor subtypes and benign diseases, either visually or with artificial intelligence (AI), allows radiologists to improve diagnoses that may improve patient treatment. Purpose To evaluate whether the diagnostic performance of radiologists in the differentiation of cancer from noncancer at dynamic contrast material-enhanced (DCE) breast MRI is improved when using an AI system compared with conventionally available software. Materials and Methods In a retrospective clinical reader study, images from breast DCE MRI examinations were interpreted by 19 breast imaging radiologists from eight academic and 11 private practices. Readers interpreted each examination twice. In the ""first read,"" they were provided with conventionally available computer-aided evaluation software, including kinetic maps. In the ""second read,"" they were also provided with AI analytics through computer-aided diagnosis software. Reader diagnostic performance was evaluated with receiver operating characteristic (ROC) analysis, with the area under the ROC curve (AUC) as a figure of merit in the task of distinguishing between malignant and benign lesions. The primary study end point was the difference in AUC between the first-read and the second-read conditions. Results One hundred eleven women (mean age, 52 years ± 13 [standard deviation]) were evaluated with a total of 111 breast DCE MRI examinations (54 malignant and 57 nonmalignant lesions). The average AUC of all readers improved from 0.71 to 0.76 (P = .04) when using the AI system. The average sensitivity improved when Breast Imaging Reporting and Data System (BI-RADS) category 3 was used as the cut point (from 90% to 94%; 95% confidence interval [CI] for the change: 0.8%, 7.4%) but not when using BI-RADS category 4a (from 80% to 85%; 95% CI: -0.9%, 11%). The average specificity showed no difference when using either BI-RADS category 4a or category 3 as the cut point (52% and 52% [95% CI: -7.3%, 6.0%], and from 29% to 28% [95% CI: -6.4%, 4.3%], respectively). Conclusion Use of an artificial intelligence system improves radiologists' performance in the task of differentiating benign and malignant MRI breast lesions. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Krupinski in this issue.",TRUE,number
R133,Artificial Intelligence,R142185,Artificial Intelligence Applied to Breast MRI for Improved Diagnosis,S571313,R142188,AUC with AI ,L400979,0.76,"Background Recognition of salient MRI morphologic and kinetic features of various malignant tumor subtypes and benign diseases, either visually or with artificial intelligence (AI), allows radiologists to improve diagnoses that may improve patient treatment. Purpose To evaluate whether the diagnostic performance of radiologists in the differentiation of cancer from noncancer at dynamic contrast material-enhanced (DCE) breast MRI is improved when using an AI system compared with conventionally available software. Materials and Methods In a retrospective clinical reader study, images from breast DCE MRI examinations were interpreted by 19 breast imaging radiologists from eight academic and 11 private practices. Readers interpreted each examination twice. In the ""first read,"" they were provided with conventionally available computer-aided evaluation software, including kinetic maps. In the ""second read,"" they were also provided with AI analytics through computer-aided diagnosis software. Reader diagnostic performance was evaluated with receiver operating characteristic (ROC) analysis, with the area under the ROC curve (AUC) as a figure of merit in the task of distinguishing between malignant and benign lesions. The primary study end point was the difference in AUC between the first-read and the second-read conditions. Results One hundred eleven women (mean age, 52 years ± 13 [standard deviation]) were evaluated with a total of 111 breast DCE MRI examinations (54 malignant and 57 nonmalignant lesions). The average AUC of all readers improved from 0.71 to 0.76 (P = .04) when using the AI system. The average sensitivity improved when Breast Imaging Reporting and Data System (BI-RADS) category 3 was used as the cut point (from 90% to 94%; 95% confidence interval [CI] for the change: 0.8%, 7.4%) but not when using BI-RADS category 4a (from 80% to 85%; 95% CI: -0.9%, 11%). The average specificity showed no difference when using either BI-RADS category 4a or category 3 as the cut point (52% and 52% [95% CI: -7.3%, 6.0%], and from 29% to 28% [95% CI: -6.4%, 4.3%], respectively). Conclusion Use of an artificial intelligence system improves radiologists' performance in the task of differentiating benign and malignant MRI breast lesions. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Krupinski in this issue.",TRUE,number
R133,Artificial Intelligence,R142196,Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool,S571324,R142201,AUC without AI ,L400989,0.769,"Purpose To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process. Materials and Methods In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalance design in which half of the dataset was read without AI and the other half with the help of AI during a first session and vice versa during a second session, which was separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints. Results The average AUC across readers was 0.769 (95% CI: 0.724, 0.814) without AI and 0.797 (95% CI: 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI: 0.002, 0.055, P = .035). Average sensitivity was increased by 0.033 when using AI support (P = .021). Reading time changed dependently to the AI-tool score. For low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For higher likelihood of malignancy, the reading time was on average increased with the use of AI. Conclusion This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow.Supplemental material is available for this article.© RSNA, 2020.",TRUE,number
R133,Artificial Intelligence,R142196,Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool,S571325,R142201,AUC with AI ,L400990,0.797,"Purpose To evaluate the benefits of an artificial intelligence (AI)-based tool for two-dimensional mammography in the breast cancer detection process. Materials and Methods In this multireader, multicase retrospective study, 14 radiologists assessed a dataset of 240 digital mammography images, acquired between 2013 and 2016, using a counterbalance design in which half of the dataset was read without AI and the other half with the help of AI during a first session and vice versa during a second session, which was separated from the first by a washout period. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were assessed as endpoints. Results The average AUC across readers was 0.769 (95% CI: 0.724, 0.814) without AI and 0.797 (95% CI: 0.754, 0.840) with AI. The average difference in AUC was 0.028 (95% CI: 0.002, 0.055, P = .035). Average sensitivity was increased by 0.033 when using AI support (P = .021). Reading time changed dependently to the AI-tool score. For low likelihood of malignancy (< 2.5%), the time was about the same in the first reading session and slightly decreased in the second reading session. For higher likelihood of malignancy, the reading time was on average increased with the use of AI. Conclusion This clinical investigation demonstrated that the concurrent use of this AI tool improved the diagnostic performance of radiologists in the detection of breast cancer without prolonging their workflow.Supplemental material is available for this article.© RSNA, 2020.",TRUE,number
R133,Artificial Intelligence,R142180,Detection and Diagnosis of Breast Cancer Using Artificial Intelligence Based Assessment of Maximum Intensity Projection Dynamic Contrast-Enhanced Magnetic Resonance Images,S571294,R142184,AUC without AI ,L400975,0.884,"We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions of maximum intensity projection (MIP) in dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI for training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect and calculate possibilities of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data with and without the assistance of the AI system for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI using a cutoff value of 2%, respectively. The AI system showed better diagnostic performance compared to the human readers (p = 0.002), and because of the increased performance of human readers with the assistance of the AI system, the AUC of human readers was significantly higher with than without the AI system (p = 0.039). Our AI system showed a high performance ability in detecting and diagnosing lesions in MIPs of DCE breast MRI and increased the diagnostic performance of human readers.",TRUE,number
R133,Artificial Intelligence,R142180,Detection and Diagnosis of Breast Cancer Using Artificial Intelligence Based Assessment of Maximum Intensity Projection Dynamic Contrast-Enhanced Magnetic Resonance Images,S571293,R142184,AUC with AI ,L400974,0.899,"We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions of maximum intensity projection (MIP) in dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI for training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained to detect and calculate possibilities of malignancy using RetinaNet. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data with and without the assistance of the AI system for the possibility of a malignancy in each breast. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI using a cutoff value of 2%, respectively. The AI system showed better diagnostic performance compared to the human readers (p = 0.002), and because of the increased performance of human readers with the assistance of the AI system, the AUC of human readers was significantly higher with than without the AI system (p = 0.039). Our AI system showed a high performance ability in detecting and diagnosing lesions in MIPs of DCE breast MRI and increased the diagnostic performance of human readers.",TRUE,number
R133,Artificial Intelligence,R74026,Task 11 at SemEval-2021: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph,S340703,R74036,Pipelined Triples Extraction Performance F-score,L245355,22.28,"There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. ‘the NCG task’) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article’s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",TRUE,number
R133,Artificial Intelligence,R74026,Task 11 at SemEval-2021: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph,S340701,R74036,Pipelined Phrases Extraction Performance F-score,L245353,46.4,"There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. ‘the NCG task’) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article’s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",TRUE,number
R133,Artificial Intelligence,R74026,Task 11 at SemEval-2021: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph,S343291,R74036,Sentences Extraction Performance F-score,L246998,57.27,"There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. ‘the NCG task’) tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article’s contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, as conclusion to the article, the difficulty of producing such data and as a consequence of modeling it is highlighted.",TRUE,number
R133,Artificial Intelligence,R146694,A Robust and Real-Time Capable Envelope-Based Algorithm for Heart Sound Classification: Validation under Different Physiological Conditions,S587342,R146698,Average F1-Score,L409123,90.5,"This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score up to 95.7% and in average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.",TRUE,number
R133,Artificial Intelligence,R146694,A Robust and Real-Time Capable Envelope-Based Algorithm for Heart Sound Classification: Validation under Different Physiological Conditions,S587341,R146698,F1-score,L409122,95.7,"This paper proposes a robust and real-time capable algorithm for classification of the first and second heart sounds. The classification algorithm is based on the evaluation of the envelope curve of the phonocardiogram. For the evaluation, in contrast to other studies, measurements on 12 probands were conducted in different physiological conditions. Moreover, for each measurement the auscultation point, posture and physical stress were varied. The proposed envelope-based algorithm is tested with two different methods for envelope curve extraction: the Hilbert transform and the short-time Fourier transform. The performance of the classification of the first heart sounds is evaluated by using a reference electrocardiogram. Overall, by using the Hilbert transform, the algorithm has a better performance regarding the F1-score and computational effort. The proposed algorithm achieves for the S1 classification an F1-score up to 95.7% and in average 90.5%. The algorithm is robust against the age, BMI, posture, heart rate and auscultation point (except measurements on the back) of the subjects.",TRUE,number
R133,Artificial Intelligence,R146689,A Novel Method for Measuring the Timing of Heart Sound Components through Digital Phonocardiography,S587324,R146693,Accuracy,L409110,99.2,"The auscultation of heart sounds has been for decades a fundamental diagnostic tool in clinical practice. Higher effectiveness can be achieved by recording the corresponding biomedical signal, namely the phonocardiographic signal, and processing it by means of traditional signal processing techniques. An unavoidable processing step is the heart sound segmentation, which is still a challenging task from a technical viewpoint—a limitation of state-of-the-art approaches is the unavailability of trustworthy techniques for the detection of heart sound components. The aim of this work is to design a reliable algorithm for the identification and the classification of heart sounds’ main components. The proposed methodology was tested on a sample population of 24 healthy subjects over 10-min-long simultaneous electrocardiographic and phonocardiographic recordings and it was found capable of correctly detecting and classifying an average of 99.2% of the heart sounds along with their components. Moreover, the delay of each component with respect to the corresponding R-wave peak and the delay among the components of the same heart sound were computed: the resulting experimental values are coherent with what is expected from the literature and what was obtained by other studies.",TRUE,number
R175,"Atomic, Molecular and Optical Physics",R108954,Ultraviolet/vacuum-ultraviolet emission from a high power magnetron sputtering plasma with an aluminum target,S568519,R141763,Pressure,L399078,0.5,"We report the in situ measurement of the ultraviolet/vacuum-ultraviolet (UV/VUV) emission from a plasma produced by high power impulse magnetron sputtering with aluminum target, using argon as background gas. The UV/VUV detection system is based upon the quantification of the re-emitted fluorescence from a sodium salicylate layer that is placed in a housing inside the vacuum chamber, at 11 cm from the center of the cathode. The detector is equipped with filters that allow for differentiating various spectral regions, and with a front collimating tube that provides a spatial resolution ≈ 0.5 cm. Using various views of the plasma, the measured absolutely calibrated photon rates enable to calculate emissivities and irradiances based on a model of the ionization region. We present results that demonstrate that Al++ ions are responsible for most of the VUV irradiance. We also discuss the photoelectric emission due to irradiances on the target ~ 2×1018 s-1 cm-2 produced by high energy photons from resonance lines of Ar+.",TRUE,number
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536118,R135550,AUC,L378119,0.941,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,number
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536124,R135550,Classifier accuracy,L378125,86.2,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,number
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S536117,R135491,Classifier accuracy,L378118,88.25,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,number
R104,Bioinformatics,R135546,Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks,S536116,R135550,F1-score,L378117,88.6,"Although automated Acute Lymphoblastic Leukemia (ALL) detection is essential, it is challenging due to the morphological correlation between malignant and normal cells. The traditional ALL classification strategy is arduous, time-consuming, often suffers inter-observer variations, and necessitates experienced pathologists. This article has automated the ALL detection task, employing deep Convolutional Neural Networks (CNNs). We explore the weighted ensemble of deep CNNs to recommend a better ALL cell classifier. The weights are estimated from ensemble candidates' corresponding metrics, such as accuracy, F1-score, AUC, and kappa values. Various data augmentations and pre-processing are incorporated for achieving a better generalization of the network. We train and evaluate the proposed model utilizing the publicly available C-NMC-2019 ALL dataset. Our proposed weighted ensemble model has outputted a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 in the preliminary test set. The qualitative results displaying the gradient class activation maps confirm that the introduced model has a concentrated learned region. In contrast, the ensemble candidate models, such as Xception, VGG-16, DenseNet-121, MobileNet, and InceptionResNet-V2, separately produce coarse and scatter learned areas for most example cases. Since the proposed ensemble yields a better result for the aimed task, it can experiment in other domains of medical diagnostic applications.",TRUE,number
R137665,Coating and Surface Technology,R178362,Clay-Based Nanocomposite Coating for Flexible Optoelectronics Applying Commercial Polymers,S699610,R178364,"Coating thickness, micrometers",L470892,1.5,"Transparency, flexibility, and especially ultralow oxygen (OTR) and water vapor (WVTR) transmission rates are the key issues to be addressed for packaging of flexible organic photovoltaics and organic light-emitting diodes. Concomitant optimization of all essential features is still a big challenge. Here we present a thin (1.5 μm), highly transparent, and at the same time flexible nanocomposite coating with an exceptionally low OTR and WVTR (1.0 × 10(-2) cm(3) m(-2) day(-1) bar(-1) and <0.05 g m(-2) day(-1) at 50% RH, respectively). A commercially available polyurethane (Desmodur N 3600 and Desmophen 670 BA, Bayer MaterialScience AG) was filled with a delaminated synthetic layered silicate exhibiting huge aspect ratios of about 25,000. Functional films were prepared by simple doctor-blading a suspension of the matrix and the organophilized clay. This preparation procedure is technically benign, is easy to scale up, and may readily be applied for encapsulation of sensitive flexible electronics.",TRUE,number
R142,Earth Sciences,R160558,Classification of Iowa wetlands using an airborne hyperspectral image: a comparison of the spectral angle mapper classifier and an object-oriented approach,S640313,R160560,SAM Accuracy (%),L438331,63.53,"Wetlands mapping using multispectral imagery from Landsat multispectral scanner (MSS) and thematic mapper (TM) and Système pour l'observation de la Terre (SPOT) does not in general provide high classification accuracies because of poor spectral and spatial resolutions. This study tests the feasibility of using high-resolution hyperspectral imagery to map wetlands in Iowa with two nontraditional classification techniques: the spectral angle mapper (SAM) method and a new nonparametric object-oriented (OO) classification. The software programs used were ENVI and eCognition. Accuracies of these classified images were assessed by using the information collected through a field survey with a global positioning system and high-resolution color infrared images. Wetlands were identified more accurately with the OO method (overall accuracy 92.3%) than with SAM (63.53%). This paper also discusses the limitations of these classification techniques for wetlands, as well as discussing future directions for study.",TRUE,number
R142,Earth Sciences,R160571,Performance of Spectral Angle Mapper and Parallelepiped Classifiers in Agriculture Hyperspectral Image,S640359,R160573,SAM Accuracy (%),L438361,66.67,"Hyperspectral Imaging (HSI) is used to provide a wealth of information which can be used to address a variety of problems in different applications. The main requirement in all applications is the classification of HSI data. In this paper, supervised HSI classification algorithms are used to extract agriculture areas that specialize in wheat growing and get a classified image. In particular, Parallelepiped and Spectral Angel Mapper (SAM) algorithms are used. They are implemented by a software tool used to analyse and process geospatial images that is an Environment of Visualizing Images (ENVI). They are applied on Al-Kharj, Saudi Arabia as the study area. The overall accuracy after applying the algorithms on the image of the study area for SAM classification was 66.67%, and 33.33% for Parallelepiped classification. Therefore, SAM algorithm has provided a better a study area image classification.",TRUE,number
R142,Earth Sciences,R144024,Raman spectroscopy of the borosilicate mineral ferroaxinite,S576470,R144026,ferroaxinite (OH),L403652,3376,"Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the ν4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the ν2 bending modes of the (SiO4)2-.",TRUE,number
R142,Earth Sciences,R144034,Raman spectroscopy of the joaquinite minerals,S576508,R144036,Joaquinite (OH),L403682,3600,"Selected joaquinite minerals have been studied by Raman spectroscopy. The minerals are categorised into two groups depending upon whether bands occur in the 3250 to 3450 cm−1 region and in the 3450 to 3600 cm−1 region, or in the latter region only. The first set of bands is attributed to water stretching vibrations and the second set to OH stretching bands. In the literature, X-ray diffraction could not identify the presence of OH units in the structure of joaquinite. Raman spectroscopy proves that the joaquinite mineral group contains OH units in their structure, and in some cases both water and OH units. A series of bands at 1123, 1062, 1031, 971, 912 and 892 cm−1 are assigned to SiO stretching vibrations. Bands above 1000 cm−1 are attributable to the νas modes of the (SiO4)4− and (Si2O7)6− units. Bands that are observed at 738, around 700, 682 and around 668, 621 and 602 cm−1 are attributed to OSiO bending modes. The patterns do not appear to match the published infrared spectral patterns of either (SiO4)4− or (Si2O7)6− units. The reason is attributed to the actual formulation of the joaquinite mineral, in which significant amounts of Ti or Nb and Fe are found. Copyright © 2007 John Wiley & Sons, Ltd.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S582642,R145511,nearest neighbor distance lower limit,L406935,0.2,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S556388,R139502,nearest neighbor distance lower limit,L391191,0.4,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. 
From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146938,Evaluation of DNA barcoding and identification of new haplomorphs in Canadian deerflies and horseflies,S588427,R146940,Maximum intraspecific distances averaged,L409692,0.49,"This paper reports the first tests of the suitability of the standardized mitochondrial cytochrome c oxidase subunit I (COI) barcoding system for the identification of Canadian deerflies and horseflies. Two additional mitochondrial molecular markers were used to determine whether unambiguous species recognition in tabanids can be achieved. Our 332 Canadian tabanid samples yielded 650 sequences from five genera and 42 species. Standard COI barcodes demonstrated a strong A + T bias (mean 68.1%), especially at third codon positions (mean 93.0%). Our preliminary test of this system showed that the standard COI barcode worked well for Canadian Tabanidae: the target DNA can be easily recovered from small amounts of insect tissue and aligned for all tabanid taxa. Each tabanid species possessed distinctive sets of COI haplotypes which discriminated well among species. Average conspecific Kimura two‐parameter (K2P) divergence (0.49%) was 12 times lower than the average divergence within species. Both the neighbour‐joining and the Bayesian methods produced trees with identical monophyletic species groups. Two species, Chrysops dawsoni Philip and Chrysops montanus Osten Sacken (Diptera: Tabanidae), showed relatively deep intraspecific sequence divergences (∼10 times the average) for all three mitochondrial gene regions analysed. We suggest provisional differentiation of Ch. montanus into two haplotypes, namely, Ch. montanus haplomorph 1 and Ch. montanus haplomorph 2, both defined by their molecular sequences and by newly discovered differences in structural features near their ocelli.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S582638,R145511,Maximum intraspecific distances averaged,L406931,0.5,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146932,DNA barcodes reveal cryptic genetic diversity within the blackfly subgenus Trichodagmia Enderlein (Diptera: Simuliidae: Simulium) and related taxa in the New World,S588387,R146934,Maximum intraspecific distances averaged,L409669,0.5,"In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8–19.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0–1.2%). Higher values of genetic divergence (3.2–3.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. 
The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in ‘weak’ Carnoy’s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R108983,"Barcoding the butterflies of southern South America: Species delimitation efficacy, cryptic diversity and geographic patterns of divergence",S497104,R108986,Maximum intraspecific distances averaged,L359910,0.69,"Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%–9% higher than currently recognized. 
Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142471,DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits,S572357,R142473,nearest neighbor distance lower limit,L401533,0.77,"Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate “species” diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). 
Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145296,Molecular identification of mosquitoes (Diptera: Culicidae) in southeastern Australia,S581585,R145298,Maximum intraspecific distances averaged,L406367,0.8,"Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species ─ Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes ─ were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p‐distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. 
The DNA barcodes produced by this study have been uploaded to the ‘Mosquitoes of Australia–Victoria’ project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S556384,R139502,Maximum intraspecific distances averaged,L391187,1.2,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. 
From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies and highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R136201,DNA barcode analysis of butterfly species from Pakistan points towards regional endemism,S539101,R136203,Maximum intraspecific distances upper limit,L379761,1.6,"DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north‐central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7–14.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour‐joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R136201,DNA barcode analysis of butterfly species from Pakistan points towards regional endemism,S539103,R136203,nearest neighbor distance lower limit,L379763,1.7,"DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north‐central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7–14.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour‐joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145304,Analyzing Mosquito (Diptera: Culicidae) Diversity in Pakistan by DNA Barcoding,S581663,R145305,nearest neighbor distance lower limit,L406427,2.3,"Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010–2013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0–2.4%, while congeneric species showed from 2.3–17.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145304,Analyzing Mosquito (Diptera: Culicidae) Diversity in Pakistan by DNA Barcoding,S581661,R145305,Maximum intraspecific distances upper limit,L406425,2.4,"Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010–2013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0–2.4%, while congeneric species showed from 2.3–17.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146932,DNA barcodes reveal cryptic genetic diversity within the blackfly subgenus Trichodagmia Enderlein (Diptera: Simuliidae: Simulium) and related taxa in the New World,S588385,R146934,nearest neighbor distance lower limit,L409667,2.8,"In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8–19.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0–1.2%). Higher values of genetic divergence (3.2–3.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in ‘weak’ Carnoy’s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145506,Identification of Nearctic black flies using DNA barcodes (Diptera: Simuliidae),S582608,R145508,nearest neighbor distance lower limit,L406916,2.83,"DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615‐bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83–15.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0–3.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58–6.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146932,DNA barcodes reveal cryptic genetic diversity within the blackfly subgenus Trichodagmia Enderlein (Diptera: Simuliidae: Simulium) and related taxa in the New World,S588389,R146934,Maximum intraspecific distances upper limit,L409671,3.7,"In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8–19.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0–1.2%). Higher values of genetic divergence (3.2–3.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in ‘weak’ Carnoy’s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S582640,R145511,Maximum intraspecific distances upper limit,L406933,3.9,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S556387,R139502,nearest neighbor distance averaged,L391190,4.89,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies an highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145468,DNA barcoding of Neotropical black flies (Diptera: Simuliidae): Species identification and discovery of cryptic diversity in Mesoamerica,S582314,R145470,Maximum intraspecific distances upper limit,L406766,6.13,"Although correct taxonomy is paramount for disease control programs and epidemiological studies, morphology-based taxonomy of black flies is extremely difficult. In the present study, the utility of a partial sequence of the COI gene, the DNA barcoding region, for the identification of species of black flies from Mesoamerica was assessed. A total of 32 morphospecies were analyzed, one belonging to the genus Gigantodax and 31 species to the genus Simulium and six of its subgenera (Aspathia, Eusimulium, Notolepria, Psaroniocompsa, Psilopelmia, Trichodagmia). The Neighbour Joining tree (NJ) derived from the DNA barcodes grouped most specimens according to species or species groups recognized by morphotaxonomic studies. Intraspecific sequence divergences within morphologically distinct species ranged from 0.07% to 1.65%, while higher divergences (2.05%-6.13%) in species complexes suggested the presence of cryptic diversity. The existence of well-defined groups within S. callidum (Dyar & Shannon), S. quadrivittatum Loew, and S. samboni Jennings revealed the likely inclusion of cryptic species within these taxa. In addition, the suspected presence of sibling species within S. paynei Vargas and S. tarsatum Macquart was supported. DNA barcodes also showed that specimens of species that are difficult to delimit morphologically such as S. callidum, S. pseudocallidum Díaz Nájera, S. travisi Vargas, Vargas & Ramírez-Pérez, relatives of the species complexes such as S. metallicum Bellardi s.l. (e.g., S. horacioi Okazawa & Onishi, S. jobbinsi Vargas, Martínez Palacios, Díaz Nájera, and S. puigi Vargas, Martínez Palacios & Díaz Nájera), and S. virgatum Coquillett complex (e.g., S. paynei and S. tarsatum) grouped together in the NJ analysis, suggesting they represent valid species. DNA barcoding combined with a sound morphotaxonomic framework provided an effective approach for the identification of medically important black flies species in Mesoamerica and for the discovery of hidden diversity within this group.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139508,Close congruence between Barcode Index Numbers (bins) and species boundaries in the Erebidae (Lepidoptera: Noctuoidea) of the Iberian Peninsula,S556426,R139510,nearest neighbor distance averaged,L391214,6.4,"Abstract The DNA barcode reference library for Lepidoptera holds much promise as a tool for taxonomic research and for providing the reliable identifications needed for conservation assessment programs. We gathered sequences for the barcode region of the mitochondrial cytochrome c oxidase subunit I gene from 160 of the 176 nominal species of Erebidae moths (Insecta: Lepidoptera) known from the Iberian Peninsula. These results arise from a research project which constructing a DNA barcode library for the insect species of Spain. New records for 271 specimens (122 species) are coupled with preexisting data for 38 species from the Iberian fauna. Mean interspecific distance was 12.1%, while the mean nearest neighbour divergence was 6.4%. All 160 species possessed diagnostic barcode sequences, but one pair of congeneric taxa (Eublemma rosea and Eublemma rietzi) were assigned to the same BIN. As well, intraspecific sequence divergences higher than 1.5% were detected in four species which likely represent species complexes. This study reinforces the effectiveness of DNA barcoding as a tool for monitoring biodiversity in particular geographical areas and the strong correspondence between sequence clusters delineated by BINs and species recognized through detailed taxonomic analysis.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145506,Identification of Nearctic black flies using DNA barcodes (Diptera: Simuliidae),S582603,R145508,Maximum intraspecific distances upper limit,L406911,6.5,"DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615‐bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83–15.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0–3.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58–6.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R108983,"Barcoding the butterflies of southern South America: Species delimitation efficacy, cryptic diversity and geographic patterns of divergence",S497105,R108986,nearest neighbor distance averaged,L359911,6.91,"Because the tropical regions of America harbor the highest concentration of butterfly species, its fauna has attracted considerable attention. Much less is known about the butterflies of southern South America, particularly Argentina, where over 1,200 species occur. To advance understanding of this fauna, we assembled a DNA barcode reference library for 417 butterfly species of Argentina, focusing on the Atlantic Forest, a biodiversity hotspot. We tested the efficacy of this library for specimen identification, used it to assess the frequency of cryptic species, and examined geographic patterns of genetic variation, making this study the first large-scale genetic assessment of the butterflies of southern South America. The average sequence divergence to the nearest neighbor (i.e. minimum interspecific distance) was 6.91%, ten times larger than the mean distance to the furthest conspecific (0.69%), with a clear barcode gap present in all but four of the species represented by two or more specimens. As a consequence, the DNA barcode library was extremely effective in the discrimination of these species, allowing a correct identification in more than 95% of the cases. Singletons (i.e. species represented by a single sequence) were also distinguishable in the gene trees since they all had unique DNA barcodes, divergent from those of the closest non-conspecific. The clustering algorithms implemented recognized from 416 to 444 barcode clusters, suggesting that the actual diversity of butterflies in Argentina is 3%–9% higher than currently recognized. Furthermore, our survey added three new records of butterflies for the country (Eurema agave, Mithras hannelore, Melanis hillapana). In summary, this study not only supported the utility of DNA barcoding for the identification of the butterfly species of Argentina, but also highlighted several cases of both deep intraspecific and shallow interspecific divergence that should be studied in more detail.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145296,Molecular identification of mosquitoes (Diptera: Culicidae) in southeastern Australia,S581588,R145298,nearest neighbor distance averaged,L406370,7.6,"Abstract DNA barcoding is a modern species identification technique that can be used to distinguish morphologically similar species, and is particularly useful when using small amounts of starting material from partial specimens or from immature stages. In order to use DNA barcoding in a surveillance program, a database containing mosquito barcode sequences is required. This study obtained Cytochrome Oxidase I (COI) sequences for 113 morphologically identified specimens, representing 29 species, six tribes and 12 genera; 17 of these species have not been previously barcoded. Three of the 29 species ─ Culex palpalis, Macleaya macmillani, and an unknown species originally identified as Tripteroides atripes ─ were initially misidentified as they are difficult to separate morphologically, highlighting the utility of DNA barcoding. While most species grouped separately (reciprocally monophyletic), the Cx. pipiens subgroup could not be genetically separated using COI. The average conspecific and congeneric p‐distance was 0.8% and 7.6%, respectively. In our study, we also demonstrate the utility of DNA barcoding in distinguishing exotics from endemic mosquitoes by identifying a single intercepted Stegomyia aegypti egg at an international airport. The use of DNA barcoding dramatically reduced the identification time required compared with rearing specimens through to adults, thereby demonstrating the value of this technique in biosecurity surveillance. The DNA barcodes produced by this study have been uploaded to the ‘Mosquitoes of Australia–Victoria’ project on the Barcode of Life Database (BOLD), which will serve as a resource for the Victorian Arbovirus Disease Control Program and other national and international mosquito surveillance programs.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R139497,Congruence between morphology-based species and Barcode Index Numbers (BINs) in Neotropical Eumaeini (Lycaenidae),S556386,R139502,Maximum intraspecific distances upper limit,L391189,8.4,"Background With about 1,000 species in the Neotropics, the Eumaeini (Theclinae) are one of the most diverse butterfly tribes. Correct morphology-based identifications are challenging in many genera due to relatively little interspecific differences in wing patterns. Geographic infraspecific variation is sometimes more substantial than variation between species. In this paper we present a large DNA barcode dataset of South American Lycaenidae. We analyze how well DNA barcode BINs match morphologically delimited species. Methods We compare morphology-based species identifications with the clustering of molecular operational taxonomic units (MOTUs) delimitated by the RESL algorithm in BOLD, which assigns Barcode Index Numbers (BINs). We examine intra- and interspecific divergences for genera represented by at least four morphospecies. We discuss the existence of local barcode gaps in a genus by genus analysis. We also note differences in the percentage of species with barcode gaps in groups of lowland and high mountain genera. Results We identified 2,213 specimens and obtained 1,839 sequences of 512 species in 90 genera. Overall, the mean intraspecific divergence value of CO1 sequences was 1.20%, while the mean interspecific divergence between nearest congeneric neighbors was 4.89%, demonstrating the presence of a barcode gap. However, the gap seemed to disappear from the entire set when comparing the maximum intraspecific distance (8.40%) with the minimum interspecific distance (0.40%). Clear barcode gaps are present in many genera but absent in others. From the set of specimens that yielded COI fragment lengths of at least 650 bp, 75% of the a priori morphology-based identifications were unambiguously assigned to a single Barcode Index Number (BIN). However, after a taxonomic a posteriori review, the percentage of matched identifications rose to 85%. BIN splitting was observed for 17% of the species and BIN sharing for 9%. We found that genera that contain primarily lowland species show higher percentages of local barcode gaps and congruence between BINs and morphology than genera that contain exclusively high montane species. The divergence values to the nearest neighbors were significantly lower in high Andean species while the intra-specific divergence values were significantly lower in the lowland species. These results raise questions regarding the causes of observed low inter and high intraspecific genetic variation. We discuss incomplete lineage sorting and hybridization as most likely causes of this phenomenon, as the montane species concerned are relatively young and hybridization is probable. The release of our data set represents an essential baseline for a reference library for biological assessment studies of butterflies in mega diverse countries using modern high-throughput technologies an highlights the necessity of taxonomic revisions for various genera combining both molecular and morphological data.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145491,DNA barcoding of tropical black flies (Diptera: Simuliidae) of Thailand,S582456,R145493,Maximum intraspecific distances upper limit,L406832,9.27,"The ecological and medical importance of black flies drives the need for rapid and reliable identification of these minute, structurally uniform insects. We assessed the efficiency of DNA barcoding for species identification of tropical black flies. A total of 351 cytochrome c oxidase subunit 1 sequences were obtained from 41 species in six subgenera of the genus Simulium in Thailand. Despite high intraspecific genetic divergence (mean = 2.00%, maximum = 9.27%), DNA barcodes provided 96% correct identification. Barcodes also differentiated cytoforms of selected species complexes, albeit with varying levels of success. Perfect differentiation was achieved for two cytoforms of Simulium feuerborni, and 91% correct identification was obtained for the Simulium angulistylum complex. Low success (33%), however, was obtained for the Simulium siamense complex. The differential efficiency of DNA barcodes to discriminate cytoforms was attributed to different levels of genetic structure and demographic histories of the taxa. DNA barcode trees were largely congruent with phylogenies based on previous molecular, chromosomal and morphological analyses, but revealed inconsistencies that will require further evaluation.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S582641,R145511,nearest neighbor distance averaged,L406934,10.4,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146932,DNA barcodes reveal cryptic genetic diversity within the blackfly subgenus Trichodagmia Enderlein (Diptera: Simuliidae: Simulium) and related taxa in the New World,S588384,R146934,nearest neighbor distance averaged,L409666,11.2,"In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8–19.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0–1.2%). Higher values of genetic divergence (3.2–3.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in ‘weak’ Carnoy’s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142471,DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits,S572358,R142473,nearest neighbor distance upper limit,L401534,11.33,"Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate “species” diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R136201,DNA barcode analysis of butterfly species from Pakistan points towards regional endemism,S539104,R136203,nearest neighbor distance upper limit,L379764,14.3,"DNA barcodes were obtained for 81 butterfly species belonging to 52 genera from sites in north‐central Pakistan to test the utility of barcoding for their identification and to gain a better understanding of regional barcode variation. These species represent 25% of the butterfly fauna of Pakistan and belong to five families, although the Nymphalidae were dominant, comprising 38% of the total specimens. Barcode analysis showed that maximum conspecific divergence was 1.6%, while there was 1.7–14.3% divergence from the nearest neighbour species. Barcode records for 55 species showed <2% sequence divergence to records in the Barcode of Life Data Systems (BOLD), but only 26 of these cases involved specimens from neighbouring India and Central Asia. Analysis revealed that most species showed little incremental sequence variation when specimens from other regions were considered, but a threefold increase was noted in a few cases. There was a clear gap between maximum intraspecific and minimum nearest neighbour distance for all 81 species. Neighbour‐joining cluster analysis showed that members of each species formed a monophyletic cluster with strong bootstrap support. The barcode results revealed two provisional species that could not be clearly linked to known taxa, while 24 other species gained their first coverage. Future work should extend the barcode reference library to include all butterfly species from Pakistan as well as neighbouring countries to gain a better understanding of regional variation in barcode sequences in this topographically and climatically complex region.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145506,Identification of Nearctic black flies using DNA barcodes (Diptera: Simuliidae),S582604,R145508,nearest neighbor distance averaged,L406912,14.93,"DNA barcoding has gained increased recognition as a molecular tool for species identification in various groups of organisms. In this preliminary study, we tested the efficacy of a 615‐bp fragment of the cytochrome c oxidase I (COI) as a DNA barcode in the medically important family Simuliidae, or black flies. A total of 65 (25%) morphologically distinct species and sibling species in species complexes of the 255 recognized Nearctic black fly species were used to create a preliminary barcode profile for the family. Genetic divergence among congeners averaged 14.93% (range 2.83–15.33%), whereas intraspecific genetic divergence between morphologically distinct species averaged 0.72% (range 0–3.84%). DNA barcodes correctly identified nearly 100% of the morphologically distinct species (87% of the total sampled taxa), whereas in species complexes (13% of the sampled taxa) maximum values of divergence were comparatively higher (max. 4.58–6.5%), indicating cryptic diversity. The existence of sibling species in Prosimulium travisi and P. neomacropyga was also demonstrated, thus confirming previous cytological evidence about the existence of such cryptic diversity in these two taxa. We conclude that DNA barcoding is an effective method for species identification and discovery of cryptic diversity in black flies.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145509,Identifying Canadian mosquito species through DNA barcodes,S582643,R145511,nearest neighbor distance upper limit,L406936,17.2,"Abstract A short fragment of mt DNA from the cytochrome c oxidase 1 (CO1) region was used to provide the first CO1 barcodes for 37 species of Canadian mosquitoes (Diptera: Culicidae) from the provinces Ontario and New Brunswick. Sequence variation was analysed in a 617‐bp fragment from the 5′ end of the CO1 region. Sequences of each mosquito species formed barcode clusters with tight cohesion that were usually clearly distinct from those of allied species. CO1 sequence divergences were, on average, nearly 20 times higher for congeneric species than for members of a species; divergences between congeneric species averaged 10.4% (range 0.2–17.2%), whereas those for conspecific individuals averaged 0.5% (range 0.0–3.9%).",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145304,Analyzing Mosquito (Diptera: Culicidae) Diversity in Pakistan by DNA Barcoding,S581664,R145305,nearest neighbor distance upper limit,L406428,17.8,"Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010–2013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0–2.4%, while congeneric species showed from 2.3–17.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145434,"DNA Barcoding of Neotropical Sand Flies (Diptera, Psychodidae, Phlebotominae): Species Identification and Discovery within Brazil",S582151,R145435,nearest neighbor distance upper limit,L406696,19.04,"DNA barcoding has been an effective tool for species identification in several animal groups. Here, we used DNA barcoding to discriminate between 47 morphologically distinct species of Brazilian sand flies. DNA barcodes correctly identified approximately 90% of the sampled taxa (42 morphologically distinct species) using clustering based on neighbor-joining distance, of which four species showed comparatively higher maximum values of divergence (range 4.23–19.04%), indicating cryptic diversity. The DNA barcodes also corroborated the resurrection of two species within the shannoni complex and provided an efficient tool to differentiate between morphologically indistinguishable females of closely related species. Taken together, our results validate the effectiveness of DNA barcoding for species identification and the discovery of cryptic diversity in sand flies from Brazil.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R146932,DNA barcodes reveal cryptic genetic diversity within the blackfly subgenus Trichodagmia Enderlein (Diptera: Simuliidae: Simulium) and related taxa in the New World,S588386,R146934,nearest neighbor distance upper limit,L409668,19.5,"In this paper we investigate the utility of the COI DNA barcoding region for species identification and for revealing hidden diversity within the subgenus Trichodagmia and related taxa in the New World. In total, 24 morphospecies within the current expanded taxonomic concept of Trichodagmia were analyzed. Three species in the subgenus Aspathia and 10 species in the subgenus Simulium s.str. were also included in the analysis because of their putative phylogenetic relationship with Trichodagmia. In the Neighbour Joining analysis tree (NJ) derived from the DNA barcodes most of the specimens grouped together according to species or species groups as recognized by other morphotaxonomic studies. The interspecific genetic divergence averaged 11.2% (range 2.8–19.5%), whereas intraspecific genetic divergence within morphologically distinct species averaged 0.5% (range 0–1.2%). Higher values of genetic divergence (3.2–3.7%) in species complexes suggest the presence of cryptic diversity. The existence of well defined groups within S. piperi, S. duodenicornium, S. canadense and S. rostratum indicate the possible presence of cryptic species within these taxa. Also, the suspected presence of a sibling species in S. tarsatum and S. paynei is supported. DNA barcodes also showed that specimens from species that were taxonomically difficult to delimit such as S. hippovorum, S. rubrithorax, S. paynei, and other related taxa (S. solarii), grouped together in the NJ analysis, confirming the validity of their species status. The recovery of partial barcodes from specimens in collections was time consuming and PCR success was low from specimens more than 10 years old. However, when a sequence was obtained, it provided good resolution for species identification. Larvae preserved in ‘weak’ Carnoy’s solution (9:1 ethanol:acetic acid) provided full DNA barcodes. Adding legs directly to the PCR mix from recently collected and preserved adults was an inexpensive, fast methodology to obtain full barcodes. In summary, DNA barcoding combined with a sound morphotaxonomic framework provides an effective approach for the delineation of species and for the discovery of hidden diversity in the subgenus Trichodagmia.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145554,Identifying the Main Mosquito Species in China Based on DNA Barcoding,S582889,R145555,nearest neighbor distance upper limit,L407102,21.8,"Mosquitoes are insects of the Diptera, Nematocera, and Culicidae families, some species of which are important disease vectors. Identifying mosquito species based on morphological characteristics is difficult, particularly the identification of specimens collected in the field as part of disease surveillance programs. Because of this difficulty, we constructed DNA barcodes of the cytochrome c oxidase subunit 1, the COI gene, for the more common mosquito species in China, including the major disease vectors. A total of 404 mosquito specimens were collected and assigned to 15 genera and 122 species and subspecies on the basis of morphological characteristics. Individuals of the same species grouped closely together in a Neighborhood-Joining tree based on COI sequence similarity, regardless of collection site. COI gene sequence divergence was approximately 30 times higher for species in the same genus than for members of the same species. Divergence in over 98% of congeneric species ranged from 2.3% to 21.8%, whereas divergence in conspecific individuals ranged from 0% to 1.67%. Cryptic species may be common and a few pseudogenes were detected.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629317,R156994,Number of identified species with current taxonomy,L432490,3565,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629308,R156994,higher number estimated species,L432485,3816,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R138551,Probing planetary biodiversity with DNA barcodes: The Noctuoidea of North America,S629313,R156994,No. of estimated species,L432487,3816,"This study reports the assembly of a DNA barcode reference library for species in the lepidopteran superfamily Noctuoidea from Canada and the USA. Based on the analysis of 69,378 specimens, the library provides coverage for 97.3% of the noctuoid fauna (3565 of 3664 species). In addition to verifying the strong performance of DNA barcodes in the discrimination of these species, the results indicate close congruence between the number of species analyzed (3565) and the number of sequence clusters (3816) recognized by the Barcode Index Number (BIN) system. Distributional patterns across 12 North American ecoregions are examined for the 3251 species that have GPS data while BIN analysis is used to quantify overlap between the noctuoid faunas of North America and other zoogeographic regions. This analysis reveals that 90% of North American noctuoids are endemic and that just 7.5% and 1.8% of BINs are shared with the Neotropics and with the Palearctic, respectively. One third (29) of the latter species are recent introductions and, as expected, they possess low intraspecific divergences.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140252,Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera,S628596,R156759,lower number estimated species,L432191,4977,"The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service “Monophylizer” to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is ∼23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric—conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140252,Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera,S628598,R156759,No. of estimated species,L432192,4977,"The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service “Monophylizer” to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is ∼23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric—conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.",TRUE,number
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R140252,Species-Level Para- and Polyphyly in DNA Barcode Gene Trees: Strong Operational Bias in European Lepidoptera,S628602,R156759,Number of identified species with current taxonomy,L432195,4977,"The proliferation of DNA data is revolutionizing all fields of systematic research. DNA barcode sequences, now available for millions of specimens and several hundred thousand species, are increasingly used in algorithmic species delimitations. This is complicated by occasional incongruences between species and gene genealogies, as indicated by situations where conspecific individuals do not form a monophyletic cluster in a gene tree. In two previous reviews, non-monophyly has been reported as being common in mitochondrial DNA gene trees. We developed a novel web service “Monophylizer” to detect non-monophyly in phylogenetic trees and used it to ascertain the incidence of species non-monophyly in COI (a.k.a. cox1) barcode sequence data from 4977 species and 41,583 specimens of European Lepidoptera, the largest data set of DNA barcodes analyzed from this regard. Particular attention was paid to accurate species identification to ensure data integrity. We investigated the effects of tree-building method, sampling effort, and other methodological issues, all of which can influence estimates of non-monophyly. We found a 12% incidence of non-monophyly, a value significantly lower than that observed in previous studies. Neighbor joining (NJ) and maximum likelihood (ML) methods yielded almost equal numbers of non-monophyletic species, but 24.1% of these cases of non-monophyly were only found by one of these methods. Non-monophyletic species tend to show either low genetic distances to their nearest neighbors or exceptionally high levels of intraspecific variability. Cases of polyphyly in COI trees arising as a result of deep intraspecific divergence are negligible, as the detected cases reflected misidentifications or methodological errors. Taking into consideration variation in sampling effort, we estimate that the true incidence of non-monophyly is ∼23%, but with operational factors still being included. Within the operational factors, we separately assessed the frequency of taxonomic limitations (presence of overlooked cryptic and oversplit species) and identification uncertainties. We observed that operational factors are potentially present in more than half (58.6%) of the detected cases of non-monophyly. Furthermore, we observed that in about 20% of non-monophyletic species and entangled species, the lineages involved are either allopatric or parapatric—conditions where species delimitation is inherently subjective and particularly dependent on the species concept that has been adopted. These observations suggest that species-level non-monophyly in COI gene trees is less common than previously supposed, with many cases reflecting misidentifications, the subjectivity of species delimitation or other operational factors.",TRUE,number
R194,Engineering,R138191,Ultrafast Dynamic Piezoresistive Response of Graphene-Based Cellular Elastomers,S547695,R138193,Detection limit (Pa),L385208,0.082,"Ultralight graphene-based cellular elastomers are found to exhibit nearly frequency-independent piezoresistive behaviors. Surpassing the mechanoreceptors in the human skin, these graphene elastomers can provide an instantaneous and high-fidelity electrical response to dynamic pressures ranging from quasi-static up to 2000 Hz, and are capable of detecting ultralow pressures as small as 0.082 Pa.",TRUE,number
R194,Engineering,R145512,"Highly Stable, Solution‐Processed Ga‐Doped IZTO Thin Film Transistor by Ar/O2 Plasma Treatment",S582904,R145515, Threshold Voltage (V),L407112,0.12,"The effects of gallium doping into indium–zinc–tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a‐IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a‐IZTO TFT results in a saturation mobility (µsat) of 11.80 cm2 V−1 s−1, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec−1, and on/off current ratio (Ion/Ioff) of 1.21 × 107. Additionally, the performance of 10% Ga‐doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as µsat of 30.60 cm2 V−1 s−1, Vth of 0.12 V, SS of 92 mV dec−1, and Ion/Ioff ratio of 7.90 × 107. The bias‐stability of 10% Ga‐doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and OH concentrations.",TRUE,number
R194,Engineering,R135551,Flexible Capacitive Pressure Sensor Enhanced by Tilted Micropillar Arrays,S536148,R135554,Sensibility of the pressure sensor ( /kPa),L378144,0.42,"Sensitivity of the sensor is of great importance in practical applications of wearable electronics or smart robotics. In the present study, a capacitive sensor enhanced by a tilted micropillar array-structured dielectric layer is developed. Because the tilted micropillars undergo bending deformation rather than compression deformation, the distance between the electrodes is easier to change, even discarding the contribution of the air gap at the interface of the structured dielectric layer and the electrode, thus resulting in high pressure sensitivity (0.42 kPa-1) and very small detection limit (1 Pa). In addition, eliminating the presence of uncertain air gap, the dielectric layer is strongly bonded with the electrode, which makes the structure robust and endows the sensor with high stability and reliable capacitance response. These characteristics allow the device to remain in normal use without the need for repair or replacement despite mechanical damage. Moreover, the proposed sensor can be tailored to any size and shape, which is further demonstrated in wearable application. This work provides a new strategy for sensors that are required to be sensitive and reliable in actual applications.",TRUE,number
R194,Engineering,R139614,Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine,S557155,R139617,"Open circuit voltage, Voc (V)",L391632,0.78,"Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (≥60%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).",TRUE,number
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557294,R139633,"Open circuit voltage, Voc (V)",L391748,0.795,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,number
R194,Engineering,R141127,RF MEMS Switches With Enhanced Power-Handling Capabilities,S564066,R141129,Real down-state capacitance - Cr (pF),L395834,0.8,"This paper reports on the experimental and theoretical characterization of RF microelectromechanical systems (MEMS) switches for high-power applications. First, we investigate the problem of self-actuation due to high RF power and we demonstrate switches that do not self-actuate or catastrophically fail with a measured RF power of up to 5.5 W. Second, the problem of switch stiction to the down state as a function of the applied RF power is also theoretically and experimentally studied. Finally, a novel switch design with a top electrode is introduced and its advantages related to RF power-handling capabilities are presented. By applying this technology, we demonstrate hot-switching measurements with a maximum power of 0.8 W. Our results, backed by theory and measurements, illustrate that careful design can significantly improve the power-handling capabilities of RF MEMS switches.",TRUE,number
R194,Engineering,R141147,RF MEMS Shunt Capacitive Switches Using AlN Compared to Si3N4 Dielectric,S564199,R141149,Real down-state capacitance - Cr (pF),L395939,0.9,"RF microelectromechanical systems (MEMS) capacitive switches for two different dielectrics, aluminum nitride (AlN) and silicon nitride (Si3N4), are presented. The switches have been characterized and compared in terms of DC and RF performance (5-40 GHz). Switches based on AlN have higher down-state capacitance for similar dielectric thicknesses and provide better isolation and smaller insertion losses compared to Si3N4 switches. Experiments were carried out on RF MEMS switches with stiffening bars to prevent membrane deformation due to residual stress and with different spring and meander-type anchor designs. For a ~300-nm dielectric thickness, an air gap of 2.3 μm and identical spring-type designs, the AlN switches systematically show an improvement in the isolation by more than -12 dB (-35.8 dB versus -23.7 dB) and a better insertion loss (-0.68 dB versus -0.90 dB) at 40 GHz compared to Si3N4. DC measurements show small leakage current densities for both dielectrics (<;10-8 A/cm2 at 1 MV/cm). However, the resulting leakage current for AlN devices is ten times higher than for Si3N4 when applying a larger electric field. The fabricated switches were also stressed by applying different voltages in air and vacuum, and dielectric charging effects were investigated. AlN switches eliminate the residual or injected charge faster than the Si3N4 devices do.",TRUE,number
R194,Engineering,R139618,Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes,S557188,R139622,"Open circuit voltage, Voc (V)",L391658,0.95,"Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of ∼11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.",TRUE,number
R194,Engineering,R148156,Simplified Hollow-Core Fiber-Based Fabry–Perot Interferometer With Modified Vernier Effect for Highly Sensitive High-Temperature Measurement,S594136,R148158,Sensitivity (nm/°C),L413106,1.019,"In this paper, a high-temperature fiber sensor based on an optical fiber Fabry-Perot interferometer is fabricated by splicing a section of simplified hollow-core fiber between two single-mode fibers (SMFs) and cleaving one of the two SMFs to a certain length. With the superposition of three beams of light reflected from two splicing joints and end face of the cleaved SMF, the modified Vernier effect will be generated in the proposed structure and improve the sensitivity of temperature measurement. The envelope of spectrum reflected from the proposed sensor head is modulated by the ambient temperature of the sensor head. By monitoring and measuring the shift of spectrum envelope, the measurement of environment temperature is carried out experimentally, and high temperature sensitivity of 1.019 nm/°C for the envelope of the reflected spectrum was obtained. A temperature measurement as high as 1050 °C has been achieved with excellent repeatability.",TRUE,number
R194,Engineering,R135556,Flexible capacitive pressure sensor with sensitivity and linear measuring range enhanced based on porous composite of carbon conductive paste and polydimethylsiloxane,S536174,R135559,Sensibility of the pressure sensor ( /kPa),L378165,1.1,"In recent years, the development of electronic skin and smart wearable body sensors has put forward high requirements for flexible pressure sensors with high sensitivity and large linear measuring range. However it turns out to be difficult to increase both of them simultaneously. In this paper, a flexible capacitive pressure sensor based on porous carbon conductive paste-PDMS composite is reported, the sensitivity and the linear measuring range of which were developed using multiple methods including adjusting the stiffness of the dielectric layer material, fabricating micro-structure and increasing dielectric permittivity of dielectric layer. The capacitive pressure sensor reported here has a relatively high sensitivity of 1.1 kPa-1 and a large linear measuring range of 10 kPa, making the product of the sensitivity and linear measuring range is 11, which is higher than that of the most reported capacitive pressure sensor to our best knowledge. The sensor has a detection of limit of 4 Pa, response time of 60 ms and great stability. Some potential applications of the sensor were demonstrated such as arterial pulse wave measuring and breathe measuring, which shows a promising candidate for wearable biomedical devices. In addition, a pressure sensor array based on the material was also fabricated and it could identify objects in the shape of different letters clearly, which shows a promising application in the future electronic skins.",TRUE,number
R194,Engineering,R138217,"Highly Sensitive, Transparent, and Durable Pressure Sensors Based on Sea-Urchin Shaped Metal Nanoparticles",S547891,R138219,Sensitivity (/kPa),L385370,2.46,"Highly sensitive, transparent, and durable pressure sensors are fabricated using sea-urchin-shaped metal nanoparticles and insulating polyurethane elastomer. The pressure sensors exhibit outstanding sensitivity (2.46 kPa-1 ), superior optical transmittance (84.8% at 550 nm), fast response/relaxation time (30 ms), and excellent operational durability. In addition, the pressure sensors successfully detect minute movements of human muscles.",TRUE,number
R194,Engineering,R141869,Integration of piezoelectric aluminum nitride and ultrananocrystalline diamond films for implantable biomedical microelectromechanical devices,S569135,R141871,Piezoelectric coefficient measured (pm/V),L399423,5.3,"The physics for integration of piezoelectric aluminum nitride (AlN) films with underlying insulating ultrananocrystalline diamond (UNCD), and electrically conductive grain boundary nitrogen-incorporated UNCD (N-UNCD) and boron-doped UNCD (B-UNCD) layers, as membranes for microelectromechanical system implantable drug delivery devices, has been investigated. AlN films deposited on platinum layers on as grown UNCD or N-UNCD layer (5–10 nm rms roughness) required thickness of ∼400 nm to induce (002) AlN orientation with piezoelectric d33 coefficient ∼1.91 pm/V at ∼10 V. Chemical mechanical polished B-UNCD films (0.2 nm rms roughness) substrates enabled (002) AlN film 200 nm thick, yielding d33 = 5.3 pm/V.",TRUE,number
R194,Engineering,R141891,Microstructure and Electrical Properties of Novel piezo-optrodes Based on Thin-Film Piezoelectric Aluminium Nitride for Sensing,S569310,R141895,Piezoelectric coefficient measured (pm/V),L399559,5.4,"Thin-film piezoelectric materials are currently employed in micro- and nanodevices for energy harvesting and mechanical sensing. The deposition of these functional layers, however, is quite challenging onto non-rigid/non-flat substrates, such as optical fibers (OFs). Besides the recent novel applications of OFs as probes for biosensing and bioactuation, the possibility to combine them with piezoelectric thin films and metallic electrodes can pave the way for the employment of novel opto-electro-mechanical sensors (e.g., waveguides, optical phase modulators, tunable filters, energy harvesters or biosensors). In this work the deposition of a thin-film piezoelectric wurtzite-phase Aluminium Nitride (AlN), sandwiched between molybdenum (Mo) electrodes, on the curved lateral surface of an optical fiber with polymeric cladding, is reported for the first time, without the need of an orientation-promoting interlayer. The material surface properties and morphology are characterized by microscopy techniques. High orientation is demonstrated by SEM, PFM and X-ray diffraction analysis on a flat polymeric control, with a resulting piezoelectric coefficient (d33) of ∼5.4 pm/V, while the surface roughness Rms measured by AFM is 9 ÷ 16 nm. The output mechanical sensing capability of the resulting AlN-based piezo-optrode is investigated through mechanical buckling tests: the peak-to-peak voltage for weakly impulsive loads increases with increasing relative displacements (up to 30%), in the range of 20 ÷ 35 mV. Impedance spectroscopy frequency sweeps (10 kHz-1 MHz, 1 V) demonstrate a sensor capacitance of ∼8 pF, with an electrical Q factor as high as 150. The electrical response in the long-term period (two months) revealed good reliability and durability.",TRUE,number
R194,Engineering,R141873,Preparation of highly c-axis oriented AlN thin films on Hastelloy tapes with Y2O3 buffer layer for flexible SAW sensor applications,S569159,R141876,Surface roughness (nm),L399442,5.46,"Highly c-axis oriented aluminum nitrade (AlN) films were successfully deposited on flexible Hastelloy tapes by middle-frequency magnetron sputtering. The microstructure and piezoelectric properties of the AlN films were investigated. The results show that the AlN films deposited directly on the bare Hastelloy substrate have rough surface with root mean square (RMS) roughness of 32.43nm and its full width at half maximum (FWHM) of the AlN (0002) peak is 12.5∘. However, the AlN films deposited on the Hastelloy substrate with Y2O3 buffer layer show smooth surface with RMS roughness of 5.46nm and its FWHM of the AlN (0002) peak is only 3.7∘. The piezoelectric coefficient d33 of the AlN films deposited on the Y2O3/Hastelloy substrate is larger than three times that of the AlN films deposited on the bare Hastelloy substrate. The prepared highly c-axis oriented AlN films can be used to develop high-temperature flexible SAW sensors.",TRUE,number
R194,Engineering,R141877,Low temperature aluminum nitride thin films for sensory applications,S569178,R141879,Piezoelectric coefficient measured (pm/V),L399458,5.57,"A low-temperature sputter deposition process for the synthesis of aluminum nitride (AlN) thin films that is attractive for applications with a limited temperature budget is presented. Influence of the reactive gas concentration, plasma treatment of the nucleation surface and film thickness on the microstructural, piezoelectric and dielectric properties of AlN is investigated. An improved crystal quality with respect to the increased film thickness was observed; where full width at half maximum (FWHM) of the AlN films decreased from 2.88 ± 0.16° down to 1.25 ± 0.07° and the effective longitudinal piezoelectric coefficient (d33,f) increased from 2.30 ± 0.32 pm/V up to 5.57 ± 0.34 pm/V for film thicknesses in the range of 30 nm to 2 μm. Dielectric loss angle (tan δ) decreased from 0.626% ± 0.005% to 0.025% ± 0.011% for the same thickness range. The average relative permittivity (er) was calculated as 10.4 ± 0.05. An almost constant transversal piezoelectric coefficient (|e31,f|) of 1.39 ± 0.01 C/m2 was measured for samples in the range of 0.5 μm to 2 μm. Transmission electron microscopy (TEM) investigations performed on thin (100 nm) and thick (1.6 μm) films revealed an (002) oriented AlN nucleation and growth starting directly from the AlN-Pt interface independent of the film thickness and exhibit comparable quality with the state-of-the-art AlN thin films sputtered at much higher substrate temperatures.",TRUE,number
R194,Engineering,R141884,Enhancement of c-Axis Oriented Aluminum Nitride Films via Low Temperature DC Sputtering,S569236,R141886,Piezoelectric coefficient measured (pm/V),L399506,5.92,"In this study, we successfully deposit c-axis oriented aluminum nitride (AlN) piezoelectric films at low temperature (100 °C) via the DC sputtering method with tilt gun. The X-ray diffraction (XRD) observations prove that the deposited films have a c-axis preferred orientation. Effective d33 value of the proposed films is 5.92 pm/V, which is better than most of the reported data using DC sputtering or other processing methods. It is found that the gun placed at 25° helped the films to rearrange at low temperature and c-axis orientation AlN films were successfully grown at 100 °C. This temperature is much lower than the reported growing temperature. It means the piezoelectric films can be deposited at flexible substrate and the photoresist can be stable at this temperature. The cantilever beam type microelectromechanical systems (MEMS) piezoelectric accelerometers are then fabricated based on the proposed AlN films with a lift-off process. The results show that the responsivity of the proposed devices is 8.12 mV/g, and the resonance frequency is 460 Hz, which indicates they can be used for machine tools.",TRUE,number
R194,Engineering,R139623,Hybrid Perovskite Films by a New Variant of Pulsed Excimer Laser Deposition: A Room-Temperature Dry Process,S557209,R139625,Maximum efficiency of the solar cell (%),L391676,7.7,"A new variant of the classic pulsed laser deposition (PLD) process is introduced as a room-temperature dry process for the growth and stoichiometry control of hybrid perovskite films through the use of nonstoichiometric single target ablation and off-axis growth. Mixed halide hybrid perovskite films nominally represented by CH3NH3PbI3–xAx (A = Cl or F) are also grown and are shown to reveal interesting trends in the optical properties and photoresponse. Growth of good quality lead-free CH3NH3SnI3 films is also demonstrated, and the corresponding optical properties are presented. Finally, perovskite solar cells fabricated at room temperature (which makes the process adaptable to flexible substrates) are shown to yield a conversion efficiency of about 7.7%.",TRUE,number
R194,Engineering,R138211,Highly Stretchable Resistive Pressure Sensors Using a Conductive Elastomeric Composite on a Micropyramid Array,S547844,R138213,Sensitivity (/kPa),L385331,10.3,"A stretchable resistive pressure sensor is achieved by coating a compressible substrate with a highly stretchable electrode. The substrate contains an array of microscale pyramidal features, and the electrode comprises a polymer composite. When the pressure-induced geometrical change experienced by the electrode is maximized at 40% elongation, a sensitivity of 10.3 kPa(-1) is achieved.",TRUE,number
R194,Engineering,R139614,Highly Efficient and Stable Sn-Rich Perovskite Solar Cells by Introducing Bromine,S557153,R139617,Maximum efficiency of the solar cell (%),L391630,12.1,"Compositional engineering of recently arising methylammonium (MA) lead (Pb) halide based perovskites is an essential approach for finding better perovskite compositions to resolve still remaining issues of toxic Pb, long-term instability, etc. In this work, we carried out crystallographic, morphological, optical, and photovoltaic characterization of compositional MASn0.6Pb0.4I3-xBrx by gradually introducing bromine (Br) into parental Pb-Sn binary perovskite (MASn0.6Pb0.4I3) to elucidate its function in Sn-rich (Sn:Pb = 6:4) perovskites. We found significant advances in crystallinity and dense coverage of the perovskite films by inserting the Br into Sn-rich perovskite lattice. Furthermore, light-intensity-dependent open circuit voltage (Voc) measurement revealed much suppressed trap-assisted recombination for a proper Br-added (x = 0.4) device. These contributed to attaining the unprecedented power conversion efficiency of 12.1% and Voc of 0.78 V, which are, to the best of our knowledge, the highest performance in the Sn-rich (≥60%) perovskite solar cells reported so far. In addition, impressive enhancement of photocurrent-output stability and little hysteresis were found, which paves the way for the development of environmentally benign (Pb reduction), stable monolithic tandem cells using the developed low band gap (1.24-1.26 eV) MASn0.6Pb0.4I3-xBrx with suggested composition (x = 0.2-0.4).",TRUE,number
R194,Engineering,R139629,Stable Low-Bandgap Pb-Sn Binary Perovskites for Tandem Solar Cells,S557262,R139631,Maximum efficiency of the solar cell (%),L391721,14.19,"A low-bandgap (1.33 eV) Sn-based MA0.5 FA0.5 Pb0.75 Sn0.25 I3 perovskite is developed via combined compositional, process, and interfacial engineering. It can deliver a high power conversion efficiency (PCE) of 14.19%. Finally, a four-terminal all-perovskite tandem solar cell is demonstrated by combining this low-bandgap cell with a semitransparent MAPbI3 cell to achieve a high efficiency of 19.08%.",TRUE,number
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557293,R139633,Maximum efficiency of the solar cell (%),L391747,15.08,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,number
R194,Engineering,R148179,Sensitivity-Enhanced Fiber Temperature Sensor Based on Vernier Effect and Dual In-Line Mach–Zehnder Interferometers,S594158,R148183,Amplification factor,L413128,17.5,"A highly sensitive fiber temperature sensor based on in-line Mach-Zehnder interferometers (MZIs) and Vernier effect was proposed and experimentally demonstrated. The MZI was fabricated by splicing a section of hollow core fiber between two pieces of multimode fiber. The temperature sensitivity obtained by extracting envelope dip shift of the superimposed spectrum reaches to 528.5 pm/°C in the range of 0 °C–100 °C, which is 17.5 times as high as that without enhanced by the Vernier effect. The experimental sensitivity amplification factor is close to the theoretical predication (18.3 times).The proposed sensitivity enhancement system employs parallel connecting to implement the Vernier effect, which possesses the advantages of easy fabrication and high flexibility.",TRUE,number
R194,Engineering,R145527,Performance Investigation of an n-Type Tin-Oxide Thin Film Transistor by Channel Plasma Processing,S582916,R145529,Mobility (cm2 /V.s),L407124,18.5,"In this paper, we investigated the performance of an n-type tin-oxide (SnOx) thin film transistor (TFT) by experiments and simulation. The fabricated SnOx TFT device by oxygen plasma treatment on the channel exhibited n-type conduction with an on/off current ratio of 4.4x104, a high field-effect mobility of 18.5 cm2/V.s and a threshold swing of 405 mV/decade, which could be attributed to the excess reacted oxygen incorporated to the channel to form the oxygen-rich n-type SnOx. Furthermore, a TCAD simulation based on the n-type SnOx TFT device was performed by fitting the experimental data to investigate the effect of the channel traps on the device performance, indicating that performance enhancements were further achieved by suppressing the density of channel traps. In addition, the n-type SnOx TFT device exhibited high stability upon illumination with visible light. The results show that the n-type SnOx TFT device by channel plasma processing has considerable potential for next-generation high-performance display application.",TRUE,number
R194,Engineering,R139618,Efficiently Improving the Stability of Inverted Perovskite Solar Cells by Employing Polyethylenimine-Modified Carbon Nanotubes as Electrodes,S557189,R139622,"Short-circuit current density, Jsc (mA/cm2)",L391659,18.7,"Inverted perovskite solar cells (PSCs) have been becoming more and more attractive, owing to their easy-fabrication and suppressed hysteresis, while the ion diffusion between metallic electrode and perovskite layer limit the long-term stability of devices. In this work, we employed a novel polyethylenimine (PEI) modified cross-stacked superaligned carbon nanotube (CSCNT) film in the inverted planar PSCs configurated FTO/NiO x/methylammonium lead tri-iodide (MAPbI3)/6, 6-phenyl C61-butyric acid methyl ester (PCBM)/CSCNT:PEI. By modifying CSCNT with a certain concentration of PEI (0.5 wt %), suitable energy level alignment and promoted interfacial charge transfer have been achieved, leading to a significant enhancement in the photovoltaic performance. As a result, a champion power conversion efficiency (PCE) of ∼11% was obtained with a Voc of 0.95 V, a Jsc of 18.7 mA cm-2, a FF of 0.61 as well as negligible hysteresis. Moreover, CSCNT:PEI based inverted PSCs show superior durability in comparison to the standard silver based devices, remaining over 85% of the initial PCE after 500 h aging under various conditions, including long-term air exposure, thermal, and humid treatment. This work opens up a new avenue of facile modified carbon electrodes for highly stable and hysteresis suppressed PSCs.",TRUE,number
R194,Engineering,R139638,Efficient perovskite solar cells by metal ion doping,S557360,R139641,Maximum efficiency of the solar cell (%),L391804,19.1,"Realizing the theoretical limiting power conversion efficiency (PCE) in perovskite solar cells requires a better understanding and control over the fundamental loss processes occurring in the bulk of the perovskite layer and at the internal semiconductor interfaces in devices. One of the main challenges is to eliminate the presence of charge recombination centres throughout the film which have been observed to be most densely located at regions near the grain boundaries. Here, we introduce aluminium acetylacetonate to the perovskite precursor solution, which improves the crystal quality by reducing the microstrain in the polycrystalline film. At the same time, we achieve a reduction in the non-radiative recombination rate, a remarkable improvement in the photoluminescence quantum efficiency (PLQE) and a reduction in the electronic disorder deduced from an Urbach energy of only 12.6 meV in complete devices. As a result, we demonstrate a PCE of 19.1% with negligible hysteresis in planar heterojunction solar cells comprising all organic p and n-type charge collection layers. Our work shows that an additional level of control of perovskite thin film quality is possible via impurity cation doping, and further demonstrates the continuing importance of improving the electronic quality of the perovskite absorber and the nature of the heterojunctions to further improve the solar cell performance.",TRUE,number
R194,Engineering,R148163,Sensitivity-enhanced temperature sensor by hybrid cascaded configuration of a Sagnac loop and a F-P cavity,S594142,R148165,Amplification factor,L413112,20.7,"A hybrid cascaded configuration consisting of a fiber Sagnac interferometer (FSI) and a Fabry-Perot interferometer (FPI) was proposed and experimentally demonstrated to enhance the temperature intensity by the Vernier-effect. The FSI, which consists of a certain length of Panda fiber, is for temperature sensing, while the FPI acts as a filter due to its temperature insensitivity. The two interferometers have almost the same free spectral range, with the spectral envelope of the cascaded sensor shifting much more than the single FSI. Experimental results show that the temperature sensitivity is enhanced from −1.4 nm/°C (single FSI) to −29.0 (cascaded configuration). The enhancement factor is 20.7, which is basically consistent with theoretical analysis (19.9).",TRUE,number
R194,Engineering,R141656,Natural Biowaste-Cocoon-Derived Granular Activated Carbon-Coated ZnO Nanorods: A Simple Route To Synthesizing a Core–Shell Structure and Its Highly Enhanced UV and Hydrogen Sensing Properties,S567957,R141660,Response (%),L398735,23.2,"Granular activated carbon (GAC) materials were prepared via simple gas activation of silkworm cocoons and were coated on ZnO nanorods (ZNRs) by the facile hydrothermal method. The present combination of GAC and ZNRs shows a core-shell structure (where the GAC is coated on the surface of ZNRs) and is exposed by systematic material analysis. The as-prepared samples were then fabricated as dual-functional sensors and, most fascinatingly, the as-fabricated core-shell structure exhibits better UV and H2 sensing properties than those of as-fabricated ZNRs and GAC. Thus, the present core-shell structure-based H2 sensor exhibits fast responses of 11% (10 ppm) and 23.2% (200 ppm) with ultrafast response and recovery. However, the UV sensor offers an ultrahigh photoresponsivity of 57.9 A W-1, which is superior to that of as-grown ZNRs (0.6 A W-1). Besides this, switching photoresponse of GAC/ZNR core-shell structures exhibits a higher switching ratio (between dark and photocurrent) of 1585, with ultrafast response and recovery, than that of as-grown ZNRs (40). Because of the fast adsorption ability of GAC, it was observed that the finest distribution of GAC on ZNRs results in rapid electron transportation between the conduction bands of GAC and ZNRs while sensing H2 and UV. Furthermore, the present core-shell structure-based UV and H2 sensors also well-retained excellent sensitivity, repeatability, and long-term stability. Thus, the salient feature of this combination is that it provides a dual-functional sensor with biowaste cocoon and ZnO, which is ecological and inexpensive.",TRUE,number
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557295,R139633,"Short-circuit current density, Jsc (mA/cm2)",L391749,26.82,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,number
R194,Engineering,R145512,"Highly Stable, Solution‐Processed Ga‐Doped IZTO Thin Film Transistor by Ar/O2 Plasma Treatment",S582903,R145515,Mobility (cm2 /V.s),L407111,30.6,"The effects of gallium doping into indium–zinc–tin oxide (IZTO) thin film transistors (TFTs) and Ar/O2 plasma treatment on the performance of a‐IZTO TFT are reported. The Ga doping ratio is varied from 0 to 20%, and it is found that 10% gallium doping in a‐IZTO TFT results in a saturation mobility (µsat) of 11.80 cm2 V−1 s−1, a threshold voltage (Vth) of 0.17 V, subthreshold swing (SS) of 94 mV dec−1, and on/off current ratio (Ion/Ioff) of 1.21 × 107. Additionally, the performance of 10% Ga‐doped IZTO TFT can be further improved by Ar/O2 plasma treatment. It is found that 30 s plasma treatment gives the best TFT performances such as µsat of 30.60 cm2 V−1 s−1, Vth of 0.12 V, SS of 92 mV dec−1, and Ion/Ioff ratio of 7.90 × 107. The bias‐stability of 10% Ga‐doped IZTO TFT is also improved by 30 s plasma treatment. The enhancement of the TFT performance appears to be due to the reduction in the oxygen vacancy and OH concentrations.",TRUE,number
R194,Engineering,R148166,Ultrasensitive Temperature Sensor With Cascaded Fiber Optic Fabry–Perot Interferometers Based on Vernier Effect,S594147,R148171,Sensitivity (nm/°C),L413117,33.07,"We have proposed and experimentally demonstrated an ultrasensitive fiber-optic temperature sensor based on two cascaded Fabry–Perot interferometers (FPIs). Vernier effect that significantly improves the sensitivity is generated due to the slight cavity length difference of the sensing and reference FPI. The sensing FPI is composed of a cleaved fiber end-face and UV-cured adhesive while the reference FPI is fabricated by splicing SMF with hollow core fiber. Temperature sensitivity of the sensing FPI is much higher than the reference FPI, which means that the reference FPI need not to be thermally isolated. By curve fitting method, three different temperature sensitivities of 33.07, −58.60, and 67.35 nm/°C have been experimentally demonstrated with different cavity lengths ratio of the sensing and reference FPI, which can be flexibly adjusted to meet different application demands. The proposed probe-type ultrahigh sensitivity temperature sensor is compact and cost effective, which can be applied to special fields, such as biochemical engineering, medical treatment, and nuclear test.",TRUE,number
R194,Engineering,R139632,Fabrication of Efficient Low-Bandgap Perovskite Solar Cells by Combining Formamidinium Tin Iodide with Methylammonium Lead Iodide,S557291,R139633,"Fill factor, FF (%)",L391746,70.6,"Mixed tin (Sn)-lead (Pb) perovskites with high Sn content exhibit low bandgaps suitable for fabricating the bottom cell of perovskite-based tandem solar cells. In this work, we report on the fabrication of efficient mixed Sn-Pb perovskite solar cells using precursors combining formamidinium tin iodide (FASnI3) and methylammonium lead iodide (MAPbI3). The best-performing cell fabricated using a (FASnI3)0.6(MAPbI3)0.4 absorber with an absorption edge of ∼1.2 eV achieved a power conversion efficiency (PCE) of 15.08 (15.00)% with an open-circuit voltage of 0.795 (0.799) V, a short-circuit current density of 26.86(26.82) mA/cm(2), and a fill factor of 70.6(70.0)% when measured under forward (reverse) voltage scan. The average PCE of 50 cells we have fabricated is 14.39 ± 0.33%, indicating good reproducibility.",TRUE,number
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704534,R182136,Has lower limit for 95% confidence interval ,L475340,0.06,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,number
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704510,R182136,Correlation Coefficient,L475324,0.08,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,number
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704535,R182136,Has upper limit for 95% confidence interval,L475341,0.12,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,number
R42,Immunology of Infectious Disease,R111393,"Clinical Evaluation of Self-Collected Saliva by Quantitative Reverse Transcription-PCR (RT-qPCR), Direct RT-qPCR, Reverse Transcription–Loop-Mediated Isothermal Amplification, and a Rapid Antigen Test To Diagnose COVID-19",S507293,R111397,Has result,L365856,11.7,"ABSTRACT The clinical performances of six molecular diagnostic tests and a rapid antigen test for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) were clinically evaluated for the diagnosis of coronavirus disease 2019 (COVID-19) in self-collected saliva. Saliva samples from 103 patients with laboratory-confirmed COVID-19 (15 asymptomatic and 88 symptomatic) were collected on the day of hospital admission. SARS-CoV-2 RNA in saliva was detected using a quantitative reverse transcription-PCR (RT-qPCR) laboratory-developed test (LDT), a cobas SARS-CoV-2 high-throughput system, three direct RT-qPCR kits, and reverse transcription–loop-mediated isothermal amplification (RT-LAMP). The viral antigen was detected by a rapid antigen immunochromatographic assay. Of the 103 samples, viral RNA was detected in 50.5 to 81.6% of the specimens by molecular diagnostic tests, and an antigen was detected in 11.7% of the specimens by the rapid antigen test. Viral RNA was detected at significantly higher percentages (65.6 to 93.4%) in specimens collected within 9 days of symptom onset than in specimens collected after at least 10 days of symptoms (22.2 to 66.7%) and in specimens collected from asymptomatic patients (40.0 to 66.7%). Self-collected saliva is an alternative specimen option for diagnosing COVID-19. The RT-qPCR LDT, a cobas SARS-CoV-2 high-throughput system, direct RT-qPCR kits (except for one commercial kit), and RT-LAMP showed sufficient sensitivities in clinical use to be selectively used in clinical settings and facilities. The rapid antigen test alone is not recommended for an initial COVID-19 diagnosis because of its low sensitivity.",TRUE,number
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S655094,R164005,Number of entities,L445044,3756,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,number
R128,Inorganic Chemistry,R160606,Etching Silicon with Aqueous Acidic Ozone Solutions: Reactivity Studies and Surface Investigations,S640752,R160639,Miller index,R160642,100,"Aqueous acidic ozone (O3)-containing solutions are increasingly used for silicon treatment in photovoltaic and semiconductor industries. We studied the behavior of aqueous hydrofluoric acid (HF)-containing solutions (i.e., HF–O3, HF–H2SO4–O3, and HF–HCl–O3 mixtures) toward boron-doped solar-grade (100) silicon wafers. The solubility of O3 and etching rates at 20 °C were investigated. The mixtures were analyzed for the potential oxidizing species by UV–vis and Raman spectroscopy. Concentrations of O3 (aq), O3 (g), and Cl2 (aq) were determined by titrimetric volumetric analysis. F–, Cl–, and SO42– ion contents were determined by ion chromatography. Model experiments were performed to investigate the oxidation of H-terminated silicon surfaces by H2O–O2, H2O–O3, H2O–H2SO4–O3, and H2O–HCl–O3 mixtures. The oxidation was monitored by diffuse reflection infrared Fourier transformation (DRIFT) spectroscopy. The resulting surfaces were examined by scanning electron microscopy (SEM) and X-ray photoelectron spectrosc...",TRUE,number
R126,Materials Chemistry,R147918,High-Performance Electron Acceptor with Thienyl Side Chains for Organic Photovoltaics,S593315,R147931,HOMO,R147934,-5.66,"We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the σ-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 × 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 × 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.",TRUE,number
R126,Materials Chemistry,R146918,Design and Synthesis of a Low Bandgap Small Molecule Acceptor for Efficient Polymer Solar Cells,S593121,R146920,LUMO,R147862,-3.95,"A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.",TRUE,number
R126,Materials Chemistry,R147918,High-Performance Electron Acceptor with Thienyl Side Chains for Organic Photovoltaics,S593319,R147931,LUMO,R147935,-3.93,"We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the σ-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 × 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 × 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.",TRUE,number
R126,Materials Chemistry,R148246,"Design, synthesis, and structural characterization of the first dithienocyclopentacarbazole-based n-type organic semiconductor and its application in non-fullerene polymer solar cells",S594353,R148250,"Open circuit voltage, Voc",R148253,0.95,"Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended π-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC–IC) with a well-defined A–D–A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC–IC has strong light-capturing ability in the range of 500–720 nm and exhibits an impressively high molar absorption coefficient of 2.24 × 105 M−1 cm−1 at 669 nm owing to effective intramolecular charge transfer and a strong D–A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC–IC are −5.50 and −3.87 eV, respectively. More importantly, a high electron mobility of 2.17 × 10−3 cm2 V−1 s−1 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (μe ∼ 10−3 cm2 V−1 s−1). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC–IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.",TRUE,number
R126,Materials Chemistry,R148663,Dithienopicenocarbazole-Based Acceptors for Efficient Organic Solar Cells with Optoelectronic Response Over 1000 nm and an Extremely Low Energy Loss,S595971,R148666,Energy band gap,R148670,1.21,"Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular π-π stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.",TRUE,number
R126,Materials Chemistry,R146918,Design and Synthesis of a Low Bandgap Small Molecule Acceptor for Efficient Polymer Solar Cells,S593115,R146920,Energy band gap,R147859,1.34,"A novel non-fullerene acceptor, possessing a very low bandgap of 1.34 eV and a high-lying lowest unoccupied molecular orbital level of -3.95 eV, is designed and synthesized by introducing electron-donating alkoxy groups to the backbone of a conjugated small molecule. Impressive power conversion efficiencies of 8.4% and 10.7% are obtained for fabricated single and tandem polymer solar cells.",TRUE,number
R126,Materials Chemistry,R148606,Fused Hexacyclic Nonfullerene Acceptor with Strong Near‐Infrared Absorption for Semitransparent Organic Solar Cells with 9.77% Efficiency,S595759,R148607,Energy band gap,R148610,1.38,"A fused hexacyclic electron acceptor, IHIC, based on strong electron‐donating group dithienocyclopentathieno[3,2‐b]thiophene flanked by strong electron‐withdrawing group 1,1‐dicyanomethylene‐3‐indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST‐OSCs). IHIC exhibits strong near‐infrared absorption with extinction coefficients of up to 1.6 × 105m−1 cm−1, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 × 10−3 cm2 V−1 s−1. The ST‐OSCs based on blends of a narrow‐bandgap polymer donor PTB7‐Th and narrow‐bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single‐junction and tandem ST‐OSCs reported in the literature.",TRUE,number
R126,Materials Chemistry,R148652,"Dithieno[3,2-b:2′,3′-d]pyrrol Fused Nonfullerene Acceptors Enabling Over 13% Efficiency for Organic Solar Cells",S595931,R148654,Energy band gap,R148657,1.39,"A new electron‐rich central building block, 5,5,12,12‐tetrakis(4‐hexylphenyl)‐indacenobis‐(dithieno[3,2‐b:2′,3′‐d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC‐4F) are designed and synthesized. The two molecules reveal broad (600–900 nm) and strong absorption due to the satisfactory electron‐donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC‐4F exhibits a stronger near‐infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down‐shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC‐4F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron‐donating building block for constructing high‐performance nonfullerene acceptors for OSCs.",TRUE,number
R126,Materials Chemistry,R147944,A near-infrared non-fullerene electron acceptor for high performance polymer solar cells,S593366,R147951,Energy band gap,R147954,1.43,"Low-bandgap polymers/molecules are an interesting family of semiconductor materials, and have enabled many recent exciting breakthroughs in the field of organic electronics, especially for organic photovoltaics (OPVs). Here, such a low-bandgap (1.43 eV) non-fullerene electron acceptor (BT-IC) bearing a fused 7-heterocyclic ring with absorption edge extending to the near-infrared (NIR) region was specially designed and synthesized. Benefitted from its NIR light harvesting, high performance OPVs were fabricated with medium bandgap polymers (J61 and J71) as donors, showing power conversion efficiencies of 9.6% with J61 and 10.5% with J71 along with extremely low energy loss (0.56 eV for J61 and 0.53 eV for J71). Interestingly, femtosecond transient absorption spectroscopy studies on both systems show that efficient charge generation was observed despite the fact that the highest occupied molecular orbital (HOMO)–HOMO offset (ΔEH) in the blends was as low as 0.10 eV, suggesting that such a small ΔEH is not a crucial limitation in realizing high performance of NIR non-fullerene based OPVs. Our results indicated that BT-IC is an interesting NIR non-fullerene acceptor with great potential application in tandem/multi-junction, semitransparent, and ternary blend solar cells.",TRUE,number
R126,Materials Chemistry,R148537,"A Twisted Thieno[3,4-b]thiophene-Based Electron Acceptor Featuring a 14-π-Electron Indenoindene Core for High-Performance Organic Photovoltaics",S595555,R148539,Energy band gap,R148542,1.49,"With an indenoindene core, a new thieno[3,4‐b]thiophene‐based small‐molecule electron acceptor, 2,2′‐((2Z,2′Z)‐((6,6′‐(5,5,10,10‐tetrakis(2‐ethylhexyl)‐5,10‐dihydroindeno[2,1‐a]indene‐2,7‐diyl)bis(2‐octylthieno[3,4‐b]thiophene‐6,4‐diyl))bis(methanylylidene))bis(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐indene‐2,1‐diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12‐π‐electron fluorene, a carbon‐bridged biphenylene with an axial symmetry, indenoindene, a carbon‐bridged E‐stilbene with a centrosymmetry, shows elongated π‐conjugation with 14 π‐electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 × 105m−1 cm−1 in solution. By matching NITI with a large‐bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene‐free organic photovoltaics and is inspiring for the design of new electron acceptors.",TRUE,number
R126,Materials Chemistry,R148630,Naphthodithiophene‐Based Nonfullerene Acceptor for High‐Performance Organic Photovoltaics: Effect of Extended Conjugation,S595855,R148632,Energy band gap,R148635,1.55,"Naphtho[1,2‐b:5,6‐b′]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron‐withdrawing 2‐(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐inden‐1‐ylidene)malononitrile to yield a fused‐ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene‐based IHIC2, naphthodithiophene‐based IOIC2 with a larger π‐conjugation and a stronger electron‐donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: −3.78 eV vs IHIC2: −3.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 × 10−3 cm2 V−1 s−1 vs IHIC2: 5.0 × 10−4 cm2 V−1 s−1). Thus, IOIC2‐based OSCs show higher values in open‐circuit voltage, short‐circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2‐based counterpart. In particular, as‐cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8‐diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2‐based devices, higher than that of the FTAZ:IHIC2‐based devices (7.31%). These results indicate that incorporating extended conjugation into the electron‐donating fused‐ring units in nonfullerene acceptors is a promising strategy for designing high‐performance electron acceptors.",TRUE,number
R126,Materials Chemistry,R141748,"Dual functional highly luminescence B, N Co-doped carbon nanodots as nanothermometer and Fe3+/Fe2+ sensor",S568453,R141750,Sensitivity,L399052,1.8,"Abstract Dual functional fluorescence nanosensors have many potential applications in biology and medicine. Monitoring temperature with higher precision at localized small length scales or in a nanocavity is a necessity in various applications. As well as the detection of biologically interesting metal ions using low-cost and sensitive approach is of great importance in bioanalysis. In this paper, we describe the preparation of dual-function highly fluorescent B, N-co-doped carbon nanodots (CDs) that work as chemical and thermal sensors. The CDs emit blue fluorescence peaked at 450 nm and exhibit up to 70% photoluminescence quantum yield with showing excitation-independent fluorescence. We also show that water-soluble CDs display temperature-dependent fluorescence and can serve as highly sensitive and reliable nanothermometers with a thermo-sensitivity 1.8% °C −1 , and wide range thermo-sensing between 0–90 °C with excellent recovery. Moreover, the fluorescence emission of CDs are selectively quenched after the addition of Fe 2+ and Fe 3+ ions while show no quenching with adding other common metal cations and anions. The fluorescence emission shows a good linear correlation with concentration of Fe 2+ and Fe 3+ (R 2 = 0.9908 for Fe 2+ and R 2 = 0.9892 for Fe 3+ ) with a detection limit of of 80.0 ± 0.5 nM for Fe 2+ and 110.0 ± 0.5 nM for Fe 3+ . Considering the high quantum yield and selectivity, CDs are exploited to design a nanoprobe towards iron detection in a biological sample. The fluorimetric assay is used to detect Fe 2+ in iron capsules and total iron in serum samples successfully.",TRUE,number
R126,Materials Chemistry,R146888,High-performance fullerene-free polymer solar cells with 6.31% efficiency,S593180,R146891,Power conversion efficiency,R147889,6.31,"A nonfullerene electron acceptor (IEIC) based on indaceno[1,2-b:5,6-b′]dithiophene and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile was designed and synthesized, and fullerene-free polymer solar cells based on the IEIC acceptor showed power conversion efficiencies of up to 6.31%.",TRUE,number
R126,Materials Chemistry,R146865,A simple small molecule as an acceptor for fullerene-free organic solar cells with efficiency near 8%,S588044,R146868,Power conversion efficiency (%),L409472,7.93,"A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of ∼60%. Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.",TRUE,number
R126,Materials Chemistry,R148606,Fused Hexacyclic Nonfullerene Acceptor with Strong Near‐Infrared Absorption for Semitransparent Organic Solar Cells with 9.77% Efficiency,S595773,R148607,Power conversion efficiency,R148616,9.77,"A fused hexacyclic electron acceptor, IHIC, based on strong electron‐donating group dithienocyclopentathieno[3,2‐b]thiophene flanked by strong electron‐withdrawing group 1,1‐dicyanomethylene‐3‐indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST‐OSCs). IHIC exhibits strong near‐infrared absorption with extinction coefficients of up to 1.6 × 105m−1 cm−1, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 × 10−3 cm2 V−1 s−1. The ST‐OSCs based on blends of a narrow‐bandgap polymer donor PTB7‐Th and narrow‐bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single‐junction and tandem ST‐OSCs reported in the literature.",TRUE,number
R126,Materials Chemistry,R148663,Dithienopicenocarbazole-Based Acceptors for Efficient Organic Solar Cells with Optoelectronic Response Over 1000 nm and an Extremely Low Energy Loss,S595985,R148666,Power conversion efficiency,R148676,10.21,"Two cheliform non-fullerene acceptors, DTPC-IC and DTPC-DFIC, based on a highly electron-rich core, dithienopicenocarbazole (DTPC), are synthesized, showing ultra-narrow bandgaps (as low as 1.21 eV). The two-dimensional nitrogen-containing conjugated DTPC possesses strong electron-donating capability, which induces intense intramolecular charge transfer and intermolecular π-π stacking in derived acceptors. The solar cell based on DTPC-DFIC and a spectrally complementary polymer donor, PTB7-Th, showed a high power conversion efficiency of 10.21% and an extremely low energy loss of 0.45 eV, which is the lowest among reported efficient OSCs.",TRUE,number
R126,Materials Chemistry,R147898,Side-Chain Isomerization on an n-type Organic Semiconductor ITIC Acceptor Makes 11.77% High Efficiency Polymer Solar Cells,S593253,R147899,Power conversion efficiency,R147910,11.77,"Low bandgap n-type organic semiconductor (n-OS) ITIC has attracted great attention for the application as an acceptor with medium bandgap p-type conjugated polymer as donor in nonfullerene polymer solar cells (PSCs) because of its attractive photovoltaic performance. Here we report a modification on the molecular structure of ITIC by side-chain isomerization with meta-alkyl-phenyl substitution, m-ITIC, to further improve its photovoltaic performance. In a comparison with its isomeric counterpart ITIC with para-alkyl-phenyl substitution, m-ITIC shows a higher film absorption coefficient, a larger crystalline coherence, and higher electron mobility. These inherent advantages of m-ITIC resulted in a higher power conversion efficiency (PCE) of 11.77% for the nonfullerene PSCs with m-ITIC as acceptor and a medium bandgap polymer J61 as donor, which is significantly improved over that (10.57%) of the corresponding devices with ITIC as acceptor. To the best of our knowledge, the PCE of 11.77% is one of the highest values reported in the literature to date for nonfullerene PSCs. More importantly, the m-ITIC-based device shows less thickness-dependent photovoltaic behavior than ITIC-based devices in the active-layer thickness range of 80-360 nm, which is beneficial for large area device fabrication. These results indicate that m-ITIC is a promising low bandgap n-OS for the application as an acceptor in PSCs, and the side-chain isomerization could be an easy and convenient way to further improve the photovoltaic performance of the donor and acceptor materials for high efficiency PSCs.",TRUE,number
R126,Materials Chemistry,R146997,Enhancing the Performance of Organic Solar Cells by Hierarchically Supramolecular Self-Assembly of Fused-Ring Electron Acceptors,S593043,R146999,Power conversion efficiency,R147832,12.17,"Three novel non-fullerene small molecular acceptors ITOIC, ITOIC-F, and ITOIC-2F were designed and synthesized with easy chemistry. The concept of supramolecular chemistry was successfully used in the molecular design, which includes noncovalently conformational locking (via intrasupramolecular interaction) to enhance the planarity of backbone and electrostatic interaction (intersupramolecular interaction) to enhance the π–π stacking of terminal groups. Fluorination can further strengthen the intersupramolecular electrostatic interaction of terminal groups. As expected, the designed acceptors exhibited excellent device performance when blended with polymer donor PBDB-T. In comparison with the parent acceptor molecule DC-IDT2T reported in the literature with a power conversion efficiency (PCE) of 3.93%, ITOIC with a planar structure exhibited a PCE of 8.87% and ITOIC-2F with a planar structure and enhanced electrostatic interaction showed a quite impressive PCE of 12.17%. Our result demonstrates the import...",TRUE,number
R126,Materials Chemistry,R148630,Naphthodithiophene‐Based Nonfullerene Acceptor for High‐Performance Organic Photovoltaics: Effect of Extended Conjugation,S595868,R148632,Power conversion efficiency,R148640,12.3,"Naphtho[1,2‐b:5,6‐b′]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron‐withdrawing 2‐(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐inden‐1‐ylidene)malononitrile to yield a fused‐ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene‐based IHIC2, naphthodithiophene‐based IOIC2 with a larger π‐conjugation and a stronger electron‐donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: −3.78 eV vs IHIC2: −3.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 × 10−3 cm2 V−1 s−1 vs IHIC2: 5.0 × 10−4 cm2 V−1 s−1). Thus, IOIC2‐based OSCs show higher values in open‐circuit voltage, short‐circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2‐based counterpart. In particular, as‐cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8‐diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2‐based devices, higher than that of the FTAZ:IHIC2‐based devices (7.31%). These results indicate that incorporating extended conjugation into the electron‐donating fused‐ring units in nonfullerene acceptors is a promising strategy for designing high‐performance electron acceptors.",TRUE,number
R126,Materials Chemistry,R148537,"A Twisted Thieno[3,4-b]thiophene-Based Electron Acceptor Featuring a 14-π-Electron Indenoindene Core for High-Performance Organic Photovoltaics",S595567,R148539,Power conversion efficiency,R148548,12.74,"With an indenoindene core, a new thieno[3,4‐b]thiophene‐based small‐molecule electron acceptor, 2,2′‐((2Z,2′Z)‐((6,6′‐(5,5,10,10‐tetrakis(2‐ethylhexyl)‐5,10‐dihydroindeno[2,1‐a]indene‐2,7‐diyl)bis(2‐octylthieno[3,4‐b]thiophene‐6,4‐diyl))bis(methanylylidene))bis(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐indene‐2,1‐diylidene))dimalononitrile (NITI), is successfully designed and synthesized. Compared with 12‐π‐electron fluorene, a carbon‐bridged biphenylene with an axial symmetry, indenoindene, a carbon‐bridged E‐stilbene with a centrosymmetry, shows elongated π‐conjugation with 14 π‐electrons and one more sp3 carbon bridge, which may increase the tunability of electronic structure and film morphology. Despite its twisted molecular framework, NITI shows a low optical bandgap of 1.49 eV in thin film and a high molar extinction coefficient of 1.90 × 105m−1 cm−1 in solution. By matching NITI with a large‐bandgap polymer donor, an extraordinary power conversion efficiency of 12.74% is achieved, which is among the best performance so far reported for fullerene‐free organic photovoltaics and is inspiring for the design of new electron acceptors.",TRUE,number
R126,Materials Chemistry,R148652,"Dithieno[3,2-b:2′,3′-d]pyrrol Fused Nonfullerene Acceptors Enabling Over 13% Efficiency for Organic Solar Cells",S595944,R148654,Power conversion efficiency,R148662,13.13,"A new electron‐rich central building block, 5,5,12,12‐tetrakis(4‐hexylphenyl)‐indacenobis‐(dithieno[3,2‐b:2′,3′‐d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC‐4F) are designed and synthesized. The two molecules reveal broad (600–900 nm) and strong absorption due to the satisfactory electron‐donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC‐4F exhibits a stronger near‐infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down‐shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC‐4F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron‐donating building block for constructing high‐performance nonfullerene acceptors for OSCs.",TRUE,number
R67,Medicinal Chemistry and Pharmaceutics,R138920,Functionalization of Silver Nanoparticles Loaded with Paclitaxel-induced A549 Cells Apoptosis Through ROS-Mediated Signaling Pathways,S552006,R138924,Zeta potential,L388275,-17,"Background: Paclitaxel (PTX) is one of the most important and effective anticancer drugs for the treatment of human cancer. However, its low solubility and severe adverse effects limited clinical use. To overcome this limitation, nanotechnology has been used to overcome tumors due to its excellent antimicrobial activity. Objective: This study was to demonstrate the anticancer properties of functionalization silver nanoparticles loaded with paclitaxel (Ag@PTX) induced A549 cells apoptosis through ROS-mediated signaling pathways. Methods: The Ag@PTX nanoparticles were charged with a zeta potential of about -17 mv and characterized around 2 nm with a narrow size distribution. Results: Ag@PTX significantly decreased the viability of A549 cells and possessed selectivity between cancer and normal cells. Ag@PTX induced A549 cells apoptosis was confirmed by nuclear condensation, DNA fragmentation, and activation of caspase-3. Furthermore, Ag@PTX enhanced the anti-cancer activity of A549 cells through ROS-mediated p53 and AKT signalling pathways. Finally, in a xenograft nude mice model, Ag@PTX suppressed the growth of tumors. Conclusion: Our findings suggest that Ag@PTX may be a candidate as a chemopreventive agent and could be a highly efficient way to achieve anticancer synergism for human cancers.",TRUE,number
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546380,R138064,Poly dispercity index (PDI),L384168,0.115,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,number
R67,Medicinal Chemistry and Pharmaceutics,R137522,"Paclitaxel-loaded poly(D,L-lactide-co-glycolide) nanoparticles for radiotherapy in hypoxic human tumor cells in vitro",S544427,R137524,Drug entrapment efficiency (%),L383328,85.5,"Radioresistant hypoxic cells may contribute to the failure of radiation therapy in controlling certain tumors. Some studies have suggested the radiosensitizing effect of paclitaxel. The poly(D,L-lactide-co-glycolide)(PLGA) nanoparticles containing paclitaxel were prepared by o/w emulsification-solvent evaporation method. The physicochemical characteristics of the nanoparticles (i.e. encapsulation efficiency, particle size distribution, morphology, in vitro release) were studied. The morphology of the two human tumor cell lines: a carcinoma cervicis (HeLa) and a hepatoma (HepG2), treated with paclitaxel-loaded nanoparticles was photomicrographed. Flow cytometry was used to quantify the number of the tumor cells held in the G2/M phase of the cell cycle. The cellular uptake of nanoparticles was evaluated by transmission electronic microscopy. Cell viability was determined by the ability of single cell to form colonies in vitro. The prepared nanoparticles were spherical in shape with size between 200nm and 800nm. The encapsulation efficiency was 85.5%. The release behaviour of paclitaxel from the nanoparticles exhibited a biphasic pattern characterised by a fast initial release during the first 24 h, followed by a slower and continuous release. Co-culture of the two tumor cell lines with paclitaxel-loaded nanoparticles demonstrated that the cell morphology was changed and the released paclitaxel retained its bioactivity to block cells in the G2/M phase. The cellular uptake of nanoparticles was observed. The free paclitaxel and paclitaxel-loaded nanoparticles effectively sensitized hypoxic HeLa and HepG2 cells to radiation. Under this experimental condition, the radiosensitization of paclitaxel-loaded nanoparticles was more significant than that of free paclitaxel. Keywords: Paclitaxel; Drug delivery; Nanoparticle; Radiotherapy; Hypoxia; Human tumor cells; cellular uptake",TRUE,number
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546381,R138064,Drug entrapment efficiency (%),L384169,95.34,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,number
R67,Medicinal Chemistry and Pharmaceutics,R138058,"Formulation, optimization, hemocompatibility and pharmacokinetic evaluation of PLGA nanoparticles containing paclitaxel",S546379,R138064,Particle size of nanoparticles (nm),L384167,143.2,"Abstract Objective: Paclitaxel (PTX)-loaded polymer (Poly(lactic-co-glycolic acid), PLGA)-based nanoformulation was developed with the objective of formulating cremophor EL-free nanoformulation intended for intravenous use. Significance: The polymeric PTX nanoparticles free from the cremophor EL will help in eliminating the shortcomings of the existing delivery system as cremophor EL causes serious allergic reactions to the subjects after intravenous use. Methods and results: Paclitaxel-loaded nanoparticles were formulated by nanoprecipitation method. The diminutive nanoparticles (143.2 nm) with uniform size throughout (polydispersity index, 0.115) and high entrapment efficiency (95.34%) were obtained by employing the Box–Behnken design for the optimization of the formulation with the aid of desirability approach-based numerical optimization technique. Optimized levels for each factor viz. polymer concentration (X1), amount of organic solvent (X2), and surfactant concentration (X3) were 0.23%, 5 ml %, and 1.13%, respectively. The results of the hemocompatibility studies confirmed the safety of PLGA-based nanoparticles for intravenous administration. Pharmacokinetic evaluations confirmed the longer retention of PTX in systemic circulation. Conclusion: In a nutshell, the developed polymeric nanoparticle formulation of PTX precludes the inadequacy of existing PTX formulation and can be considered as superior alternative carrier system of the same.",TRUE,number
R279,Nanoscience and Nanotechnology,R135569,A Highly Sensitive and Flexible Capacitive Pressure Sensor Based on a Porous Three-Dimensional PDMS/Microsphere Composite,S536275,R135573,Sensibility of the pressure sensor ( /kPa),L378248,0.124,"In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa−1 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam’s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.",TRUE,number
R279,Nanoscience and Nanotechnology,R137470,Fabrication and characterization of Ga-doped ZnO / Si heterojunction nanodiodes,S544118,R137471,Ideality factor,L383168,2.2,"In this study, temperature-dependent electrical properties of n-type Ga-doped ZnO thin film / p-type Si nanowire heterojunction diodes were reported. Metal-assisted chemical etching (MACE) process was performed to fabricate Si nanowires. Ga-doped ZnO films were then deposited onto nanowires through chemical bath deposition (CBD) technique to build three-dimensional nanowire-based heterojunction diodes. Fabricated devices revealed significant diode characteristics in the temperature range of 220 - 360 K. Electrical measurements shown that diodes had a well-defined rectifying behavior with a good rectification ratio of 103 ±3 V at room temperature. Ideality factor (n) were changed from 2.2 to 1.2 with increasing temperature.",TRUE,number
R279,Nanoscience and Nanotechnology,R143695,Electrically conductive thermoplastic elastomer nanocomposites at ultralow graphene loading levels for strain sensor applications,S575018,R143697,Gauge Factor (GF),L402774,17.7,"An electrically conductive ultralow percolation threshold of 0.1 wt% graphene was observed in the thermoplastic polyurethane (TPU) nanocomposites. The homogeneously dispersed graphene effectively enhanced the mechanical properties of TPU significantly at a low graphene loading of 0.2 wt%. These nanocomposites were subjected to cyclic loading to investigate the influences of graphene loading, strain amplitude and strain rate on the strain sensing performances. The two dimensional graphene and the flexible TPU matrix were found to endow these nanocomposites with a wide range of strain sensitivity (gauge factor ranging from 0.78 for TPU with 0.6 wt% graphene at the strain rate of 0.1 min−1 to 17.7 for TPU with 0.2 wt% graphene at the strain rate of 0.3 min−1) and good sensing stability for different strain patterns. In addition, these nanocomposites demonstrated good recoverability and reproducibility after stabilization by cyclic loading. An analytical model based on tunneling theory was used to simulate the resistance response to strain under different strain rates. The change in the number of conductive pathways and tunneling distance under strain was responsible for the observed resistance-strain behaviors. This study provides guidelines for the fabrication of graphene based polymer strain sensors.",TRUE,number
R279,Nanoscience and Nanotechnology,R143705,A highly stretchable and sensitive strain sensor based on graphene–elastomer composites with a novel double-interconnected network,S575094,R143707,Gauge Factor (GF),L402837,82.5,"The construction of a continuous conductive network with a low percolation threshold plays a key role in fabricating a high performance strain sensor. Herein, a highly stretchable and sensitive strain sensor based on binary rubber blend/graphene was fabricated by a simple and effective assembly approach. A novel double-interconnected network composed of compactly continuous graphene conductive networks was designed and constructed using the composites, thereby resulting in an ultralow percolation threshold of 0.3 vol%, approximately 12-fold lower than that of the conventional graphene-based composites with a homogeneously dispersed morphology (4.0 vol%). Near the percolation threshold, the sensors could be stretched in excess of 100% applied strain, and exhibited a high stretchability, sensitivity (gauge factor ∼82.5) and good reproducibility (∼300 cycles) of up to 100% strain under cyclic tensile tests. The proposed strategy provides a novel effective approach for constructing a double-interconnected conductive network using polymer composites, and is very competitive for developing and designing high performance strain sensors.",TRUE,number
R279,Nanoscience and Nanotechnology,R148377,Bioinspired Cocatalysts Decorated WO3 Nanotube Toward Unparalleled Hydrogen Sulfide Chemiresistor,S595002,R148380,Gas response (S=Ra/Rg),L413619,203.5,"Herein, we incorporated dual biotemplates, i.e., cellulose nanocrystals (CNC) and apoferritin, into electrospinning solution to achieve three distinct benefits, i.e., (i) facile synthesis of a WO3 nanotube by utilizing the self-agglomerating nature of CNC in the core of as-spun nanofibers, (ii) effective sensitization by partial phase transition from WO3 to Na2W4O13 induced by interaction between sodium-doped CNC and WO3 during calcination, and (iii) uniform functionalization with monodispersive apoferritin-derived Pt catalytic nanoparticles (2.22 ± 0.42 nm). Interestingly, the sensitization effect of Na2W4O13 on WO3 resulted in highly selective H2S sensing characteristics against seven different interfering molecules. Furthermore, synergistic effects with a bioinspired Pt catalyst induced a remarkably enhanced H2S response ( Rair/ Rgas = 203.5), unparalleled selectivity ( Rair/ Rgas < 1.3 for the interfering molecules), and rapid response (<10 s)/recovery (<30 s) time at 1 ppm of H2S under 95% relative humidity level. This work paves the way for a new class of cosensitization routes to overcome critical shortcomings of SMO-based chemical sensors, thus providing a potential platform for diagnosis of halitosis.",TRUE,number
R58,Neuroscience and Neurobiology,R75482,Prevalence and Incidence of Epilepsy in Italy Based on a Nationwide Database,S346126,R75484,Females,R75651,7.7,"Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. © 2014 S. Karger AG, Basel",TRUE,number
R58,Neuroscience and Neurobiology,R75482,Prevalence and Incidence of Epilepsy in Italy Based on a Nationwide Database,S346125,R75484,Overall,R75649,7.9,"Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. © 2014 S. Karger AG, Basel",TRUE,number
R58,Neuroscience and Neurobiology,R75482,Prevalence and Incidence of Epilepsy in Italy Based on a Nationwide Database,S346128,R75484,Males,R75650,8.1,"Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. © 2014 S. Karger AG, Basel",TRUE,number
R172,Oceanography,R160733,Environmental controls on the seasonal carbon dioxide fluxes in the northeastern Indian Ocean,S641192,R160734,CO2 flux (lower limit),L438867,-20,"Total carbon dioxide (TCO 2) and computations of partial pressure of carbon dioxide (pCO 2) had been examined in Northerneastern region of Indian Ocean. It exhibit seasonal and spatial variability. North-south gradients in the pCO 2 levels were closely related to gradients in salinity caused by fresh water discharge received from rivers. Eddies observed in this region helped to elevate the nutrients availability and the biological controls by increasing the productivity. These phenomena elevated the carbon dioxide draw down during the fair seasons. Seasonal fluxes estimated from local wind speed and air-sea carbon dioxide difference indicate that during southwest monsoon, the northeastern Indian Ocean acts as a strong sink of carbon dioxide (-20.04 mmol m−2 d−1). Also during fall intermonsoon the area acts as a weak sink of carbon dioxide (-4.69 mmol m−2 d−1). During winter monsoon, this region behaves as a weak carbon dioxide source with an average sea to air flux of 4.77 mmol m−2 d−1. In the northern region, salinity levels in the surface level are high during winter compared to the other two seasons. Northeastern Indian Ocean shows significant intraseasonal variability in carbon dioxide fluxes that are mediated by eddies which provide carbon dioxide and nutrients from the subsurface waters to the mixed layer.",TRUE,number
R172,Oceanography,R147138,Evidence of active dinitrogen fixation in surface waters of the eastern tropical South Pacific during El Niño and La Niña events and evaluation of its potential nutrient controls: N2FIXATION IN THE ETSP,S589432,R147140,Volumetric N2 fixation rate (lower limit),L410257,0.01,"Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Niño (February 2010) and La Niña (March–April 2011) conditions, and from Low‐Nutrient, Low‐Chlorophyll (20°S) to High‐Nutrient, Low‐Chlorophyll (HNLC) (10°S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L−1 d−1, with higher rates measured during El Niño conditions compared to La Niña. High N2 fixations rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 µmol L−1, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water‐column integrated N2 fixation rates ranged from 4 to 53 µmol N m−2 d−1 at northern stations, and from 0 to 148 µmol m−2 d−1 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and surprisingly by concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro‐environments favorable for heterotrophic N2 fixation.",TRUE,number
R172,Oceanography,R155527,"Nitrogen fixation in the western equatorial Pacific: Rates, diazotrophic cyanobacterial size class distribution, and biogeochemical significance: N2FIXATION IN THE EQUATORIAL PACIFIC",S622981,R155529,Volumetric N2 fixation rate (lower limit),L428888,0.06,"A combination of 15N2 labeling, Tyramide Signal Amplification–Fluorescent in Situ Hybridization (TSA‐FISH) assay, and chemical analyses were performed along a trophic gradient (8000 km) in the equatorial Pacific. Nitrogen fixation rates were low (0.06 ± 0.02 to 2.8 ± 2.1 nmol L−1 d−1) in HNLC waters, higher in the warm pool (0.11 ± 0.0 to 18.2 ± 2.8 nmol L−1 d−1), and extremely high close to Papua New Guinea (38 ± 9 to 610 ± 46 nmol L−1 d−1). Rates attributed to the <10‐μm fraction accounted for 74% of total activity. Both unicellular and filamentous diazotrophs were detected and reached 17 cells mL−1 and 1.85 trichome mL−1. Unicellular diazotrophs were found to be free‐living in <10‐μm fraction or in association with mucilage, particles, or eukaryotes in the >10‐μm fraction, leading to a possible overestimation of this fraction to total N2 fixation. In oceanic waters, 98% of the unicellular diazotrophs were picoplanktonic. Finally, we found a clear longitudinal pattern of niche partitioning between diazotroph groups: while unicellular diazotrophs were present all along the transect, Trichodesmium spp. were detected only in coastal waters, where nitrogen fixation associated to both size fractions was greatly stimulated.",TRUE,number
R172,Oceanography,R160161,Quantifying the nitrous oxide source from coastal upwelling: N2O FROM COASTAL UPWELLING,S638102,R160176,N2O flux (upper limit),L437104,0.2,"A continuous record of atmospheric N2O measured from a tower in northern California captures strong pulses of N2O released by coastal upwelling events. The atmospheric record offers a unique, observation‐based method for quantifying the coastal N2O source. A coastal upwelling model is developed and compared to the constraints imposed by the atmospheric record in the Pacific Northwest coastal region. The upwelling model is based on Ekman theory and driven by high‐resolution wind and SST data and by relationships between subsurface N2O and temperature. A simplified version of the upwelling model is extended to the world's major eastern boundary regions to estimate a total coastal upwelling source of ∼0.2 ± >70% Tg N2O‐N/yr. This flux represents ∼5% of the total ocean source, estimated here at ∼4 Tg N2O‐N/yr using traditional gas‐transfer methods, and is probably largely neglected in current N2O budgets.",TRUE,number
R172,Oceanography,R160723,"Ocean carbon cycling in the Indian Ocean: 1. Spatiotemporal variability of inorganic carbon and air-sea CO2gas exchange: INDIAN OCEAN CARBON CYCLE, 1",S641081,R160724,Average CO2 flux,L438773,0.24,"The spatiotemporal variability of upper ocean inorganic carbon parameters and air‐sea CO2 exchange in the Indian Ocean was examined using inorganic carbon data collected as part of the World Ocean Circulation Experiment (WOCE) cruises in 1995. Multiple linear regression methods were used to interpolate and extrapolate the temporally and geographically limited inorganic carbon data set to the entire Indian Ocean basin using other climatological hydrographic and biogeochemical data. The spatiotemporal distributions of total carbon dioxide (TCO2), alkalinity, and seawater pCO2 were evaluated for the Indian Ocean and regions of interest including the Arabian Sea, Bay of Bengal, and 10°N–35°S zones. The Indian Ocean was a net source of CO2 to the atmosphere, and a net sea‐to‐air CO2 flux of +237 ± 132 Tg C yr−1 (+0.24 Pg C yr−1) was estimated. Regionally, the Arabian Sea, Bay of Bengal, and 10°N–10°S zones were perennial sources of CO2 to the atmosphere. In the 10°S–35°S zone, the CO2 sink or source status of the surface ocean shifts seasonally, although the region is a net oceanic sink of atmospheric CO2.",TRUE,number
R172,Oceanography,R160158,Nitrous oxide emissions from the Arabian Sea: A synthesis,S637973,R160166,N2O flux (lower limit),L437003,0.33,"Abstract. We computed high-resolution (1° latitude x 1° longitude) seasonal and annual nitrous oxide (N2O) concentration fields for the Arabian Sea surface layer using a database containing more than 2400 values measured between December 1977 and July 1997. N2O concentrations are highest during the southwest (SW) monsoon along the southern Indian continental shelf. Annual emissions range from 0.33 to 0.70 Tg N2O and are dominated by fluxes from coastal regions during the SW and northeast monsoons. Our revised estimate for the annual N2O flux from the Arabian Sea is much more tightly constrained than the previous consensus derived using averaged in-situ data from a smaller number of studies. However, the tendency to focus on measurements in locally restricted features in combination with insufficient seasonal data coverage leads to considerable uncertainties of the concentration fields and thus in the flux estimates, especially in the coastal zones of the northern and eastern Arabian Sea. The overall mean relative error of the annual N2O emissions from the Arabian Sea was estimated to be at least 65%.",TRUE,number
R172,Oceanography,R160146,Variabilities in the fluxes and annual emissions of nitrous oxide from the Arabian Sea,S638085,R160173,N2O flux (lower limit),L437096,0.56,"Extensive measurements of nitrous oxide (N2O) have been made during April–May 1994 (intermonsoon), February–March 1995 (northeast monsoon), July–August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind‐driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean‐to‐atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm−2 s−1 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56–0.76 Tg N2O per year which constitutes 13–17% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.",TRUE,number
R172,Oceanography,R155517,Dissolved Organic Matter Influences N2 Fixation in the New Caledonian Lagoon (Western Tropical South Pacific),S622890,R155519,Volumetric N2 fixation rate (lower limit),L428817,0.66,"Specialized prokaryotes performing biological dinitrogen (N2) fixation ('diazotrophs') provide an important source of fixed nitrogen in oligotrophic marine ecosystems such as tropical and subtropical oceans. In these waters, cyanobacterial photosynthetic diazotrophs are well known to be abundant and active, yet the role and contribution of non-cyanobacterial diazotrophs are currently unclear. The latter are not photosynthetic (here called 'heterotrophic') and hence require external sources of organic matter to sustain N2 fixation. Here we added the photosynthesis inhibitor 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) to estimate the N2 fixation potential of heterotrophic diazotrophs as compared to autotrophic ones. Additionally, we explored the influence of dissolved organic matter (DOM) on these diazotrophs along a coast to open ocean gradient in the surface waters of a subtropical coral lagoon (New Caledonia). Total N2 fixation (samples not amended with DCMU) ranged from 0.66 to 1.32 nmol N L-1 d-1. The addition of DCMU reduced N2 fixation by >90%, suggesting that the contribution of heterotrophic diazotrophs to overall N2 fixation activity was minor in this environment. Higher contribution of heterotrophic diazotrophs occurred in stations closer to the shore and coincided with the decreasing lability of DOM, as shown by various colored DOM and fluorescent DOM (CDOM and FDOM) indices. This suggests that heterotrophic N2 fixation is favored when labile DOM compounds are available. We tested the response of diazotrophs (in terms of nifH gene expression and bulk N2 fixation rates) upon the addition of a mix of carbohydrates ('DOC' treatment), amino acids ('DON' treatment), and phosphonates and phosphomonesters ('DOP' treatment). While nifH expression increased significantly in Trichodesmium exposed to the DOC treatment, bulk N2 fixation rates increased significantly only in the DOP treatment. The lack of nifH expression by gammaproteobacteria, in any of the DOM addition treatments applied, questions the contribution of non-cyanobacterial diazotrophs to fixed nitrogen inputs in the New Caledonian lagoon. While the metabolism and ecology of heterotrophic diazotrophs is currently elusive, a deeper understanding of their ecology and relationship with DOM is needed in the light of increased DOM inputs in coastal zones due to anthropogenic pressure.",TRUE,number
R172,Oceanography,R160158,Nitrous oxide emissions from the Arabian Sea: A synthesis,S637974,R160166,N2O flux (upper limit),L437004,0.7,"Abstract. We computed high-resolution (1° latitude x 1° longitude) seasonal and annual nitrous oxide (N2O) concentration fields for the Arabian Sea surface layer using a database containing more than 2400 values measured between December 1977 and July 1997. N2O concentrations are highest during the southwest (SW) monsoon along the southern Indian continental shelf. Annual emissions range from 0.33 to 0.70 Tg N2O and are dominated by fluxes from coastal regions during the SW and northeast monsoons. Our revised estimate for the annual N2O flux from the Arabian Sea is much more tightly constrained than the previous consensus derived using averaged in-situ data from a smaller number of studies. However, the tendency to focus on measurements in locally restricted features in combination with insufficient seasonal data coverage leads to considerable uncertainties of the concentration fields and thus in the flux estimates, especially in the coastal zones of the northern and eastern Arabian Sea. The overall mean relative error of the annual N2O emissions from the Arabian Sea was estimated to be at least 65%.",TRUE,number
R172,Oceanography,R147159,Evidence of high N<sub>2</sub> fixation rates in the temperate northeast Atlantic,S589650,R147161,Volumetric N2 fixation rate (lower limit),L410440,0.7,"Abstract. Diazotrophic activity and primary production (PP) were investigated along two transects (Belgica BG2014/14 and GEOVIDE cruises) off the western Iberian Margin and the Bay of Biscay in May 2014. Substantial N2 fixation activity was observed at 8 of the 10 stations sampled, ranging overall from 81 to 384 µmol N m−2 d−1 (0.7 to 8.2 nmol N L−1 d−1), with two sites close to the Iberian Margin situated between 38.8 and 40.7° N yielding rates reaching up to 1355 and 1533 µmol N m−2 d−1. Primary production was relatively lower along the Iberian Margin, with rates ranging from 33 to 59 mmol C m−2 d−1, while it increased towards the northwest away from the peninsula, reaching as high as 135 mmol C m−2 d−1. In agreement with the area-averaged Chl a satellite data contemporaneous with our study period, our results revealed that post-bloom conditions prevailed at most sites, while at the northwesternmost station the bloom was still ongoing. When converted to carbon uptake using Redfield stoichiometry, N2 fixation could support 1 % to 3 % of daily PP in the euphotic layer at most sites, except at the two most active sites where this contribution to daily PP could reach up to 25 %. At the two sites where N2 fixation activity was the highest, the prymnesiophyte–symbiont Candidatus Atelocyanobacterium thalassa (UCYN-A) dominated the nifH sequence pool, while the remaining recovered sequences belonged to non-cyanobacterial phylotypes. At all the other sites, however, the recovered nifH sequences were exclusively assigned phylogenetically to non-cyanobacterial phylotypes. The intense N2 fixation activities recorded at the time of our study were likely promoted by the availability of phytoplankton-derived organic matter produced during the spring bloom, as evidenced by the significant surface particulate organic carbon concentrations. Also, the presence of excess phosphorus signature in surface waters seemed to contribute to sustaining N2 fixation, particularly at the sites with extreme activities. These results provide a mechanistic understanding of the unexpectedly high N2 fixation in productive waters of the temperate North Atlantic and highlight the importance of N2 fixation for future assessment of the global N inventory.",TRUE,number
R172,Oceanography,R160146,Variabilities in the fluxes and annual emissions of nitrous oxide from the Arabian Sea,S638086,R160173,N2O flux (upper limit),L437097,0.76,"Extensive measurements of nitrous oxide (N2O) have been made during April–May 1994 (intermonsoon), February–March 1995 (northeast monsoon), July–August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind‐driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean‐to‐atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm−2 s−1 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56–0.76 Tg N2O per year which constitutes 13–17% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.",TRUE,number
R172,Oceanography,R160144,Nitrous oxide emissions from the Arabian Sea,S638065,R160172,N2O flux (lower limit),L437077,0.8,"Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65°E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99–103% during the intermonsoon and 103–230% during the SW monsoon. Computed annual emissions of 0.8–1.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.",TRUE,number
R172,Oceanography,R155520,Measurements of nitrogen fixation in the oligotrophic North Pacific Subtropical Gyre using a free-drifting submersible incubation device,S622918,R155522,Volumetric N2 fixation rate (lower limit),L428842,0.8,"One challenge in field-based marine microbial ecology is to achieve sufficient spatial resolution to obtain representative information about microbial distributions and biogeochemical processes. The challenges are exacerbated when conducting rate measurements of biological processes due to potential perturbations during sampling and incubation. Here we present the first application of a robotic microlaboratory, the 4 L-submersible incubation device (SID), for conducting in situ measurements of the rates of biological nitrogen (N2) fixation (BNF). The free-drifting autonomous instrument obtains samples from the water column that are incubated in situ after the addition of 15N2 tracer. After each of up to four consecutive incubation experiments, the 4-L sample is filtered and chemically preserved. Measured BNF rates from two deployments of the SID in the oligotrophic North Pacific ranged from 0.8 to 2.8 nmol N L−1 day−1, values comparable with simultaneous rate measurements obtained using traditional conductivity–temperature–depth (CTD)–rosette sampling followed by on-deck or in situ incubation. Future deployments of the SID will help to better resolve spatial variability of oceanic BNF, particularly in areas where recovery of seawater samples by CTD compromises their integrity, e.g. anoxic habitats.",TRUE,number
R172,Oceanography,R147138,Evidence of active dinitrogen fixation in surface waters of the eastern tropical South Pacific during El Niño and La Niña events and evaluation of its potential nutrient controls: N2FIXATION IN THE ETSP,S589421,R147140,Volumetric N2 fixation rate (upper limit),L410247,0.88,"Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Niño (February 2010) and La Niña (March–April 2011) conditions, and from Low‐Nutrient, Low‐Chlorophyll (20°S) to High‐Nutrient, Low‐Chlorophyll (HNLC) (10°S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L−1 d−1, with higher rates measured during El Niño conditions compared to La Niña. High N2 fixations rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 µmol L−1, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water‐column integrated N2 fixation rates ranged from 4 to 53 µmol N m−2 d−1 at northern stations, and from 0 to 148 µmol m−2 d−1 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and surprisingly by concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro‐environments favorable for heterotrophic N2 fixation.",TRUE,number
R172,Oceanography,R155517,Dissolved Organic Matter Influences N2 Fixation in the New Caledonian Lagoon (Western Tropical South Pacific),S622883,R155519,Volumetric N2 fixation rate (upper limit),L428811,1.32,"Specialized prokaryotes performing biological dinitrogen (N2) fixation ('diazotrophs') provide an important source of fixed nitrogen in oligotrophic marine ecosystems such as tropical and subtropical oceans. In these waters, cyanobacterial photosynthetic diazotrophs are well known to be abundant and active, yet the role and contribution of non-cyanobacterial diazotrophs are currently unclear. The latter are not photosynthetic (here called 'heterotrophic') and hence require external sources of organic matter to sustain N2 fixation. Here we added the photosynthesis inhibitor 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) to estimate the N2 fixation potential of heterotrophic diazotrophs as compared to autotrophic ones. Additionally, we explored the influence of dissolved organic matter (DOM) on these diazotrophs along a coast to open ocean gradient in the surface waters of a subtropical coral lagoon (New Caledonia). Total N2 fixation (samples not amended with DCMU) ranged from 0.66 to 1.32 nmol N L-1 d-1. The addition of DCMU reduced N2 fixation by >90%, suggesting that the contribution of heterotrophic diazotrophs to overall N2 fixation activity was minor in this environment. Higher contribution of heterotrophic diazotrophs occurred in stations closer to the shore and coincided with the decreasing lability of DOM, as shown by various colored DOM and fluorescent DOM (CDOM and FDOM) indices. This suggests that heterotrophic N2 fixation is favored when labile DOM compounds are available. We tested the response of diazotrophs (in terms of nifH gene expression and bulk N2 fixation rates) upon the addition of a mix of carbohydrates ('DOC' treatment), amino acids ('DON' treatment), and phosphonates and phosphomonesters ('DOP' treatment). While nifH expression increased significantly in Trichodesmium exposed to the DOC treatment, bulk N2 fixation rates increased significantly only in the DOP treatment. The lack of nifH expression by gammaproteobacteria, in any of the DOM addition treatments applied, questions the contribution of non-cyanobacterial diazotrophs to fixed nitrogen inputs in the New Caledonian lagoon. While the metabolism and ecology of heterotrophic diazotrophs is currently elusive, a deeper understanding of their ecology and relationship with DOM is needed in the light of increased DOM inputs in coastal zones due to anthropogenic pressure.",TRUE,number
R172,Oceanography,R160144,Nitrous oxide emissions from the Arabian Sea,S638066,R160172,N2O flux (upper limit),L437078,1.5,"Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65°E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99–103% during the intermonsoon and 103–230% during the SW monsoon. Computed annual emissions of 0.8–1.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.",TRUE,number
R172,Oceanography,R155520,Measurements of nitrogen fixation in the oligotrophic North Pacific Subtropical Gyre using a free-drifting submersible incubation device,S622916,R155522,Volumetric N2 fixation rate (upper limit),L428840,2.8,"One challenge in field-based marine microbial ecology is to achieve sufficient spatial resolution to obtain representative information about microbial distributions and biogeochemical processes. The challenges are exacerbated when conducting rate measurements of biological processes due to potential perturbations during sampling and incubation. Here we present the first application of a robotic microlaboratory, the 4 L-submersible incubation device (SID), for conducting in situ measurements of the rates of biological nitrogen (N2) fixation (BNF). The free-drifting autonomous instrument obtains samples from the water column that are incubated in situ after the addition of 15N2 tracer. After each of up to four consecutive incubation experiments, the 4-L sample is filtered and chemically preserved. Measured BNF rates from two deployments of the SID in the oligotrophic North Pacific ranged from 0.8 to 2.8 nmol N L−1 day−1, values comparable with simultaneous rate measurements obtained using traditional conductivity–temperature–depth (CTD)–rosette sampling followed by on-deck or in situ incubation. Future deployments of the SID will help to better resolve spatial variability of oceanic BNF, particularly in areas where recovery of seawater samples by CTD compromises their integrity, e.g. anoxic habitats.",TRUE,number
R172,Oceanography,R160733,Environmental controls on the seasonal carbon dioxide fluxes in the northeastern Indian Ocean,S641193,R160734,CO2 flux (upper limit),L438868,4.77,"Total carbon dioxide (TCO2) and computations of partial pressure of carbon dioxide (pCO2) had been examined in the northeastern region of the Indian Ocean. They exhibit seasonal and spatial variability. North-south gradients in the pCO2 levels were closely related to gradients in salinity caused by fresh water discharge received from rivers. Eddies observed in this region helped to elevate the nutrients availability and the biological controls by increasing the productivity. These phenomena elevated the carbon dioxide draw down during the fair seasons. Seasonal fluxes estimated from local wind speed and air-sea carbon dioxide difference indicate that during southwest monsoon, the northeastern Indian Ocean acts as a strong sink of carbon dioxide (-20.04 mmol m−2 d−1). Also during fall intermonsoon the area acts as a weak sink of carbon dioxide (-4.69 mmol m−2 d−1). During winter monsoon, this region behaves as a weak carbon dioxide source with an average sea to air flux of 4.77 mmol m−2 d−1. In the northern region, salinity levels in the surface level are high during winter compared to the other two seasons. Northeastern Indian Ocean shows significant intraseasonal variability in carbon dioxide fluxes that are mediated by eddies which provide carbon dioxide and nutrients from the subsurface waters to the mixed layer.",TRUE,number
R172,Oceanography,R147144,Nitrogen Fixation in Denitrified Marine Waters,S589490,R147145,Depth integrated N2 fixation rate (lower limit),L410307,7.5,"Nitrogen fixation is an essential process that biologically transforms atmospheric dinitrogen gas to ammonia, therefore compensating for nitrogen losses occurring via denitrification and anammox. Currently, inputs and losses of nitrogen to the ocean resulting from these processes are thought to be spatially separated: nitrogen fixation takes place primarily in open ocean environments (mainly through diazotrophic cyanobacteria), whereas nitrogen losses occur in oxygen-depleted intermediate waters and sediments (mostly via denitrifying and anammox bacteria). Here we report on rates of nitrogen fixation obtained during two oceanographic cruises in 2005 and 2007 in the eastern tropical South Pacific (ETSP), a region characterized by the presence of coastal upwelling and a major permanent oxygen minimum zone (OMZ). Our results show significant rates of nitrogen fixation in the water column; however, integrated rates from the surface down to 120 m varied by ∼30 fold between cruises (7.5±4.6 versus 190±82.3 µmol m−2 d−1). Moreover, rates were measured down to 400 m depth in 2007, indicating that the contribution to the integrated rates of the subsurface oxygen-deficient layer was ∼5 times higher (574±294 µmol m−2 d−1) than the oxic euphotic layer (48±68 µmol m−2 d−1). Concurrent molecular measurements detected the dinitrogenase reductase gene nifH in surface and subsurface waters. Phylogenetic analysis of the nifH sequences showed the presence of a diverse diazotrophic community at the time of the highest measured nitrogen fixation rates. Our results thus demonstrate the occurrence of nitrogen fixation in nutrient-rich coastal upwelling systems and, importantly, within the underlying OMZ. They also suggest that nitrogen fixation is a widespread process that can sporadically provide a supplementary source of fixed nitrogen in these regions.",TRUE,number
R172,Oceanography,R160186,Denitrification and nitrous oxide cycling within the upper oxycline of the eastern tropical South Pacific oxygen minimum zone,S638233,R160188,N2O flux (lower limit),L437219,12.7,"One of the shallowest, most intense oxygen minimum zones (OMZs) is found in the eastern tropical South Pacific, off northern Chile and southern Peru. It has a strong oxygen gradient (upper oxycline) and high N2O accumulation. N2O cycling by heterotrophic denitrification along the upper oxycline was studied by measuring N2O production and consumption rates using an improved acetylene blockage method. Dissolved N2O and its isotope (15N:14N ratio in N2O or δ15N) and isotopomer composition (intramolecular distribution of 15N in the N2O or δ15Nα and δ15Nβ), dissolved O2, nutrients, and other oceanographic variables were also measured. Strong N2O accumulation (up to 86 nmol L−1) was observed in the upper oxycline followed by a decline (around 8‐12 nmol L−1) toward the OMZ core. N2O production rates by denitrification (NO2− reduction to N2O) were 2.25 to 50.0 nmol L−1 d−1, whereas N2O consumption rates (N2O reduction to N2) were 2.73 and 70.8 nmol L−1 d−1. δ15N in N2O increased from 8.57% in the middle oxycline (50‐m depth) to 14.87% toward the OMZ core (100‐m depth), indicating the progressive use of N2O as an electron acceptor by denitrifying organisms. Isotopomer signals of N2O (δ15Nα and δ15Nβ) showed an abrupt change at the middle oxycline, indicating different mechanisms of N2O production and consumption in this layer. Thus, partial denitrification along with aerobic ammonium oxidation appears to be responsible for N2O accumulation in the upper oxycline, where O2 levels fluctuate widely; N2O reduction, on the other hand, is an important pathway for N2 production. As a result, the proportion of N2O consumption relative to its production increased as O2 decreased toward the OMZ core. A N2O mass balance in the subsurface layer indicates that only a small amount of the gas could be effluxed into the atmosphere (12.7‐30.7 µmol m−2 d−1) and that most N2O is used as an electron acceptor during denitrification (107‐468 µmol m−2 d−1).",TRUE,number
R172,Oceanography,R160186,Denitrification and nitrous oxide cycling within the upper oxycline of the eastern tropical South Pacific oxygen minimum zone,S638234,R160188,N2O flux (upper limit),L437220,30.7,"One of the shallowest, most intense oxygen minimum zones (OMZs) is found in the eastern tropical South Pacific, off northern Chile and southern Peru. It has a strong oxygen gradient (upper oxycline) and high N2O accumulation. N2O cycling by heterotrophic denitrification along the upper oxycline was studied by measuring N2O production and consumption rates using an improved acetylene blockage method. Dissolved N2O and its isotope (15N:14N ratio in N2O or δ15N) and isotopomer composition (intramolecular distribution of 15N in the N2O or δ15Nα and δ15Nβ), dissolved O2, nutrients, and other oceanographic variables were also measured. Strong N2O accumulation (up to 86 nmol L−1) was observed in the upper oxycline followed by a decline (around 8‐12 nmol L−1) toward the OMZ core. N2O production rates by denitrification (NO2− reduction to N2O) were 2.25 to 50.0 nmol L−1 d−1, whereas N2O consumption rates (N2O reduction to N2) were 2.73 and 70.8 nmol L−1 d−1. δ15N in N2O increased from 8.57% in the middle oxycline (50‐m depth) to 14.87% toward the OMZ core (100‐m depth), indicating the progressive use of N2O as an electron acceptor by denitrifying organisms. Isotopomer signals of N2O (δ15Nα and δ15Nβ) showed an abrupt change at the middle oxycline, indicating different mechanisms of N2O production and consumption in this layer. Thus, partial denitrification along with aerobic ammonium oxidation appears to be responsible for N2O accumulation in the upper oxycline, where O2 levels fluctuate widely; N2O reduction, on the other hand, is an important pathway for N2 production. As a result, the proportion of N2O consumption relative to its production increased as O2 decreased toward the OMZ core. A N2O mass balance in the subsurface layer indicates that only a small amount of the gas could be effluxed into the atmosphere (12.7‐30.7 µmol m−2 d−1) and that most N2O is used as an electron acceptor during denitrification (107‐468 µmol m−2 d−1).",TRUE,number
R172,Oceanography,R109394,Heterotrophic bacteria as major nitrogen fixers in the euphotic zone of the Indian Ocean,S500215,R109590,Upper limit (nitrogen fixation rates),L361923,47.1,"Diazotrophy in the Indian Ocean is poorly understood compared to that in the Atlantic and Pacific Oceans. We first examined the basin‐scale community structure of diazotrophs and their nitrogen fixation activity within the euphotic zone during the northeast monsoon period along about 69°E from 17°N to 20°S in the oligotrophic Indian Ocean, where a shallow nitracline (49–59 m) prevailed widely and the sea surface temperature (SST) was above 25°C. Phosphate was detectable at the surface throughout the study area. The dissolved iron concentration and the ratio of iron to nitrate + nitrite at the surface were significantly higher in the Arabian Sea than in the equatorial and southern Indian Ocean. Nitrogen fixation in the Arabian Sea (24.6–47.1 μmolN m−2 d−1) was also significantly greater than that in the equatorial and southern Indian Ocean (6.27–16.6 μmolN m−2 d−1), indicating that iron could control diazotrophy in the Indian Ocean. Phylogenetic analysis of nifH showed that most diazotrophs belonged to the Proteobacteria and that cyanobacterial diazotrophs were absent in the study area except in the Arabian Sea. Furthermore, nitrogen fixation was not associated with light intensity throughout the study area. These results are consistent with nitrogen fixation in the Indian Ocean, being largely performed by heterotrophic bacteria and not by cyanobacteria. The low cyanobacterial diazotrophy was attributed to the shallow nitracline, which is rarely observed in the Pacific and Atlantic oligotrophic oceans. Because the shallower nitracline favored enhanced upward nitrate flux, the competitive advantage of cyanobacterial diazotrophs over nondiazotrophic phytoplankton was not as significant as it is in other oligotrophic oceans.",TRUE,number
R172,Oceanography,R141354,Rates of dinitrogen fixation and the abundance of diazotrophs in North American coastal waters between Cape Hatteras and Georges Bank,S565461,R141356,Volumetric N2 fixation rate (upper limit),L396819,49.8,"We coupled dinitrogen (N2) fixation rate estimates with molecular biological methods to determine the activity and abundance of diazotrophs in coastal waters along the temperate North American Mid‐Atlantic continental shelf during multiple seasons and cruises. Volumetric rates of N2 fixation were as high as 49.8 nmol N L−1 d−1 and areal rates as high as 837.9 µmol N m−2 d−1 in our study area. Our results suggest that N2 fixation occurs at high rates in coastal shelf waters that were previously thought to be unimportant sites of N2 fixation and so were excluded from calculations of pelagic marine N2 fixation. Unicellular N2‐fixing group A cyanobacteria were the most abundant diazotrophs in the Atlantic coastal waters and their abundance was comparable to, or higher than, that measured in oceanic regimes where they were discovered. High rates of N2 fixation and the high abundance of diazotrophs along the North American Mid‐Atlantic continental shelf highlight the need to revise marine N budgets to include coastal N2 fixation. Integrating areal rates of N2 fixation over the continental shelf area between Cape Hatteras and Nova Scotia, the estimated N2 fixation in this temperate shelf system is about 0.02 Tmol N yr−1, the amount previously calculated for the entire North Atlantic continental shelf. Additional studies should provide spatially, temporally, and seasonally resolved rate estimates from coastal systems to better constrain N inputs via N2 fixation from the neritic zone.",TRUE,number
R172,Oceanography,R141354,Rates of dinitrogen fixation and the abundance of diazotrophs in North American coastal waters between Cape Hatteras and Georges Bank,S565474,R141356,Depth integrated N2 fixation rate (upper limit),L396831,837.9,"We coupled dinitrogen (N2) fixation rate estimates with molecular biological methods to determine the activity and abundance of diazotrophs in coastal waters along the temperate North American Mid‐Atlantic continental shelf during multiple seasons and cruises. Volumetric rates of N2 fixation were as high as 49.8 nmol N L−1 d−1 and areal rates as high as 837.9 µmol N m−2 d−1 in our study area. Our results suggest that N2 fixation occurs at high rates in coastal shelf waters that were previously thought to be unimportant sites of N2 fixation and so were excluded from calculations of pelagic marine N2 fixation. Unicellular N2‐fixing group A cyanobacteria were the most abundant diazotrophs in the Atlantic coastal waters and their abundance was comparable to, or higher than, that measured in oceanic regimes where they were discovered. High rates of N2 fixation and the high abundance of diazotrophs along the North American Mid‐Atlantic continental shelf highlight the need to revise marine N budgets to include coastal N2 fixation. Integrating areal rates of N2 fixation over the continental shelf area between Cape Hatteras and Nova Scotia, the estimated N2 fixation in this temperate shelf system is about 0.02 Tmol N yr−1, the amount previously calculated for the entire North Atlantic continental shelf. Additional studies should provide spatially, temporally, and seasonally resolved rate estimates from coastal systems to better constrain N inputs via N2 fixation from the neritic zone.",TRUE,number
R138056,Planetary Sciences,R138508,Raman spectroscopy as a method for mineral identification on lunar robotic exploration missions,S549977,R138510,Excitation Frequency/Wavelength (nm),L386995,514.5,"fiber bundle that carried the laser beam and returned the scattered radiation could be placed against surfaces at any desired angle by a deployment mechanism; otherwise, the instrument would need no moving parts. A modern micro-Raman spectrometer with its beam broadened (to expand the spot to 50-µm diameter) and set for low resolution (7 cm−1 in the 100-1400 cm−1 region relative to 514.5-nm excitation) was used to simulate the spectra anticipated from a rover instrument. We present spectra for lunar mineral grains, <1 mm soil fines, breccia fragments, and glasses. From frequencies of olivine peaks, we derived sufficiently precise forsterite contents to correlate the analyzed grains to known rock types and we obtained appropriate forsterite contents from weak signals above background in soil fines and breccias. Peak positions of pyroxenes were sufficiently well determined to distinguish among orthorhombic, monoclinic, and triclinic (pyroxenoid) structures; additional information can be obtained from pyroxene spectra, but requires further laboratory calibration. Plagioclase provided sharp peaks in soil fines and most breccias even when the glass content was high.",TRUE,number
R138056,Planetary Sciences,R147335,Goldschmidt crater and the Moon's north polar region: Results from the Moon Mineralogy Mapper (M3),S590726,R147337,OH (nm),L411220,3000,"[1] Soils within the impact crater Goldschmidt have been identified as spectrally distinct from the local highland material. High spatial and spectral resolution data from the Moon Mineralogy Mapper (M3) on the Chandrayaan-1 orbiter are used to examine the character of Goldschmidt crater in detail. Spectral parameters applied to a north polar mosaic of M3 data are used to discern large-scale compositional trends at the northern high latitudes, and spectra from three widely separated regions are compared to spectra from Goldschmidt. The results highlight the compositional diversity of the lunar nearside, in particular, where feldspathic soils with a low-Ca pyroxene component are pervasive, but exclusively feldspathic regions and small areas of basaltic composition are also observed. Additionally, we find that the relative strengths of the diagnostic OH/H2O absorption feature near 3000 nm are correlated with the mineralogy of the host material. On both global and local scales, the strongest hydrous absorptions occur on the more feldspathic surfaces. Thus, M3 data suggest that while the feldspathic soils within Goldschmidt crater are enhanced in OH/H2O compared to the relatively mafic nearside polar highlands, their hydration signatures are similar to those observed in the feldspathic highlands on the farside.",TRUE,number
R102,Plant Pathology,R108704,Anastomosis Groups and Pathogenicity of Rhizoctonia solani and Binucleate Rhizoctonia from Potato in South Africa,S495162,R108706,AG HG4III,L358632,2.3,"A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.",TRUE,number
R102,Plant Pathology,R108704,Anastomosis Groups and Pathogenicity of Rhizoctonia solani and Binucleate Rhizoctonia from Potato in South Africa,S495165,R108706,AG R,L358635,2.3,"A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.",TRUE,number
R102,Plant Pathology,R108704,Anastomosis Groups and Pathogenicity of Rhizoctonia solani and Binucleate Rhizoctonia from Potato in South Africa,S495160,R108706,AG 2.2IIIB,L358630,6.1,"A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.",TRUE,number
R102,Plant Pathology,R108704,Anastomosis Groups and Pathogenicity of Rhizoctonia solani and Binucleate Rhizoctonia from Potato in South Africa,S495163,R108706,AG A,L358633,12.2,"A survey of anastomosis groups (AG) of Rhizoctonia spp. associated with potato diseases was conducted in South Africa. In total, 112 Rhizoctonia solani and 19 binucleate Rhizoctonia (BNR) isolates were recovered from diseased potato plants, characterized for AG and pathogenicity. The AG identity of the isolates was confirmed using phylogenetic analysis of the internal transcribed spacer region of ribosomal DNA. R. solani isolates recovered belonged to AG 3-PT, AG 2-2IIIB, AG 4HG-I, AG 4HG-III, and AG 5, while BNR isolates belonged to AG A and AG R, with frequencies of 74, 6.1, 2.3, 2.3, 0.8, 12.2, and 2.3%, respectively. R. solani AG 3-PT was the most predominant AG and occurred in all the potato-growing regions sampled, whereas the other AG occurred in distinct locations. Different AG grouped into distinct clades, with high maximum parsimony and maximum-likelihood bootstrap support for both R. solani and BNR. An experiment under greenhouse conditions with representative isolates from different AG showed differences in aggressiveness between and within AG. Isolates of AG 2-2IIIB, AG 4HG-III, and AG R were the most aggressive in causing stem canker while AG 3-PT, AG 5, and AG R caused black scurf. This is the first comprehensive survey of R. solani and BNR on potato in South Africa using a molecular-based approach. This is the first report of R. solani AG 2-2IIIB and AG 4 HG-I causing stem and stolon canker and BNR AG A and AG R causing stem canker and black scurf on potato in South Africa.",TRUE,number
R185,Plasma and Beam Physics,R139083,Vacuum UV Radiation of a Plasma Jet Operated With Rare Gases at Atmospheric Pressure,S554167,R139171,Excitation_frequency,L389950,1.2,"The vacuum ultraviolet (VUV) emissions from 115 to 200 nm from the effluent of an RF (1.2 MHz) capillary jet fed with pure argon and binary mixtures of argon and xenon or krypton (up to 20%) are analyzed. The feed gas mixture is emanating into air at normal pressure. The Ar2 excimer second continuum, observed in the region of 120-135 nm, prevails in the pure Ar discharge. It decreases when small amounts (as low as 0.5%) of Xe or Kr are added. In that case, the resonant emission of Xe at 147 nm (or 124 nm for Kr, respectively) becomes dominant. The Xe2 second continuum at 172 nm appears for higher admixtures of Xe (10%). Furthermore, several N I emission lines, the O I resonance line, and H I line appear due to ambient air. Two absorption bands (120.6 and 124.6 nm) are present in the spectra. Their origin could be unequivocally associated to O2 and O3. The radiance is determined end-on at varying axial distance in absolute units for various mixtures of Ar/Xe and Ar/Kr and compared to pure Ar. Integration over the entire VUV wavelength region provides the integrated spectral distribution. Maximum values of 2.2 mW·mm−2·sr−1 are attained in pure Ar and at a distance of 4 mm from the outlet nozzle of the discharge. By adding diminutive admixtures of Kr or Xe, the intensity and spectral distribution is effectively changed.",TRUE,number
R185,Plasma and Beam Physics,R139065,Etching materials with an atmospheric-pressure plasma jet,S554041,R139165,Excitation_frequency,L389836,13.56,"A plasma jet has been developed for etching materials at atmospheric pressure and between 100 and C. Gas mixtures containing helium, oxygen and carbon tetrafluoride were passed between an outer, grounded electrode and a centre electrode, which was driven by 13.56 MHz radio frequency power at 50 to 500 W. At a flow rate of , a stable, arc-free discharge was produced. This discharge extended out through a nozzle at the end of the electrodes, forming a plasma jet. Materials placed 0.5 cm downstream from the nozzle were etched at the following maximum rates: for Kapton ( and He only), for silicon dioxide, for tantalum and for tungsten. Optical emission spectroscopy was used to identify the electronically excited species inside the plasma and outside in the jet effluent.",TRUE,number
R185,Plasma and Beam Physics,R139086,Generation of atomic oxygen in the effluent of an atmospheric pressure plasma jet,S554185,R139172,Excitation_frequency,L389966,13.56,"The planar 13.56 MHz RF-excited low temperature atmospheric pressure plasma jet (APPJ) investigated in this study is operated with helium feed gas and a small molecular oxygen admixture. The effluent leaving the discharge through the jet's nozzle contains very few charged particles and a high reactive oxygen species' density. As its main reactive radical, essential for numerous applications, the ground state atomic oxygen density in the APPJ's effluent is measured spatially resolved with two-photon absorption laser induced fluorescence spectroscopy. The atomic oxygen density at the nozzle reaches a value of ~10¹⁶ cm−3. Even at several centimetres distance still 1% of this initial atomic oxygen density can be detected. Optical emission spectroscopy (OES) reveals the presence of short living excited oxygen atoms up to 10 cm distance from the jet's nozzle. The measured high ground state atomic oxygen density and the unaccounted for presence of excited atomic oxygen require further investigations on a possible energy transfer from the APPJ's discharge region into the effluent: energetic vacuum ultraviolet radiation, measured by OES down to 110 nm, reaches far into the effluent where it is presumed to be responsible for the generation of atomic oxygen.",TRUE,number
R185,Plasma and Beam Physics,R139112,Absolute ozone densities in a radio-frequency driven atmospheric pressure plasma using two-beam UV-LED absorption spectroscopy and numerical simulations,S554358,R139181,Excitation_frequency,L390121,13.56,"The efficient generation of reactive oxygen species (ROS) in cold atmospheric pressure plasma jets (APPJs) is an increasingly important topic, e.g. for the treatment of temperature sensitive biological samples in the field of plasma medicine. A 13.56 MHz radio-frequency (rf) driven APPJ device operated with helium feed gas and small admixtures of oxygen (up to 1%), generating a homogeneous glow-mode plasma at low gas temperatures, was investigated. Absolute densities of ozone, one of the most prominent ROS, were measured across the 11 mm wide discharge channel by means of broadband absorption spectroscopy using the Hartley band centered at λ = 255 nm. A two-beam setup with a reference beam in Mach-Zehnder configuration is employed for improved signal-to-noise ratio allowing high-sensitivity measurements in the investigated single-pass weak-absorbance regime. The results are correlated to gas temperature measurements, deduced from the rotational temperature of the N2(C³Πu → B³Πg, υ = 0 → 2) optical emission from introduced air impurities. The observed opposing trends of both quantities as a function of rf power input and oxygen admixture are analysed and explained in terms of a zero-dimensional plasma-chemical kinetics simulation. It is found that the gas temperature as well as the densities of O and O2(b¹Σg+) influence the absolute O3 densities when the rf power is varied.",TRUE,number
R185,Plasma and Beam Physics,R139135,2D spatially resolved O atom density profiles in an atmospheric pressure plasma jet: from the active plasma volume to the effluent,S554537,R139189,Excitation_frequency,L390284,13.56,"Two-dimensional spatially resolved absolute atomic oxygen densities are measured within an atmospheric pressure micro plasma jet and in its effluent. The plasma is operated in helium with an admixture of 0.5% of oxygen at 13.56 MHz and with a power of 1 W. Absolute atomic oxygen densities are obtained using two photon absorption laser induced fluorescence spectroscopy. The results are interpreted based on measurements of the electron dynamics by phase resolved optical emission spectroscopy in combination with a simple model that balances the production of atomic oxygen with its losses due to chemical reactions and diffusion. Within the discharge, the atomic oxygen density builds up with a rise time of 600 µs along the gas flow and reaches a plateau of 8 × 10¹⁵ cm−3. In the effluent, the density decays exponentially with a decay time of 180 µs (corresponding to a decay length of 3 mm at a gas flow of 1.0 slm). It is found that both the species formation behavior and the maximum distance between the jet nozzle and substrates for possible oxygen treatments of surfaces can be controlled by adjusting the gas flow.",TRUE,number
R11,Science,R25803,Intermetallic Compound Pd2Ga as a Selective Catalyst for the Semi-Hydrogenation of Acetylene: From Model to High Performance Systems,S78699,R25804,P (MPa),L49386,0.1,"A novel nanostructured Pd2Ga intermetallic catalyst is presented and compared to elemental Pd and a macroscopic bulk Pd2Ga material concerning physical and chemical properties. The new material was prepared by controlled co-precipitation from a single phase layered double hydroxide precursor or hydrotalcite-like compound, of the composition Pd0.025Mg0.675Ga0.3(OH)2(CO3)0.15 ∙ m H2O. Upon thermal reduction in hydrogen, bimetallic nanoparticles of an average size less than 10 nm and a porous MgO/MgGa2O4 support are formed. HRTEM images confirmed the presence of the intermetallic compound Pd2Ga and are corroborated by XPS investigations which revealed an interaction between Pd and Ga. Due to the relatively high dispersion of the intermetallic compound, the catalytic activity of the sample in the semi-hydrogenation of acetylene was more than five thousand times higher than observed for a bulk Pd2Ga model catalyst. Interestingly, the high selectivity of the model catalysts towards the semi-hydrogenated product of 74% was only slightly lowered to 70% for the nano-structured catalyst, while an elemental Pd reference catalyst showed only a selectivity of around 20% under these testing conditions. This result indicates the structural integrity of the intermetallic compound and the absence of elemental Pd in the nano-sized particles. Thus, this work serves as an example of how the unique properties of an intermetallic compound, well-studied as a model catalyst, can be made accessible as real high-performing materials, allowing establishment of structure-performance relationships and other application-related further investigations.
The general synthesis approach is assumed to be applicable to several Pd-X intermetallic catalysts for X being elements forming hydrotalcite-like precursors in their ionic form.",TRUE,number
R11,Science,R70587,Prediction of Recurrent Clostridium Difficile Infection Using Comprehensive Electronic Medical Records in an Integrated Healthcare Delivery System,S335927,R70588,C Statistic,L242711,0.605,"BACKGROUND Predicting recurrent Clostridium difficile infection (rCDI) remains difficult. METHODS. We employed a retrospective cohort design. Granular electronic medical record (EMR) data had been collected from patients hospitalized at 21 Kaiser Permanente Northern California hospitals. The derivation dataset (2007–2013) included data from 9,386 patients who experienced incident CDI (iCDI) and 1,311 who experienced their first CDI recurrences (rCDI). The validation dataset (2014) included data from 1,865 patients who experienced incident CDI and 144 who experienced rCDI. Using multiple techniques, including machine learning, we evaluated more than 150 potential predictors. Our final analyses evaluated 3 models with varying degrees of complexity and 1 previously published model. RESULTS Despite having a large multicenter cohort and access to granular EMR data (eg, vital signs, and laboratory test results), none of the models discriminated well (c statistics, 0.591–0.605), had good calibration, or had good explanatory power. CONCLUSIONS Our ability to predict rCDI remains limited. Given currently available EMR technology, improvements in prediction will require incorporating new variables because currently available data elements lack adequate explanatory power. Infect Control Hosp Epidemiol 2017;38:1196–1203",TRUE,number
R11,Science,R70595,"A Generalizable, Data-Driven Approach to Predict Daily Risk of Clostridium difficile Infection at Two Large Academic Health Centers",S335985,R70596,AUROC,L242761,0.82,"OBJECTIVE An estimated 293,300 healthcare-associated cases of Clostridium difficile infection (CDI) occur annually in the United States. To date, research has focused on developing risk prediction models for CDI that work well across institutions. However, this one-size-fits-all approach ignores important hospital-specific factors. We focus on a generalizable method for building facility-specific models. We demonstrate the applicability of the approach using electronic health records (EHR) from the University of Michigan Hospitals (UM) and the Massachusetts General Hospital (MGH). METHODS We utilized EHR data from 191,014 adult admissions to UM and 65,718 adult admissions to MGH. We extracted patient demographics, admission details, patient history, and daily hospitalization details, resulting in 4,836 features from patients at UM and 1,837 from patients at MGH. We used L2 regularized logistic regression to learn the models, and we measured the discriminative performance of the models on held-out data from each hospital. RESULTS Using the UM and MGH test data, the models achieved area under the receiver operating characteristic curve (AUROC) values of 0.82 (95% confidence interval [CI], 0.80–0.84) and 0.75 ( 95% CI, 0.73–0.78), respectively. Some predictive factors were shared between the 2 models, but many of the top predictive factors differed between facilities. CONCLUSION A data-driven approach to building models for estimating daily patient risk for CDI was used to build institution-specific models at 2 large hospitals with different patient populations and EHR systems. In contrast to traditional approaches that focus on developing models that apply across hospitals, our generalizable approach yields risk-stratification models tailored to an institution. 
These hospital-specific models allow for earlier and more accurate identification of high-risk patients and better targeting of infection prevention strategies. Infect Control Hosp Epidemiol 2018;39:425–433",TRUE,number
R11,Science,R70622,Improving Prediction of Surgical Site Infection Risk with Multilevel Modeling,S336160,R70623,AUROC,L242900,0.84,"Background Surgical site infection (SSI) surveillance is a key factor in the elaboration of strategies to reduce SSI occurrence and in providing surgeons with appropriate data feedback (risk indicators, clinical prediction rule). Aim To improve the predictive performance of an individual-based SSI risk model by considering a multilevel hierarchical structure. Patients and Methods Data were collected anonymously by the French SSI active surveillance system in 2011. An SSI diagnosis was made by the surgical teams and infection control practitioners following standardized criteria. A random 20% sample comprising 151 hospitals, 502 wards and 62280 patients was used. Three-level (patient, ward, hospital) hierarchical logistic regression models were initially performed. Parameters were estimated using the simulation-based Markov Chain Monte Carlo procedure. Results A total of 623 SSI were diagnosed (1%). The hospital level was discarded from the analysis as it did not contribute to variability of SSI occurrence (p = 0.32). Established individual risk factors (patient history, surgical procedure and hospitalization characteristics) were identified. A significant heterogeneity in SSI occurrence between wards was found (median odds ratio [MOR] 3.59, 95% credibility interval [CI] 3.03 to 4.33) after adjusting for patient-level variables. The effects of the follow-up duration varied between wards (p<10−9), with an increased heterogeneity when follow-up was <15 days (MOR 6.92, 95% CI 5.31 to 9.07]). The final two-level model significantly improved the discriminative accuracy compared to the single level reference model (p<10−9), with an area under the ROC curve of 0.84. 
Conclusion This study sheds new light on the respective contribution of patient-, ward- and hospital-levels to SSI occurrence and demonstrates the significant impact of the ward level over and above risk factors present at patient level (i.e., independently from patient case-mix).",TRUE,number
R11,Science,R70554,Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach,S335649,R70555,AUCPR,L242477,0.88,"Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. 
Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.",TRUE,number
R11,Science,R70554,Prediction of Sepsis in the Intensive Care Unit With Minimal Electronic Health Record Data: A Machine Learning Approach,S335839,R70577,AUROC,L242645,0.88,"Background Sepsis is one of the leading causes of mortality in hospitalized patients. Despite this fact, a reliable means of predicting sepsis onset remains elusive. Early and accurate sepsis onset predictions could allow more aggressive and targeted therapy while maintaining antimicrobial stewardship. Existing detection methods suffer from low performance and often require time-consuming laboratory test results. Objective To study and validate a sepsis prediction method, InSight, for the new Sepsis-3 definitions in retrospective data, make predictions using a minimal set of variables from within the electronic health record data, compare the performance of this approach with existing scoring systems, and investigate the effects of data sparsity on InSight performance. Methods We apply InSight, a machine learning classification system that uses multivariable combinations of easily obtained patient data (vitals, peripheral capillary oxygen saturation, Glasgow Coma Score, and age), to predict sepsis using the retrospective Multiparameter Intelligent Monitoring in Intensive Care (MIMIC)-III dataset, restricted to intensive care unit (ICU) patients aged 15 years or more. Following the Sepsis-3 definitions of the sepsis syndrome, we compare the classification performance of InSight versus quick sequential organ failure assessment (qSOFA), modified early warning score (MEWS), systemic inflammatory response syndrome (SIRS), simplified acute physiology score (SAPS) II, and sequential organ failure assessment (SOFA) to determine whether or not patients will become septic at a fixed period of time before onset. We also test the robustness of the InSight system to random deletion of individual input observations. 
Results In a test dataset with 11.3% sepsis prevalence, InSight produced superior classification performance compared with the alternative scores as measured by area under the receiver operating characteristic curves (AUROC) and area under precision-recall curves (APR). In detection of sepsis onset, InSight attains AUROC = 0.880 (SD 0.006) at onset time and APR = 0.595 (SD 0.016), both of which are superior to the performance attained by SIRS (AUROC: 0.609; APR: 0.160), qSOFA (AUROC: 0.772; APR: 0.277), and MEWS (AUROC: 0.803; APR: 0.327) computed concurrently, as well as SAPS II (AUROC: 0.700; APR: 0.225) and SOFA (AUROC: 0.725; APR: 0.284) computed at admission (P<.001 for all comparisons). Similar results are observed for 1-4 hours preceding sepsis onset. In experiments where approximately 60% of input data are deleted at random, InSight attains an AUROC of 0.781 (SD 0.013) and APR of 0.401 (SD 0.015) at sepsis onset time. Even with 60% of data missing, InSight remains superior to the corresponding SIRS scores (AUROC and APR, P<.001), qSOFA scores (P=.0095; P<.001) and superior to SOFA and SAPS II computed at admission (AUROC and APR, P<.001), where all of these comparison scores (except InSight) are computed without data deletion. Conclusions Despite using little more than vitals, InSight is an effective tool for predicting sepsis onset and performs well even with randomly missing data.",TRUE,number
R11,Science,R70585,Development and validation of a Clostridium difficile infection risk prediction model,S335921,R70586,AUROC,L242707,0.88,"Objective. To develop and validate a risk prediction model that could identify patients at high risk for Clostridium difficile infection (CDI) before they develop disease. Design and Setting. Retrospective cohort study in a tertiary care medical center. Patients. Patients admitted to the hospital for at least 48 hours during the calendar year 2003. Methods. Data were collected electronically from the hospital's Medical Informatics database and analyzed with logistic regression to determine variables that best predicted patients' risk for development of CDI. Model discrimination and calibration were calculated. The model was bootstrapped 500 times to validate the predictive accuracy. A receiver operating characteristic curve was calculated to evaluate potential risk cutoffs. Results. A total of 35,350 admitted patients, including 329 with CDI, were studied. Variables in the risk prediction model were age, CDI pressure, times admitted to hospital in the previous 60 days, modified Acute Physiology Score, days of treatment with high-risk antibiotics, whether albumin level was low, admission to an intensive care unit, and receipt of laxatives, gastric acid suppressors, or antimotility drugs. The calibration and discrimination of the model were very good to excellent (C index, 0.88; Brier score, 0.009). Conclusions. The CDI risk prediction model performed well. Further study is needed to determine whether it could be used in a clinical setting to prevent CDI-associated outcomes and reduce costs.",TRUE,number
R11,Science,R70618,A diagnostic algorithm for the surveillance of deep surgical site infections after colorectal surgery,S336136,R70619,AUROC,L242880,0.95,"Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012–2015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932–0.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.",TRUE,number
R11,Science,R70614,Maximizing Interpretability and Cost-Effectiveness of Surgical Site Infection (SSI) Predictive Models Using Feature-Specific Regularized Logistic Regression on Preoperative Temporal Data,S336110,R70615,AUROC,L242858,0.967,"This study describes a novel approach to solve the surgical site infection (SSI) classification problem. Feature engineering has traditionally been one of the most important steps in solving complex classification problems, especially in cases with temporal data. The described novel approach is based on abstraction of temporal data recorded in three temporal windows. Maximum likelihood L1-norm (lasso) regularization was used in penalized logistic regression to predict the onset of surgical site infection occurrence based on available patient blood testing results up to the day of surgery. Prior knowledge of predictors (blood tests) was integrated in the modelling by introduction of penalty factors depending on blood test prices and an early stopping parameter limiting the maximum number of selected features used in predictive modelling. Finally, solutions resulting in higher interpretability and cost-effectiveness were demonstrated. Using repeated holdout cross-validation, the baseline C-reactive protein (CRP) classifier achieved a mean AUC of 0.801, whereas our best full lasso model achieved a mean AUC of 0.956. Best model testing results were achieved for full lasso model with maximum number of features limited at 20 features with an AUC of 0.967. Presented models showed the potential to not only support domain experts in their decision making but could also prove invaluable for improvement in prediction of SSI occurrence, which may even help setting new guidelines in the field of preoperative SSI prevention and surveillance.",TRUE,number
R11,Science,R33991,Facultative catadromy of the eel Anguilla japonica between freshwater and seawater habitats,S117882,R33992,Freshwater,L71188,2.5,"To confirm the occurrence of marine residents of the Japanese eel, Anguilla japonica, which have never entered freshwater ('sea eels'), we measured Sr and Ca concentrations by X-ray electron microprobe analysis of the otoliths of 69 yellow and silver eels, collected from 10 localities in seawater and freshwater habitats around Japan, and classified their migratory histories. Two-dimensional images of the Sr concentration in the otoliths showed that all specimens generally had a high Sr core at the center of their otolith, which corresponded to a period of their leptocephalus and early glass eel stages in the ocean, but there were a variety of different patterns of Sr concentration and concentric rings outside the central core. Line analysis of Sr/Ca ratios along the radius of each otolith showed peaks (ca 15 × 10⁻³) between the core and out to about 150 µm (elver mark). The pattern change of the Sr/Ca ratio outside of 150 µm indicated 3 general categories of migratory history: 'river eels', 'estuarine eels' and 'sea eels'. These 3 categories corresponded to mean values of Sr/Ca ratios of ≥ 6.0 × 10⁻³ for sea eels, which spent most of their life in the sea and did not enter freshwater, of 2.5 to 6.0 × 10⁻³ for estuarine eels, which inhabited estuaries or switched between different habitats, and of <2.5 × 10⁻³ for river eels, which entered and remained in freshwater river habitats after arrival in the estuary. The occurrence of sea eels was 20% of all specimens examined and that of river eels, 23%, while estuarine eels were the most prevalent (57%). The occurrence of sea eels was confirmed at 4 localities in Japanese coastal waters, including offshore islands, a small bay and an estuary. 
The finding of estuarine eels as an intermediate type, which appear to frequently move between different habitats, and their presence at almost all localities, suggested that A. japonica has a flexible pattern of migration, with an ability to adapt to various habitats and salinities. Thus, anguillid eel migrations into freshwater are clearly not an obligatory migratory pathway, and this form of diadromy should be defined as facultative catadromy, with the sea eel as one of several ecophenotypes. Furthermore, this study indicates that eels which utilize the marine environment to various degrees during their juvenile growth phase may make a substantial contribution to the spawning stock each year.",TRUE,number
R11,Science,R34001,Use of otolith Sr:Ca ratios to study the riverine migratory behaviors of Japanese eel Anguilla japonica,S117932,R34002,Seawater,L71215,5.1,"To understand the migratory behavior and habitat use of the Japanese eel Anguilla japonica in the Kaoping River, SW Taiwan, the temporal changes of strontium (Sr) and calcium (Ca) contents in otoliths of the eels in combination with age data were examined by wavelength dispersive X-ray spectrometry with an electron probe microanalyzer. Ages of the eel were determined by the annulus mark in their otolith. The pattern of the Sr:Ca ratios in the otoliths, before the elver stage, was similar among all specimens. Post-elver stage Sr:Ca ratios indicated that the eels experienced different salinity histories in their growth phase yellow stage. The mean (±SD) Sr:Ca ratios in otoliths beyond elver check of the 6 yellow eels from the freshwater middle reach were 1.8 ± 0.2 × 10⁻³ with a maximum value of 3.73 × 10⁻³. Sr:Ca ratios of less than 4 × 10⁻³ were used to discriminate the freshwater from seawater resident eels. Eels from the lower reach of the river were classified into 3 types: (1) freshwater contingents, Sr:Ca ratio <4 × 10⁻³, constituted 14% of the eels examined; (2) seawater contingent, Sr:Ca ratio 5.1 ± 1.1 × 10⁻³ (5%); and (3) estuarine contingent, Sr:Ca ratios ranged from 0 to 10 × 10⁻³, with migration between freshwater and seawater (81%). The frequency distribution of the 3 contingents differed between yellow and silver eel stages (0.01 < p < 0.05 for each case) and changed with age of the eel, indicating that most of the eels stayed in the estuary for the first year then migrated to the freshwater until 6 yr old. 
The eel population in the river system was dominated by the estuarine contingent, probably because the estuarine environment was more stable and had a larger carrying capacity than the freshwater middle reach did, and also due to a preference for brackish water by the growth-phase, yellow eel.",TRUE,number
R11,Science,R70618,A diagnostic algorithm for the surveillance of deep surgical site infections after colorectal surgery,S336131,R70619,Specificity,L242875,68.7,"Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012–2015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932–0.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.",TRUE,number
R11,Science,R70593,A Multi-Center Prospective Derivation and Validation of a Clinical Prediction Tool for Severe Clostridium difficile Infection,S335972,R70594,Accuracy,L242750,72.5,"Background and Aims Prediction of severe clinical outcomes in Clostridium difficile infection (CDI) is important to inform management decisions for optimum patient care. Currently, treatment recommendations for CDI vary based on disease severity but validated methods to predict severe disease are lacking. The aim of the study was to derive and validate a clinical prediction tool for severe outcomes in CDI. Methods A cohort totaling 638 patients with CDI was prospectively studied at three tertiary care clinical sites (Boston, Dublin and Houston). The clinical prediction rule (CPR) was developed by multivariate logistic regression analysis using the Boston cohort and the performance of this model was then evaluated in the combined Houston and Dublin cohorts. Results The CPR included the following three binary variables: age ≥ 65 years, peak serum creatinine ≥2 mg/dL and peak peripheral blood leukocyte count of ≥20,000 cells/μL. The Clostridium difficile severity score (CDSS) correctly classified 76.5% (95% CI: 70.87-81.31) and 72.5% (95% CI: 67.52-76.91) of patients in the derivation and validation cohorts, respectively. In the validation cohort, CDSS scores of 0, 1, 2 or 3 were associated with severe clinical outcomes of CDI in 4.7%, 13.8%, 33.3% and 40.0% of cases respectively. Conclusions We prospectively derived and validated a clinical prediction rule for severe CDI that is simple, reliable and accurate and can be used to identify high-risk patients most likely to benefit from measures to prevent complications of CDI.",TRUE,number
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335778,R70571,Sensitivity,L242590,81.5,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,number
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335780,R70571,Precision,L242592,84.6,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,number
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335774,R70571,NPV,L242586,88.6,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,number
R11,Science,R70541,A Pediatric Infection Screening System with a Radar Respiration Monitor for Rapid Detection of Seasonal Influenza among Outpatient Children,S335775,R70571,Specificity,L242587,90.7,"Background: Seasonal influenza virus outbreaks cause annual epidemics, mostly during winter in temperate zone countries, especially resulting in increased morbidity and higher mortality in children. In order to conduct rapid screening for influenza in pediatric outpatient units, we developed a pediatric infection screening system with a radar respiration monitor. Methods: The system conducts influenza screening within 10 seconds based on vital signs (i.e., respiration rate monitored using a 24 GHz microwave radar; facial temperature, using a thermopile array; and heart rate, using a pulse photosensor). A support vector machine (SVM) classification method was used to discriminate influenza children from healthy children based on vital signs. To assess the classification performance of the screening system that uses the SVM, we conducted influenza screening for 70 children (i.e., 27 seasonal influenza patients (11 ± 2 years) at a pediatric clinic and 43 healthy control subjects (9 ± 4 years) at a pediatric dental clinic) in the winter of 2013-2014. Results: The screening system using the SVM identified 26 subjects with influenza (22 of the 27 influenza patients and 4 of the 43 healthy subjects). The system discriminated 44 subjects as healthy (5 of the 27 influenza patients and 39 of the 43 healthy subjects), with sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of 81.5%, 90.7%, 84.6%, and 88.6%, respectively. Conclusion: The SVM-based screening system achieved classification results for the outpatient children based on vital signs with comparatively high NPV within 10 seconds. At pediatric clinics and hospitals, our system seems potentially useful in the first screening step for infections in the future.",TRUE,number
R11,Science,R70539,"Development of an infection screening system for entry inspection at airport quarantine stations using ear temperature, heart and respiration rates",S335768,R70570,Sensitivity,L242581,92.3,"After the outbreak of severe acute respiratory syndrome (SARS) in 2003, many international airport quarantine stations conducted fever-based screening to identify infected passengers using infrared thermography for preventing global pandemics. Due to environmental factors affecting measurement of facial skin temperature with thermography, some previous studies revealed the limits of authenticity in detecting infectious symptoms. In order to implement more strict entry screening in the epidemic seasons of emerging infectious diseases, we developed an infection screening system for airport quarantines using multi-parameter vital signs. This system can automatically detect infected individuals within several tens of seconds by a neural-network-based discriminant function using measured vital signs, i.e., heart rate obtained by a reflective photo sensor, respiration rate determined by a 10-GHz non-contact respiration radar, and the ear temperature monitored by a thermography. In this paper, to reduce the environmental effects on thermography measurement, we adopted the ear temperature as a new screening indicator instead of facial skin. We tested the system on 13 influenza patients and 33 normal subjects. The sensitivity of the infection screening system in detecting influenza were 92.3%, which was higher than the sensitivity reported in our previous paper (88.0%) with average facial skin temperature.",TRUE,number
R11,Science,R30590,Average of Synthetic Exact Filters,S101840,R30591,Accuracy (%) deyeo0:,L61135,98.5,"This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time.",TRUE,number
R11,Science,R70618,A diagnostic algorithm for the surveillance of deep surgical site infections after colorectal surgery,S336134,R70619,Sensitivity,L242878,98.5,"Abstract Objective: Surveillance of surgical site infections (SSIs) is important for infection control and is usually performed through retrospective manual chart review. The aim of this study was to develop an algorithm for the surveillance of deep SSIs based on clinical variables to enhance efficiency of surveillance. Design: Retrospective cohort study (2012–2015). Setting: A Dutch teaching hospital. Participants: We included all consecutive patients who underwent colorectal surgery excluding those with contaminated wounds at the time of surgery. All patients were evaluated for deep SSIs through manual chart review, using the Centers for Disease Control and Prevention (CDC) criteria as the reference standard. Analysis: We used logistic regression modeling to identify predictors that contributed to the estimation of diagnostic probability. Bootstrapping was applied to increase generalizability, followed by assessment of statistical performance and clinical implications. Results: In total, 1,606 patients were included, of whom 129 (8.0%) acquired a deep SSI. The final model included postoperative length of stay, wound class, readmission, reoperation, and 30-day mortality. The model achieved 68.7% specificity and 98.5% sensitivity and an area under the receiver operator characteristic (ROC) curve (AUC) of 0.950 (95% CI, 0.932–0.969). Positive and negative predictive values were 21.5% and 99.8%, respectively. Applying the algorithm resulted in a 63.4% reduction in the number of records requiring full manual review (from 1,606 to 590). Conclusions: This 5-parameter model identified 98.5% of patients with a deep SSI. The model can be used to develop semiautomatic surveillance of deep SSIs after colorectal surgery, which may further improve efficiency and quality of SSI surveillance.",TRUE,number
R11,Science,R29741,Economic Development and Environmental Quality in Nigeria: Is There an Environmental Kuznets Curve?,S98701,R29742,EKC Turnaround point(s) 2,R29738,280.84,"This study utilizes standard- and nested-EKC models to investigate the income-environment relation for Nigeria, between 1960 and 2008. The results from the standard-EKC model provides weak evidence of an inverted-U shaped relationship with turning point (T.P) around $280.84, while the nested model presents strong evidence of an N-shaped relationship between income and emissions in Nigeria, with a T.P around $237.23. Tests for structural breaks caused by the 1973 oil price shocks and 1986 Structural Adjustment are not rejected, implying that these factors have not significantly affected the income-environment relationship in Nigeria. Further, results from the rolling interdecadal analysis shows that the observed relationship is stable and insensitive to the sample interval chosen. Overall, our findings imply that economic development is compatible with environmental improvements in Nigeria. However, tighter and concentrated environmental policy regimes will be required to ensure that the relationship is maintained around the first two-strands of the N-shape",TRUE,number
R11,Science,R27347,Influence of the shot peening temperature on the relaxation behaviour of residual stresses during cyclic bending,S88245,R27348,Steel Grade,L54626,4140,"Shot peening of steels at elevated temperatures (warm peening) can improve the fatigue behaviour of workpieces. For the steel AISI 4140 (German grade 42CrMo4) in a quenched and tempered condition, it is shown that this is not only caused by the higher compressive residual stresses induced but also due to an enlarged stability of these residual stresses during cyclic bending. This can be explained by strain aging effects during shot peening, which cause different and more stable dislocation structures.",TRUE,number
R11,Science,R27362,Influence of Optimized Warm Peening on Residual Stress Stability and Fatigue Strength of AISI 4140 in Different Material States,S88328,R27363,Steel Grade,L54681,4140,"Using a modified air blasting machine warm peening at 20 °C < T ≤ 410 °C was feasible. An optimized peening temperature of about 310 °C was identified for a 450 °C quenched and tempered steel AISI 4140. Warm peening was also investigated for a normalized, a 650 °C quenched and tempered, and a martensitically hardened material state. The quasi-static surface compressive yield strengths as well as the cyclic surface yield strengths were determined from residual stress relaxation tests conducted at different stress amplitudes and numbers of loading cycles. Dynamic and static strain aging effects acting during and after warm peening clearly increased the residual stress stability and the alternating bending strength for all material states.",TRUE,number
R11,Science,R27366,Consideration of shot peening treatment applied to a high strength aeronautical steel with different hardnesses,S88340,R27367,Steel Grade,L54688,4340,"One of the most important components in an aircraft is its landing gear, due to the high load that it is submitted to during, principally, the take off and landing. For this reason, the AISI 4340 steel is widely used in the aircraft industry for fabrication of structural components, in which strength and toughness are fundamental design requirements [1]. Fatigue is an important parameter to be considered in the behavior of mechanical components subjected to constant and variable amplitude loading. One of the known ways to improve fatigue resistance is by using the shot peening process to induce a compressive residual stress in the surface layers of the material, making the nucleation and propagation of fatigue cracks more difficult [2,3]. The shot peening results depend on various parameters. These parameters can be grouped in three different classes according to Fathallah et al. [4]: parameters describing the treated part, parameters of stream energy produced by the process and parameters describing the contact conditions. Furthermore, relaxation of the CRSF induced by shot peening has been observed during the fatigue process [5-7]. In the present research the gain in fatigue life of AISI 4340 steel, obtained by shot peening treatment, is evaluated under the two different hardnesses used in landing gear. Rotating bending fatigue tests were conducted and the CRSF was measured by X-ray tensometry prior to and during fatigue tests. The evaluation of fatigue life due to the shot peening in relation to the relaxation of CRSF, of crack sources position and roughness variation is done.",TRUE,number
R281,Social and Behavioral Sciences,R76141,Bullying Victimization among In-School Adolescents in Ghana: Analysis of Prevalence and Correlates from the Global School-Based Health Survey,S348430,R76150,"Prevalence, %",L249196,41.3,"(1) Background: Although bullying victimization is a phenomenon that is increasingly being recognized as a public health and mental health concern in many countries, research attention on this aspect of youth violence in low- and middle-income countries, especially sub-Saharan Africa, is minimal. The current study examined the national prevalence of bullying victimization and its correlates among in-school adolescents in Ghana. (2) Methods: A sample of 1342 in-school adolescents in Ghana (55.2% males; 44.8% females) aged 12–18 was drawn from the 2012 Global School-based Health Survey (GSHS) for the analysis. Self-reported bullying victimization “during the last 30 days, on how many days were you bullied?” was used as the central criterion variable. Three-level analyses using descriptive, Pearson chi-square, and binary logistic regression were performed. Results of the regression analysis were presented as adjusted odds ratios (aOR) at 95% confidence intervals (CIs), with a statistical significance pegged at p < 0.05. (3) Results: Bullying victimization was prevalent among 41.3% of the in-school adolescents. Pattern of results indicates that adolescents in SHS 3 [aOR = 0.34, 95% CI = 0.25, 0.47] and SHS 4 [aOR = 0.30, 95% CI = 0.21, 0.44] were less likely to be victims of bullying. Adolescents who had sustained injury [aOR = 2.11, 95% CI = 1.63, 2.73] were more likely to be bullied compared to those who had not sustained any injury. The odds of bullying victimization were higher among adolescents who had engaged in physical fight [aOR = 1.90, 95% CI = 1.42, 2.25] and those who had been physically attacked [aOR = 1.73, 95% CI = 1.32, 2.27]. Similarly, adolescents who felt lonely were more likely to report being bullied [aOR = 1.50, 95% CI = 1.08, 2.08] as against those who did not feel lonely. Additionally, adolescents with a history of suicide attempts were more likely to be bullied [aOR = 1.63, 95% CI = 1.11, 2.38] and those who used marijuana had higher odds of bullying victimization [aOR = 3.36, 95% CI = 1.10, 10.24]. (4) Conclusions: Current findings require the need for policy makers and school authorities in Ghana to design and implement policies and anti-bullying interventions (e.g., Social Emotional Learning (SEL), Rational Emotive Behavioral Education (REBE), Marijuana Cessation Therapy (MCT)) focused on addressing behavioral issues, mental health and substance abuse among in-school adolescents.",TRUE,number
R57,Virology,R178376,SARS-CoV-2 viral load as a predictor for disease severity in outpatients and hospitalised patients with COVID-19: A prospective cohort study,S699744,R178379,Lower confidence limit,R178381,0.81,"Introduction We aimed to examine if severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) cycle quantification (Cq) value, as a surrogate for SARS-CoV-2 viral load, could predict hospitalisation and disease severity in adult patients with coronavirus disease 2019 (COVID-19). Methods We performed a prospective cohort study of adult patients with PCR positive SARS-CoV-2 airway samples including all out-patients registered at the Department of Infectious Diseases, Odense University Hospital (OUH) March 9-March 17 2020, and all hospitalised patients at OUH March 10-April 21 2020. To identify associations between Cq-values and a) hospital admission and b) a severe outcome, logistic regression analyses were used to compute odds ratios (OR) and 95% Confidence Intervals (CI), adjusting for confounding factors (aOR). Results We included 87 non-hospitalised and 82 hospitalised patients. The median baseline Cq-value was 25.5 (interquartile range 22.3–29.0). We found a significant association between increasing Cq-value and hospital-admission in univariate analysis (OR 1.11, 95% CI 1.04–1.19). However, this was due to an association between time from symptom onset to testing and Cq-values, and no association was found in the adjusted analysis (aOR 1.08, 95% CI 0.94–1.23). In hospitalised patients, a significant association between lower Cq-values and higher risk of severe disease was found (aOR 0.89, 95% CI 0.81–0.98), independent of timing of testing. Conclusions SARS-CoV-2 PCR Cq-values in outpatients correlated with time after symptom onset, but was not a predictor of hospitalisation. However, in hospitalised patients lower Cq-values were associated with higher risk of severe disease.",TRUE,number
R57,Virology,R178376,SARS-CoV-2 viral load as a predictor for disease severity in outpatients and hospitalised patients with COVID-19: A prospective cohort study,S699864,R178429,Lower confidence limit,R178432,0.94,"Introduction We aimed to examine if severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) cycle quantification (Cq) value, as a surrogate for SARS-CoV-2 viral load, could predict hospitalisation and disease severity in adult patients with coronavirus disease 2019 (COVID-19). Methods We performed a prospective cohort study of adult patients with PCR positive SARS-CoV-2 airway samples including all out-patients registered at the Department of Infectious Diseases, Odense University Hospital (OUH) March 9-March 17 2020, and all hospitalised patients at OUH March 10-April 21 2020. To identify associations between Cq-values and a) hospital admission and b) a severe outcome, logistic regression analyses were used to compute odds ratios (OR) and 95% Confidence Intervals (CI), adjusting for confounding factors (aOR). Results We included 87 non-hospitalised and 82 hospitalised patients. The median baseline Cq-value was 25.5 (interquartile range 22.3–29.0). We found a significant association between increasing Cq-value and hospital-admission in univariate analysis (OR 1.11, 95% CI 1.04–1.19). However, this was due to an association between time from symptom onset to testing and Cq-values, and no association was found in the adjusted analysis (aOR 1.08, 95% CI 0.94–1.23). In hospitalised patients, a significant association between lower Cq-values and higher risk of severe disease was found (aOR 0.89, 95% CI 0.81–0.98), independent of timing of testing. Conclusions SARS-CoV-2 PCR Cq-values in outpatients correlated with time after symptom onset, but was not a predictor of hospitalisation. However, in hospitalised patients lower Cq-values were associated with higher risk of severe disease.",TRUE,number
R57,Virology,R178376,SARS-CoV-2 viral load as a predictor for disease severity in outpatients and hospitalised patients with COVID-19: A prospective cohort study,S699743,R178379,higher confidence limit,R178380,0.98,"Introduction We aimed to examine if severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) polymerase chain reaction (PCR) cycle quantification (Cq) value, as a surrogate for SARS-CoV-2 viral load, could predict hospitalisation and disease severity in adult patients with coronavirus disease 2019 (COVID-19). Methods We performed a prospective cohort study of adult patients with PCR positive SARS-CoV-2 airway samples including all out-patients registered at the Department of Infectious Diseases, Odense University Hospital (OUH) March 9-March 17 2020, and all hospitalised patients at OUH March 10-April 21 2020. To identify associations between Cq-values and a) hospital admission and b) a severe outcome, logistic regression analyses were used to compute odds ratios (OR) and 95% Confidence Intervals (CI), adjusting for confounding factors (aOR). Results We included 87 non-hospitalised and 82 hospitalised patients. The median baseline Cq-value was 25.5 (interquartile range 22.3–29.0). We found a significant association between increasing Cq-value and hospital-admission in univariate analysis (OR 1.11, 95% CI 1.04–1.19). However, this was due to an association between time from symptom onset to testing and Cq-values, and no association was found in the adjusted analysis (aOR 1.08, 95% CI 0.94–1.23). In hospitalised patients, a significant association between lower Cq-values and higher risk of severe disease was found (aOR 0.89, 95% CI 0.81–0.98), independent of timing of testing. Conclusions SARS-CoV-2 PCR Cq-values in outpatients correlated with time after symptom onset, but was not a predictor of hospitalisation. However, in hospitalised patients lower Cq-values were associated with higher risk of severe disease.",TRUE,number
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123628,R36112,R0 estimates (average),L74420,1.87,"Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,number
R57,Virology,R36109,Transmission interval estimates suggest pre-symptomatic spread of COVID-19,S123624,R36110,R0 estimates (average),L74418,1.97,"Abstract Background As the COVID-19 epidemic is spreading, incoming data allows us to quantify values of key variables that determine the transmission and the effort required to control the epidemic. We determine the incubation period and serial interval distribution for transmission clusters in Singapore and in Tianjin. We infer the basic reproduction number and identify the extent of pre-symptomatic transmission. Methods We collected outbreak information from Singapore and Tianjin, China, reported from Jan.19-Feb.26 and Jan.21-Feb.27, respectively. We estimated incubation periods and serial intervals in both populations. Results The mean incubation period was 7.1 (6.13, 8.25) days for Singapore and 9 (7.92, 10.2) days for Tianjin. Both datasets had shorter incubation periods for earlier-occurring cases. The mean serial interval was 4.56 (2.69, 6.42) days for Singapore and 4.22 (3.43, 5.01) for Tianjin. We inferred that early in the outbreaks, infection was transmitted on average 2.55 and 2.89 days before symptom onset (Singapore, Tianjin). The estimated basic reproduction number for Singapore was 1.97 (1.45, 2.48) secondary cases per infective; for Tianjin it was 1.87 (1.65, 2.09) secondary cases per infective. Conclusions Estimated serial intervals are shorter than incubation periods in both Singapore and Tianjin, suggesting that pre-symptomatic transmission is occurring. Shorter serial intervals lead to lower estimates of R0, which suggest that half of all secondary infections should be prevented to control spread.",TRUE,number
R57,Virology,R37003,Real-Time Estimation of the Risk of Death from Novel Coronavirus (COVID-19) Infection: Inference Using Exported Cases,S124028,R37004,R0 estimates (average),L75021,2.1,"The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number—the average number of secondary cases generated by a single primary case in a naïve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data.",TRUE,number
R57,Virology,R12243,Pattern of early human-to-human transmission of Wuhan 2019-nCoV,S18697,R12244,R0 estimates (average),L12307,2.2,"ABSTRACT On December 31, 2019, the World Health Organization was notified about a cluster of pneumonia of unknown aetiology in the city of Wuhan, China. Chinese authorities later identified a new coronavirus (2019-nCoV) as the causative agent of the outbreak. As of January 23, 2020, 655 cases have been confirmed in China and several other countries. Understanding the transmission characteristics and the potential for sustained human-to-human transmission of 2019-nCoV is critically important for coordinating current screening and containment strategies, and determining whether the outbreak constitutes a public health emergency of international concern (PHEIC). We performed stochastic simulations of early outbreak trajectories that are consistent with the epidemiological findings to date. We found the basic reproduction number, R 0 , to be around 2.2 (90% high density interval 1.4—3.8), indicating the potential for sustained human-to-human transmission. Transmission characteristics appear to be of a similar magnitude to severe acute respiratory syndrome-related coronavirus (SARS-CoV) and the 1918 pandemic influenza. These findings underline the importance of heightened screening, surveillance and control efforts, particularly at airports and other travel hubs, in order to prevent further international spread of 2019-nCoV.",TRUE,number
R57,Virology,R12247,"Early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia",S18726,R12248,R0 estimates (average),L12328,2.2,"Abstract Background The initial cases of novel coronavirus (2019-nCoV)–infected pneumonia (NCIP) occurred in Wuhan, Hubei Province, China, in December 2019 and January 2020. We analyzed data on the first 425 confirmed cases in Wuhan to determine the epidemiologic characteristics of NCIP. Methods We collected information on demographic characteristics, exposure history, and illness timelines of laboratory-confirmed cases of NCIP that had been reported by January 22, 2020. We described characteristics of the cases and estimated the key epidemiologic time-delay distributions. In the early period of exponential growth, we estimated the epidemic doubling time and the basic reproductive number. Results Among the first 425 patients with confirmed NCIP, the median age was 59 years and 56% were male. The majority of cases (55%) with onset before January 1, 2020, were linked to the Huanan Seafood Wholesale Market, as compared with 8.6% of the subsequent cases. The mean incubation period was 5.2 days (95% confidence interval [CI], 4.1 to 7.0), with the 95th percentile of the distribution at 12.5 days. In its early stages, the epidemic doubled in size every 7.4 days. With a mean serial interval of 7.5 days (95% CI, 5.3 to 19), the basic reproductive number was estimated to be 2.2 (95% CI, 1.4 to 3.9). Conclusions On the basis of this information, there is evidence that human-to-human transmission has occurred among close contacts since the middle of December 2019. Considerable efforts to reduce transmission will be required to control outbreaks if similar dynamics apply elsewhere. Measures to prevent or reduce transmission should be implemented in populations at risk. (Funded by the Ministry of Science and Technology of China and others.)",TRUE,number
R57,Virology,R12237,"Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak",S18647,R12238,R0 estimates (average),L12269,2.24,"Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( γ ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.",TRUE,number
R57,Virology,R37006,Estimating the Unreported Number of Novel Coronavirus (2019-nCoV) Cases in China in the First Half of January 2020: A Data-Driven Modelling Analysis of the Early Outbreak,S124056,R37007,R0 estimates (average),L75042,2.56,"Background: In December 2019, an outbreak of respiratory illness caused by a novel coronavirus (2019-nCoV) emerged in Wuhan, China and has swiftly spread to other parts of China and a number of foreign countries. The 2019-nCoV cases might have been under-reported roughly from 1 to 15 January 2020, and thus we estimated the number of unreported cases and the basic reproduction number, R0, of 2019-nCoV. Methods: We modelled the epidemic curve of 2019-nCoV cases, in mainland China from 1 December 2019 to 24 January 2020 through the exponential growth. The number of unreported cases was determined by the maximum likelihood estimation. We used the serial intervals (SI) of infection caused by two other well-known coronaviruses (CoV), Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS) CoVs, as approximations of the unknown SI for 2019-nCoV to estimate R0. Results: We confirmed that the initial growth phase followed an exponential growth pattern. The under-reporting was likely to have resulted in 469 (95% CI: 403–540) unreported cases from 1 to 15 January 2020. The reporting rate after 17 January 2020 was likely to have increased 21-fold (95% CI: 18–25) in comparison to the situation from 1 to 17 January 2020 on average. We estimated the R0 of 2019-nCoV at 2.56 (95% CI: 2.49–2.63). Conclusion: The under-reporting was likely to have occurred during the first half of January 2020 and should be considered in future investigation.",TRUE,number
R57,Virology,R36132,Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study,S123783,R36133,R0 estimates (average),L74532,2.6,"We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.",TRUE,number
R57,Virology,R12231,Novel coronavirus 2019-nCoV: early estimation of epidemiological parameters and epidemic predictions,S18604,R12232,R0 estimates (average),L12238,3.11,"Since first identified, the epidemic scale of the recently emerged novel coronavirus (2019-nCoV) in Wuhan, China, has increased rapidly, with cases arising across China and other countries and regions. using a transmission model, we estimate a basic reproductive number of 3.11 (95%CI, 2.39-4.13); 58-76% of transmissions must be prevented to stop increasing; Wuhan case ascertainment of 5.0% (3.6-7.4); 21022 (11090-33490) total infections in Wuhan 1 to 22 January.",TRUE,number
R57,Virology,R37003,Real-Time Estimation of the Risk of Death from Novel Coronavirus (COVID-19) Infection: Inference Using Exported Cases,S124033,R37005,R0 estimates (average),L75024,3.2,"The exported cases of 2019 novel coronavirus (COVID-19) infection that were confirmed outside China provide an opportunity to estimate the cumulative incidence and confirmed case fatality risk (cCFR) in mainland China. Knowledge of the cCFR is critical to characterize the severity and understand the pandemic potential of COVID-19 in the early stage of the epidemic. Using the exponential growth rate of the incidence, the present study statistically estimated the cCFR and the basic reproduction number—the average number of secondary cases generated by a single primary case in a naïve population. We modeled epidemic growth either from a single index case with illness onset on 8 December 2019 (Scenario 1), or using the growth rate fitted along with the other parameters (Scenario 2) based on data from 20 exported cases reported by 24 January 2020. The cumulative incidence in China by 24 January was estimated at 6924 cases (95% confidence interval [CI]: 4885, 9211) and 19,289 cases (95% CI: 10,901, 30,158), respectively. The latest estimated values of the cCFR were 5.3% (95% CI: 3.5%, 7.5%) for Scenario 1 and 8.4% (95% CI: 5.3%, 12.3%) for Scenario 2. The basic reproduction number was estimated to be 2.1 (95% CI: 2.0, 2.2) and 3.2 (95% CI: 2.7, 3.7) for Scenarios 1 and 2, respectively. Based on these results, we argued that the current COVID-19 epidemic has a substantial potential for causing a pandemic. The proposed approach provides insights in early risk assessment using publicly available data.",TRUE,number
R57,Virology,R36128,Risk estimation and prediction by modeling the transmission of the novel coronavirus (COVID-19) in mainland China excluding Hubei province,S123752,R36129,R0 estimates (average),L74509,3.36,"Background: In December 2019, an outbreak of coronavirus disease (COVID-19) was identified in Wuhan, China and, later on, detected in other parts of China. Our aim is to evaluate the effectiveness of the evolution of interventions and self-protection measures, estimate the risk of partial lifting control measures and predict the epidemic trend of the virus in mainland China excluding Hubei province based on the published data and a novel mathematical model. Methods: A novel COVID-19 transmission dynamic model incorporating the intervention measures implemented in China is proposed. We parameterize the model by using the Markov Chain Monte Carlo (MCMC) method and estimate the control reproduction number Rc, as well as the effective daily reproduction ratio Re(t), of the disease transmission in mainland China excluding Hubei province. Results: The estimation outcomes indicate that the control reproduction number is 3.36 (95% CI 3.20-3.64) and Re(t) has dropped below 1 since January 31st, 2020, which implies that the containment strategies implemented by the Chinese government in mainland China excluding Hubei province are indeed effective and magnificently suppressed COVID-19 transmission. Moreover, our results show that relieving personal protection too early may lead to the spread of disease for a longer time and more people would be infected, and may even cause epidemic or outbreak again. By calculating the effective reproduction ratio, we proved that the contact rate should be kept at least less than 30% of the normal level by April, 2020. Conclusions: To ensure the epidemic ends rapidly, it is necessary to maintain the current integrated restrictive interventions and self-protection measures, including travel restriction, quarantine of entry, contact tracing followed by quarantine and isolation, and reduction of contact, like wearing masks, etc. People should be fully aware of the real-time epidemic situation and keep sufficient personal protection until April. If all the above conditions are met, the outbreak is expected to be ended by April in mainland China apart from Hubei province.",TRUE,number
R57,Virology,R36114,Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study,S123669,R36117,R0 estimates (average),L74451,3.39,"Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.",TRUE,number
R57,Virology,R36114,Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study,S123661,R36116,R0 estimates (average),L74446,3.41,"Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.",TRUE,number
R57,Virology,R12237,"Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak",S18672,R12240,R0 estimates (average),L12290,3.58,"Abstract Backgrounds An ongoing outbreak of a novel coronavirus (2019-nCoV) pneumonia hit a major city of China, Wuhan, December 2019 and subsequently reached other provinces/regions of China and countries. We present estimates of the basic reproduction number, R 0 , of 2019-nCoV in the early phase of the outbreak. Methods Accounting for the impact of the variations in disease reporting rate, we modelled the epidemic curve of 2019-nCoV cases time series, in mainland China from January 10 to January 24, 2020, through the exponential growth. With the estimated intrinsic growth rate ( γ ), we estimated R 0 by using the serial intervals (SI) of two other well-known coronavirus diseases, MERS and SARS, as approximations for the true unknown SI. Findings The early outbreak data largely follows the exponential growth. We estimated that the mean R 0 ranges from 2.24 (95%CI: 1.96-2.55) to 3.58 (95%CI: 2.89-4.39) associated with 8-fold to 2-fold increase in the reporting rate. We demonstrated that changes in reporting rate substantially affect estimates of R 0 . Conclusion The mean estimate of R 0 for the 2019-nCoV ranges from 2.24 to 3.58, and significantly larger than 1. Our findings indicate the potential of 2019-nCoV to cause outbreaks.",TRUE,number
R57,Virology,R36132,Lessons drawn from China and South Korea for managing COVID-19 epidemic: insights from a comparative modeling study,S123796,R36137,R0 estimates (average),L74540,3.8,"We conducted a comparative study of COVID-19 epidemic in three different settings: mainland China, the Guangdong province of China and South Korea, by formulating two disease transmission dynamics models incorporating epidemic characteristics and setting-specific interventions, and fitting the models to multi-source data to identify initial and effective reproduction numbers and evaluate effectiveness of interventions. We estimated the initial basic reproduction number for South Korea, the Guangdong province and mainland China as 2.6 (95% confidence interval (CI): (2.5, 2.7)), 3.0 (95%CI: (2.6, 3.3)) and 3.8 (95%CI: (3.5,4.2)), respectively, given a serial interval with mean of 5 days with standard deviation of 3 days. We found that the effective reproduction number for the Guangdong province and mainland China has fallen below the threshold 1 since February 8th and 18th respectively, while the effective reproduction number for South Korea remains high, suggesting that the interventions implemented need to be enhanced in order to halt further infections. We also project the epidemic trend in South Korea under different scenarios where a portion or the entirety of the integrated package of interventions in China is used. We show that a coherent and integrated approach with stringent public health interventions is the key to the success of containing the epidemic in China and specially its provinces outside its epicenter, and we show that this approach can also be effective to mitigate the burden of the COVID-19 epidemic in South Korea. The experience of outbreak control in mainland China should be a guiding reference for the rest of the world including South Korea.",TRUE,number
R57,Virology,R36114,Estimation of the epidemic properties of the 2019 novel coronavirus: A mathematical modeling study,S123656,R36115,R0 estimates (average),L74443,4.38,"Background The 2019 novel Coronavirus (COVID-19) emerged in Wuhan, China in December 2019 and has been spreading rapidly in China. Decisions about its pandemic threat and the appropriate level of public health response depend heavily on estimates of its basic reproduction number and assessments of interventions conducted in the early stages of the epidemic. Methods We conducted a mathematical modeling study using five independent methods to assess the basic reproduction number (R0) of COVID-19, using data on confirmed cases obtained from the China National Health Commission for the period 10th January to 8th February. We analyzed the data for the period before the closure of Wuhan city (10th January to 23rd January) and the post-closure period (23rd January to 8th February) and for the whole period, to assess both the epidemic risk of the virus and the effectiveness of the closure of Wuhan city on spread of COVID-19. Findings Before the closure of Wuhan city the basic reproduction number of COVID-19 was 4.38 (95% CI: 3.63-5.13), dropping to 3.41 (95% CI: 3.16-3.65) after the closure of Wuhan city. Over the entire epidemic period COVID-19 had a basic reproduction number of 3.39 (95% CI: 3.09-3.70), indicating it has a very high transmissibility. Interpretation COVID-19 is a highly transmissible virus with a very high risk of epidemic outbreak once it emerges in metropolitan areas. The closure of Wuhan city was effective in reducing the severity of the epidemic, but even after closure of the city and the subsequent expansion of that closure to other parts of Hubei the virus remained extremely infectious. Emergency planners in other cities should consider this high infectiousness when considering responses to this virus.",TRUE,number
R57,Virology,R36118,"The Novel Coronavirus, 2019-nCoV, is Highly Contagious and More Infectious Than Initially Estimated",S123694,R36120,R0 estimates (average),L74469,4.7,"The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that has spread widely since January 2020. Initially, the basic reproductive number, R0, was estimated to be 2.2 to 2.7. Here we provide a new estimate of this quantity. We collected extensive individual case reports and estimated key epidemiology parameters, including the incubation period. Integrating these estimates and high-resolution real-time human travel and infection data with mathematical models, we estimated that the number of infected individuals during early epidemic double every 2.4 days, and the R0 value is likely to be between 4.7 and 6.6. We further show that quarantine and contact tracing of symptomatic individuals alone may not be effective and early, strong control measures are needed to stop transmission of the virus.",TRUE,number
R57,Virology,R36149,Analysis of the epidemic growth of the early 2019-nCoV outbreak using internationally confirmed cases,S123895,R36150,R0 estimates (average),L74613,5.7,"Background: On January 23, 2020, a quarantine was imposed on travel in and out of Wuhan, where the 2019 novel coronavirus (2019-nCoV) outbreak originated from. Previous analyses estimated the basic epidemiological parameters using symptom onset dates of the confirmed cases in Wuhan and outside China. Methods: We obtained information on the 46 coronavirus cases who traveled from Wuhan before January 23 and have been subsequently confirmed in Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan as of February 5, 2020. Most cases have detailed travel history and disease progress. Compared to previous analyses, an important distinction is that we used this data to informatively simulate the infection time of each case using the symptom onset time, previously reported incubation interval, and travel history. We then fitted a simple exponential growth model with adjustment for the January 23 travel ban to the distribution of the simulated infection time. We used a Bayesian analysis with diffuse priors to quantify the uncertainty of the estimated epidemiological parameters. We performed sensitivity analysis to different choices of incubation interval and the hyperparameters in the prior specification. Results: We found that our model provides good fit to the distribution of the infection time. Assuming the travel rate to the selected countries and regions is constant over the study period, we found that the epidemic was doubling in size every 2.9 days (95% credible interval [CrI], 2 days--4.1 days). Using previously reported serial interval for 2019-nCoV, the estimated basic reproduction number is 5.7 (95% CrI, 3.4--9.2). The estimates did not change substantially if we assumed the travel rate doubled in the last 3 days before January 23, when we used previously reported incubation interval for severe acute respiratory syndrome (SARS), or when we changed the hyperparameters in our prior specification. Conclusions: Our estimated epidemiological parameters are higher than an earlier report using confirmed cases in Wuhan. This indicates the 2019-nCoV could have been spreading faster than previous estimates.",TRUE,number
R57,Virology,R12245,Estimation of the Transmission Risk of 2019-nCov and Its Implication for Public Health Interventions,S18714,R12246,R0 estimates (average),L12320,6.47,"English Abstract: Background: Since the emergence of the first pneumonia cases in Wuhan, China, the novel coronavirus (2019-nCov) infection has been quickly spreading out to other provinces and neighbouring countries. Estimation of the basic reproduction number by means of mathematical modelling can be helpful for determining the potential and severity of an outbreak, and providing critical information for identifying the type of disease interventions and intensity. Methods: A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and the intervention measures. Findings: The estimation results based on likelihood and model analysis reveal that the control reproduction number may be as high as 6.47 (95% CI 5.71-7.23). Sensitivity analyses reveal that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction of Wuhan on 2019-nCov infection in Beijing being almost equivalent to increasing quarantine by 100-thousand baseline value. Interpretation: It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCov infection, and how long should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since January 23rd 2020) with significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in 7 days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction. Mandarin Abstract: 背景:自从中国武汉出现第一例肺炎病例以来,新型冠状病毒(2019-nCov)感染已迅速传播到其他省份和周边国家。通过数学模型估计基本再生数,有助于确定疫情爆发的可能性和严重性,并为确定疾病干预类型和强度提供关键信息。 方法:根据疾病的临床进展,个体的流行病学状况和干预措施,设计确定性的仓室模型。 结果:基于似然函数和模型分析的估计结果表明,控制再生数可能高达6.47(95%CI 5.71-7.23)。敏感性分析显示,密集接触追踪和隔离等干预措施可以有效减少控制再生数和传播风险,武汉封城措施对北京2019-nCov感染的影响几乎等同于增加隔离措施10万的基线值。 解释:必须评估中国当局实施的昂贵,资源密集型措施如何有助于预防和控制2019-nCov感染,以及应维持多长时间。在最严格的措施下,预计疫情将在两周内(自2020年1月23日起)达到峰值,峰值较低。与没有出行限制的情况相比,有了出行限制(即没有输入的潜伏类个体进入北京),北京的7天感染者数量将减少91.14%。",TRUE,number
R57,Virology,R37008,Estimation of the Transmission Risk of the 2019-nCoV and Its Implication for Public Health Interventions,S124074,R37009,Rc estimates (average),L75056,6.47,"Since the emergence of the first cases in Wuhan, China, the novel coronavirus (2019-nCoV) infection has been quickly spreading out to other provinces and neighboring countries. Estimation of the basic reproduction number by means of mathematical modeling can be helpful for determining the potential and severity of an outbreak and providing critical information for identifying the type of disease interventions and intensity. A deterministic compartmental model was devised based on the clinical progression of the disease, epidemiological status of the individuals, and intervention measures. The estimations based on likelihood and model analysis show that the control reproduction number may be as high as 6.47 (95% CI 5.71–7.23). Sensitivity analyses show that interventions, such as intensive contact tracing followed by quarantine and isolation, can effectively reduce the control reproduction number and transmission risk, with the effect of travel restriction adopted by Wuhan on 2019-nCoV infection in Beijing being almost equivalent to increasing quarantine by a 100 thousand baseline value. It is essential to assess how the expensive, resource-intensive measures implemented by the Chinese authorities can contribute to the prevention and control of the 2019-nCoV infection, and how long they should be maintained. Under the most restrictive measures, the outbreak is expected to peak within two weeks (since 23 January 2020) with a significant low peak value. With travel restriction (no imported exposed individuals to Beijing), the number of infected individuals in seven days will decrease by 91.14% in Beijing, compared with the scenario of no travel restriction.",TRUE,number
R57,Virology,R36118,"The Novel Coronavirus, 2019-nCoV, is Highly Contagious and More Infectious Than Initially Estimated",S123702,R36121,R0 estimates (average),L74474,6.6,"The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that has spread widely since January 2020. Initially, the basic reproductive number, R0, was estimated to be 2.2 to 2.7. Here we provide a new estimate of this quantity. We collected extensive individual case reports and estimated key epidemiology parameters, including the incubation period. Integrating these estimates and high-resolution real-time human travel and infection data with mathematical models, we estimated that the number of infected individuals during early epidemic double every 2.4 days, and the R0 value is likely to be between 4.7 and 6.6. We further show that quarantine and contact tracing of symptomatic individuals alone may not be effective and early, strong control measures are needed to stop transmission of the virus.",TRUE,number
,Photonics,R144864,Graphene Interdigital Electrodes for Improving Sensitivity in a Ga2O3:Zn Deep-Ultraviolet Photoconductive Detector,S580097,R144868,Photoresponsivity (A/W ) ,L405558,1.05,"Graphene (Gr) has been widely used as a transparent electrode material for photodetectors because of its high conductivity and high transmittance in recent years. However, the current low-efficiency manipulation of Gr has hindered the arraying and practical use of such detectors. We invented a multistep method of accurately tailoring graphene into interdigital electrodes for fabricating a sensitive, stable deep-ultraviolet photodetector based on Zn-doped Ga2O3 films. The fabricated photodetector exhibits a series of excellent performance, including extremely low dark current (∼10-11 A), an ultrahigh photo-to-dark ratio (>105), satisfactory responsivity (1.05 A/W), and excellent selectivity for the deep-ultraviolet band, compared to those with ordinary metal electrodes. The raise of photocurrent and responsivity is attributed to the increase of incident photons through Gr and separated carriers caused by the built-in electric field formed at the interface of Gr and Ga2O3:Zn films. The proposed ideas and methods of tailoring Gr can not only improve the performance of devices but more importantly contribute to the practical development of graphene.",TRUE,number
,Photonics,R145530,High Performance of Solution-Processed Amorphous p-Channel Copper-Tin-Sulfur-Gallium Oxide Thin-Film Transistors by UV/O3 Photocuring,S582920,R145532,Mobility (cm2 /V.s),L407128,1.75,"The development of p-type metal-oxide semiconductors (MOSs) is of increasing interest for applications in next-generation optoelectronic devices, display backplane, and low-power-consumption complementary MOS circuits. Here, we report the high performance of solution-processed, p-channel copper-tin-sulfide-gallium oxide (CTSGO) thin-film transistors (TFTs) using UV/O3 exposure. Hall effect measurement confirmed the p-type conduction of CTSGO with Hall mobility of 6.02 ± 0.50 cm2 V-1 s-1. The p-channel CTSGO TFT using UV/O3 treatment exhibited the field-effect mobility (μFE) of 1.75 ± 0.15 cm2 V-1 s-1 and an on/off current ratio (ION/IOFF) of ∼104 at a low operating voltage of -5 V. The significant enhancement in the device performance is due to the good p-type CTSGO material, smooth surface morphology, and fewer interfacial traps between the semiconductor and the Al2O3 gate insulator. Therefore, the p-channel CTSGO TFT can be applied for CMOS MOS TFT circuits for next-generation display.",TRUE,number
,electrical engineering,R145520,"Extremely Stable, High Performance Gd and Li Alloyed ZnO Thin Film Transistor by Spray Pyrolysis",S582909,R145521,Mobility (cm2 /V.s),L407117,25.87,"The simultaneous doping effect of Gadolinium (Gd) and Lithium (Li) on zinc oxide (ZnO) thin‐film transistor (TFT) by spray pyrolysis using a ZrOx gate insulator is reported. Li doping in ZnO increases mobility significantly, whereas the presence of Gd improves the stability of the device. The Gd ratio in ZnO is varied from 0% to 20% and the Li ratio from 0% to 10%. The optimized ZnO TFT with codoping of 5% Li and 10% Gd exhibits the linear mobility of 25.87 cm2 V−1 s−1, the subthreshold swing of 204 mV dec−1, on/off current ratio of ≈108, and zero hysteresis voltage. The enhancement of both mobility and stability is due to an increase in grain size by Li incorporation and decrease of defect states by Gd doping. The negligible threshold voltage shift (∆VTH) under gate bias and zero hysteresis are due to the reduced defects in an oxide semiconductor and decreased traps at the LiGdZnO/ZrOx interface. Li doping can balance the reduction of the carrier concentration by Gd doping, which improves the mobility and stability of the ZnO TFT. Therefore, LiGdZnO TFT shows excellent electrical performance with high stability.",TRUE,number
R123,Analytical Chemistry,R140514,High-Performance Chemical Sensing Using Schottky-Contacted Chemical Vapor Deposition Grown Monolayer MoS2 Transistors,S560979,R140516,has research problem,R139327,Chemical sensors,"Trace chemical detection is important for a wide range of practical applications. Recently emerged two-dimensional (2D) crystals offer unique advantages as potential sensing materials with high sensitivity, owing to their very high surface-to-bulk atom ratios and semiconducting properties. Here, we report the first use of Schottky-contacted chemical vapor deposition grown monolayer MoS2 as high-performance room temperature chemical sensors. The Schottky-contacted MoS2 transistors show current changes by 2-3 orders of magnitude upon exposure to very low concentrations of NO2 and NH3. Specifically, the MoS2 sensors show clear detection of NO2 and NH3 down to 20 ppb and 1 ppm, respectively. We attribute the observed high sensitivity to both well-known charge transfer mechanism and, more importantly, the Schottky barrier modulation upon analyte molecule adsorption, the latter of which is made possible by the Schottky contacts in the transistors and is not reported previously for MoS2 sensors. This study shows the potential of 2D semiconductors as high-performance sensors and also benefits the fundamental studies of interfacial phenomena and interactions between chemical species and monolayer 2D semiconductors.",TRUE,research problem
R123,Analytical Chemistry,R140522,"Highly sensitive MoTe2 chemical sensor with fast recovery rate through gate biasing",S561036,R140524,has research problem,R139327,Chemical sensors,"The unique properties of two dimensional (2D) materials make them promising candidates for chemical and biological sensing applications. However, most 2D nanomaterial sensors suffer very long recovery time due to slow molecular desorption at room temperature. Here, we report a highly sensitive molybdenum ditelluride (MoTe2) gas sensor for NO2 and NH3 detection with greatly enhanced recovery rate. The effects of gate bias on sensing performance have been systematically studied. It is found that the recovery kinetics can be effectively adjusted by biasing the sensor to different gate voltages. Under the optimum biasing potential, the MoTe2 sensor can achieve more than 90% recovery after each sensing cycle well within 10 min at room temperature. The results demonstrate the potential of MoTe2 as a promising candidate for high-performance chemical sensors. The idea of exploiting gate bias to adjust molecular desorption kinetics can be readily applied to much wider sensing platforms based on 2D nanomaterials.",TRUE,research problem
R123,Analytical Chemistry,R140743,Flower-like Palladium Nanoclusters Decorated Graphene Electrodes for Ultrasensitive and Flexible Hydrogen Gas Sensing,S562328,R140745,has research problem,R139327,Chemical sensors,"Abstract Flower-like palladium nanoclusters (FPNCs) are electrodeposited onto graphene electrode that are prepared by chemical vapor deposition (CVD). The CVD graphene layer is transferred onto a poly(ethylene naphthalate) (PEN) film to provide a mechanical stability and flexibility. The surface of the CVD graphene is functionalized with diaminonaphthalene (DAN) to form flower shapes. Palladium nanoparticles act as templates to mediate the formation of FPNCs, which increase in size with reaction time. The population of FPNCs can be controlled by adjusting the DAN concentration as functionalization solution. These FPNCs_CG electrodes are sensitive to hydrogen gas at room temperature. The sensitivity and response time as a function of the FPNCs population are investigated, resulted in improved performance with increasing population. Furthermore, the minimum detectable level (MDL) of hydrogen is 0.1 ppm, which is at least 2 orders of magnitude lower than that of chemical sensors based on other Pd-based hybrid materials.",TRUE,research problem
R133,Artificial Intelligence,R140948,Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas,S563146,R140950,has research problem,R140954, Open Machine Translation,"This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to considerably improve over the baseline. The best performing systems achieved 12.97 ChrF higher than baseline, when averaged across languages.",TRUE,research problem
R133,Artificial Intelligence,R6578,Discourse Trees Are Good Indicators of Importance in Text,S8268,R6579,has research problem,R6544,Automatic text summarization,"Researchers in computational linguistics have long speculated that the nuclei of the rhetorical structure tree of a text form an adequate ""summary"" of the text for which that tree was built. However, to my knowledge, there has been no experiment to confirm how valid this speculation really is. In this paper, I describe a psycholinguistic experiment that shows that the concepts of discourse structure and nuclearity can be used effectively in text summarization. More precisely, I show that there is a strong correlation between the nuclei of the discourse structure of a text and what readers perceive to be the most important units in that text. In addition, I propose and evaluate the quality of an automatic, discourse-based summarization system that implements the methods that were validated by the psycholinguistic experiment. The evaluation indicates that although the system does not match yet the results that would be obtained if discourse trees had been built manually, it still significantly outperforms both a baseline algorithm and Microsoft's Office97 summarizer. 1 Motivation Traditionally, previous approaches to automatic text summarization have assumed that the salient parts of a text can be determined by applying one or more of the following assumptions: important sentences in a text contain words that are used frequently (Luhn 1958; Edmundson 1968); important sentences contain words that are used in the title and section headings (Edmundson 1968); important sentences are located at the beginning or end of paragraphs (Baxendale 1958); important sentences are located at positions in a text that are genre dependent, and these positions can be determined automatically, through training; important sentences use bonus words such as ""greatest"" and ""significant"" or indicator phrases such as ""the main aim of this paper"" and ""the purpose of this article"", while unimportant sentences use stigma words such as ""hardly"" and ""impossible""; important sentences and concepts are the highest connected entities in elaborate semantic structures; important and unimportant sentences are derivable from a discourse representation of the text (Sparck Jones 1993b; Ono, Sumita, & Miike 1994). In determining the words that occur most frequently in a text or the sentences that use words that occur in the headings of sections, computers are accurate tools. Therefore, in testing the validity of using these indicators for determining the most important units in a text, it is adequate to compare the direct output of a summarization program that implements the assumption(s) under scrutiny with a human-made …",TRUE,research problem
R133,Artificial Intelligence,R6733,A Statistical Approach for Automatic Text Summarization by Extraction,S8938,R6734,has research problem,R6544,Automatic text summarization,"Automatic Document Summarization is a highly interdisciplinary research area related with computer science as well as cognitive psychology. This Summarization is to compress an original document into a summarized version by extracting almost all of the essential concepts with text mining techniques. This research focuses on developing a statistical automatic text summarization approach, Kmixture probabilistic model, to enhancing the quality of summaries. KSRS employs the K-mixture probabilistic model to establish term weights in a statistical sense, and further identifies the term relationships to derive the semantic relationship significance (SRS) of nouns. Sentences are ranked and extracted based on their semantic relationship significance values. The objective of this research is thus to propose a statistical approach to text summarization. We propose a K-mixture semantic relationship significance (KSRS) approach to enhancing the quality of document summary results. The K-mixture probabilistic model is used to determine the term weights. Term relationships are then investigated to develop the semantic relationship of nouns that manifests sentence semantics. Sentences with significant semantic relationship, nouns are extracted to form the summary accordingly.",TRUE,research problem
R133,Artificial Intelligence,R140875,Overview of NLPTEA-2020 Shared Task for Chinese Grammatical Error Diagnosis,S562999,R140877,has research problem,R140878,Chinese Grammatical Error Diagnosis,"This paper presents the NLPTEA 2020 shared task for Chinese Grammatical Error Diagnosis (CGED) which seeks to identify grammatical error types, their range of occurrence and recommended corrections within sentences written by learners of Chinese as a foreign language. We describe the task definition, data preparation, performance metrics, and evaluation results. Of the 30 teams registered for this shared task, 17 teams developed the system and submitted a total of 43 runs. System performances achieved a significant progress, reaching F1 of 91% in detection level, 40% in position level and 28% in correction level. All data sets with gold standards and scoring scripts are made publicly available to researchers.",TRUE,research problem
R133,Artificial Intelligence,R140624,SemEval-2012 Task 5: Chinese Semantic Dependency Parsing,S561595,R140626,has research problem,R140627,Chinese Semantic Dependency Parsing,"The paper presents the SemEval-2012 Shared Task 5: Chinese Semantic Dependency Parsing. The goal of this task is to identify the dependency structure of Chinese sentences from the semantic view. We firstly introduce the motivation of providing Chinese semantic dependency parsing task, and then describe the task in detail including data preparation, data format, task evaluation, and so on. Over ten thousand sentences were labeled for participants to train and evaluate their systems. At last, we briefly describe the submitted systems and analyze these results.",TRUE,research problem
R133,Artificial Intelligence,R151347,Conversational Neuro-Symbolic Commonsense Reasoning,S631585,R157543,has research problem,R157538,commonsense reasoning,"One aspect of human commonsense reasoning is the ability to make presumptions about daily experiences, activities and social interactions with others. We propose a new commonsense reasoning benchmark where the task is to uncover commonsense presumptions implied by imprecisely stated natural language commands in the form of if-then-because statements. For example, in the command ""If it snows at night then wake me up early because I don't want to be late for work"" the speaker relies on commonsense reasoning of the listener to infer the implicit presumption that it must snow enough to cause traffic slowdowns. Such if-then-because commands are particularly important when users instruct conversational agents. We release a benchmark data set for this task, collected from humans and annotated with commonsense presumptions. We develop a neuro-symbolic theorem prover that extracts multi-hop reasoning chains and apply it to this problem. We further develop an interactive conversational framework that evokes commonsense knowledge from humans for completing reasoning chains.",TRUE,research problem
R133,Artificial Intelligence,R76056,SemEval-2020 Task 4: Commonsense Validation and Explanation,S348207,R76058,has research problem,R76075,ComVE,"In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose from two natural language statements of similar wording the one that makes sense and the one that does not. The second subtask additionally asks a system to select the key reason from three options why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason automatically. 39 teams submitted their valid systems to at least one subtask. For Subtask A and Subtask B, top-performing teams have achieved results close to human performance. However, for Subtask C, there is still a considerable gap between system and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation.",TRUE,research problem
R133,Artificial Intelligence,R140609,SemEval-2010 Task 1: Coreference Resolution in Multiple Languages,S561551,R140611,has research problem,R124236,Coreference Resolution,"This paper presents the task ""Coreference Resolution in Multiple Languages"" to be run in SemEval-2010 (5th International Workshop on Semantic Evaluations). This task aims to evaluate and compare automatic coreference resolution systems for three different languages (Catalan, English, and Spanish) by means of two alternative evaluation metrics, thus providing an insight into (i) the portability of coreference resolution systems across languages, and (ii) the effect of different scoring metrics on ranking the output of the participant systems.",TRUE,research problem
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S346647,R75787,has research problem,R75795,Counterfactual recognition,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,research problem
R133,Artificial Intelligence,R140616,SemEval-2010 Task 2: Cross-Lingual Lexical Substitution,S561568,R140618,has research problem,R140619,Cross-Lingual Lexical Substitution,"In this paper we describe the SemEval-2010 Cross-Lingual Lexical Substitution task, where given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish. The task is based on the English Lexical Substitution task run at SemEval-2007. In this paper we provide background and motivation for the task, we describe the data annotation process and the scoring system, and present the results of the participating systems.",TRUE,research problem
R133,Artificial Intelligence,R140634,Semeval-2013 Task 8: Cross-lingual Textual Entailment for Content Synchronization,S561641,R140636,has research problem,R140638,Cross-lingual Textual Entailment,"This paper presents the second round of the task on Cross-lingual Textual Entailment for Content Synchronization, organized within SemEval-2013. The task was designed to promote research on semantic inference over texts written in different languages, targeting at the same time a real application scenario. Participants were presented with datasets for different language pairs, where multi-directional entailment relations (“forward”, “backward”, “bidirectional”, “no entailment”) had to be identified. We report on the training and test data used for evaluation, the process of their creation, the participating systems (six teams, 61 runs), the approaches adopted and the results achieved.",TRUE,research problem
R133,Artificial Intelligence,R140850,SemEval-2016 Task 7: Determining Sentiment Intensity of English and Arabic Phrases,S562908,R140852,has research problem,R140853,Determining Sentiment Intensity,"We present a shared task on automatically determining sentiment intensity of a word or a phrase. The words and phrases are taken from three domains: general English, English Twitter, and Arabic Twitter. The phrases include those composed of negators, modals, and degree adverbs as well as phrases formed by words with opposing polarities. For each of the three domains, we assembled the datasets that include multi-word phrases and their constituent words, both manually annotated for real-valued sentiment intensity scores. The three datasets were presented as the test sets for three separate tasks (each focusing on a specific domain). Five teams submitted nine system outputs for the three tasks. All datasets created for this shared task are freely available to the research community.",TRUE,research problem
R133,Artificial Intelligence,R140871,SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media,S562986,R140873,has research problem,R140874,Emphasis Selection for Written Text,"In this paper, we present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media. The goal of this shared task is to design automatic methods for emphasis selection, i.e. choosing candidates for emphasis in textual content to enable automated design assistance in authoring. The main focus is on short text instances for social media, with a variety of examples, from social media posts to inspirational quotes. Participants were asked to model emphasis using plain text with no additional context from the user or other design considerations. SemEval-2020 Emphasis Selection shared task attracted 197 participants in the early phase and a total of 31 teams made submissions to this task. The highest-ranked submission achieved 0.823 Match_m score. The analysis of systems submitted to the task indicates that BERT and RoBERTa were the most common choice of pre-trained models used, and part of speech tag (POS) was the most useful feature. Full results can be found on the task’s website.",TRUE,research problem
R133,Artificial Intelligence,R38180,End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures,S125587,R38182,has research problem,R38192,End-to-end Relation Extraction,"We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.",TRUE,research problem
R133,Artificial Intelligence,R111689,SemEval-2019 Task 8: Fact Checking in Community Question Answering Forums,S508181,R111691,has research problem,R111698,Fact Checking in Community Question Answering Forums,"We present SemEval-2019 Task 8 on Fact Checking in Community Question Answering Forums, which features two subtasks. Subtask A is about deciding whether a question asks for factual information vs. an opinion/advice vs. just socializing. Subtask B asks to predict whether an answer to a factual question is true, false or not a proper answer. We received 17 official submissions for subtask A and 11 official submissions for Subtask B. For subtask A, all systems improved over the majority class baseline. For Subtask B, all systems were below a majority class baseline, but several systems were very close to it. The leaderboard and the data from the competition can be found at http://competitions.codalab.org/competitions/20022.",TRUE,research problem
R133,Artificial Intelligence,R140996,"Fine-grained Event Classification in News-like Text Snippets - Shared Task 2, CASE 2021",S563235,R140998,has research problem,R140999,Fine-grained Event Classification,"This paper describes the Shared Task on Fine-grained Event Classification in News-like Text Snippets. The Shared Task is divided into three sub-tasks: (a) classification of text snippets reporting socio-political events (25 classes) for which vast amount of training data exists, although exhibiting different structure and style vis-a-vis test data, (b) enhancement to a generalized zero-shot learning problem, where 3 additional event types were introduced in advance, but without any training data (‘unseen’ classes), and (c) further extension, which introduced 2 additional event types, announced shortly prior to the evaluation phase. The reported Shared Task focuses on classification of events in English texts and is organized as part of the Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), co-located with the ACL-IJCNLP 2021 Conference. Four teams participated in the task. Best performing systems for the three aforementioned sub-tasks achieved 83.9%, 79.7% and 77.1% weighted F1 scores respectively.",TRUE,research problem
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S705616,R182367,has research problem,R181009,Food calorie estimation,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,research problem
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S704355,R182079,has research problem,R182081,Food category estimation,"Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,research problem
R133,Artificial Intelligence,R75906,Hate Speech in Pixels: Detection of Offensive Memes towards Automatic Moderation,S347032,R75908,has research problem,R75890,hate speech detection,"This work addresses the challenge of hate speech detection in Internet memes, and attempts using visual information to automatically detect hate speech, unlike any previous work of our knowledge. Memes are pixel-based multimedia documents that contain photos or illustrations together with phrases which, when combined, usually adopt a funny meaning. However, hate memes are also used to spread hate through social networks, so their automatic detection would help reduce their harmful societal impact. Our results indicate that the model can learn to detect some of the memes, but that the task is far from being solved with this simple architecture. While previous work focuses on linguistic hate speech, our experiments indicate how the visual modality can be much more informative for hate speech detection than the linguistic one in memes. In our experiments, we built a dataset of 5,020 memes to train and evaluate a multi-layer perceptron over the visual and language representations, whether independently or fused. The source code and models are available at this https URL.",TRUE,research problem
R133,Artificial Intelligence,R75933,Detecting Hate Speech in Multi-modal Memes,S347142,R75935,has research problem,R75890,hate speech detection,"In the past few years, there has been a surge of interest in multi-modal problems, from image captioning to visual question answering and beyond. In this paper, we focus on hate speech detection in multi-modal memes wherein memes pose an interesting multi-modal fusion problem. We try to solve the Facebook Meme Challenge (Kiela et al., 2020) which aims to solve a binary classification problem of predicting whether a meme is hateful or not. A crucial characteristic of the challenge is that it includes “benign confounders” to counter the possibility of models exploiting unimodal priors. The challenge states that the state-of-the-art models perform poorly compared to humans. During the analysis of the dataset, we realized that the majority of the data points which are originally hateful are turned into benign just by describing the image of the meme. Also, the majority of the multi-modal baselines give more preference to the hate speech (language modality). To tackle these problems, we explore the visual modality using object detection and image captioning models to fetch the “actual caption” and then combine it with the multi-modal representation to perform binary classification. This approach tackles the benign text confounders present in the dataset to improve the performance. Another approach we experiment with is to improve the prediction with sentiment analysis. Instead of only using multi-modal representations obtained from pre-trained neural networks, we also include the unimodal sentiment to enrich the features. We perform a detailed analysis of the above two approaches, providing compelling reasons in favor of the methodologies used.",TRUE,research problem
R133,Artificial Intelligence,R69864,Knowledge Base Completion with Out-of-Knowledge-Base Entities: A Graph Neural Network Approach,S332104,R69866,has research problem,R69884,Knowledge Base Completion,"Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time. Existing embedding-based KBC models assume that all test entities are available at training time, making it unclear how to obtain embeddings for new entities without costly retraining. To solve the OOKB entity problem without retraining, we use graph neural networks (Graph-NNs) to compute the embeddings of OOKB entities, exploiting the limited auxiliary knowledge provided at test time. The experimental results show the effectiveness of our proposed model in the OOKB setting. Additionally, in the standard KBC setting in which OOKB entities are not involved, our model achieves state-of-the-art performance on the WordNet dataset. The code and dataset are available at this https URL",TRUE,research problem
R133,Artificial Intelligence,R76441,SemEval-2020 Task 2: Predicting Multilingual and Cross-Lingual (Graded) Lexical Entailment,S349471,R76443,has research problem,R76444,Lexical entailment,"Lexical entailment (LE) is a fundamental asymmetric lexico-semantic relation, supporting the hierarchies in lexical resources (e.g., WordNet, ConceptNet) and applications like natural language inference and taxonomy induction. Multilingual and cross-lingual NLP applications warrant models for LE detection that go beyond language boundaries. As part of SemEval 2020, we carried out a shared task (Task 2) on multilingual and cross-lingual LE. The shared task spans three dimensions: (1) monolingual vs. cross-lingual LE, (2) binary vs. graded LE, and (3) a set of 6 diverse languages (and 15 corresponding language pairs). We offered two different evaluation tracks: (a) Dist: for unsupervised, fully distributional models that capture LE solely on the basis of unannotated corpora, and (b) Any: for externally informed models, allowed to leverage any resources, including lexico-semantic networks (e.g., WordNet or BabelNet). In the Any track, we received runs that push state-of-the-art across all languages and language pairs, for both binary LE detection and graded LE prediction.",TRUE,research problem
R133,Artificial Intelligence,R76400,SemEval-2020 Task 1: Unsupervised Lexical Semantic Change Detection,S349356,R76402,has research problem,R76403,Lexical Semantic Change detection,"Lexical Semantic Change detection, i.e., the task of identifying words that change meaning over time, is a very active research area, with applications in NLP, lexicography, and linguistics. Evaluation is currently the most pressing problem in Lexical Semantic Change detection, as no gold standards are available to the community, which hinders progress. We present the results of the first shared task that addresses this gap by providing researchers with an evaluation framework and manually annotated, high-quality datasets for English, German, Latin, and Swedish. 33 teams submitted 186 systems, which were evaluated on two subtasks.",TRUE,research problem
R133,Artificial Intelligence,R76413,UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection,S349389,R76415,has research problem,R76403,Lexical Semantic Change detection,"In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation; and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.",TRUE,research problem
R133,Artificial Intelligence,R140841,SemEval-2015 Task 5: QA TempEval - Evaluating Temporal Information Understanding with Question Answering,S562884,R140843,has research problem,R140845,QA TempEval,"QA TempEval shifts the goal of previous TempEvals away from an intrinsic evaluation methodology toward a more extrinsic goal of question answering. This evaluation requires systems to capture temporal information relevant to perform an end-user task, as opposed to corpus-based evaluation where all temporal information is equally important. Evaluation results show that the best automated TimeML annotations reach over 30% recall on questions with ‘yes’ answer and about 50% on easier questions with ‘no’ answers. Features that helped achieve better results are event coreference and a time expression reasoner.",TRUE,research problem
R133,Artificial Intelligence,R147109,Know What You Don't Know: Unanswerable Questions for SQuAD,S589237,R147111,has research problem,R9143,Question Answering ,"Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuADRUn, a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuADRUn, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuADRUn is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD achieves only 66% F1 on SQuADRUn. We release SQuADRUn to the community as the successor to SQuAD.",TRUE,research problem
R133,Artificial Intelligence,R6380,Cross-Lingual Question Answering Using Common Semantic Space,S7476,R6381,has research problem,R6227,Question answering systems,"With the advent of Big Data concept, a lot of attention has been paid to structuring and giving semantic to this data. Knowledge bases like DBPedia play an important role to achieve this goal. Question answering systems are common approach to address expressivity and usability of information extraction from knowledge bases. Recent researches focused only on monolingual QA systems while cross-lingual setting has still so many barriers. In this paper we introduce a new cross-lingual approach using a unified semantic space among languages. After keyword extraction, entity linking and answer type detection, we use cross lingual semantic similarity to extract the answer from knowledge base via relation selection and type matching. We have evaluated our approach on Persian and Spanish which are typologically different languages. Our experiments are on DBPedia. The results are promising for both languages.",TRUE,research problem
R133,Artificial Intelligence,R141000,SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning,S563250,R141002,has research problem,R119127,Reading Comprehension,"This paper introduces the SemEval-2021 shared task 4: Reading Comprehension of Abstract Meaning (ReCAM). This shared task is designed to help evaluate the ability of machines in representing and understanding abstract concepts. Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts in cloze-style machine reading comprehension tasks. Based on two typical definitions of abstractness, i.e., the imperceptibility and nonspecificity, our task provides three subtasks to evaluate models’ ability in comprehending the two types of abstract meaning and the models’ generalizability. Specifically, Subtask 1 aims to evaluate how well a participating system models concepts that cannot be directly perceived in the physical world. Subtask 2 focuses on models’ ability in comprehending nonspecific concepts located high in a hypernym hierarchy given the context of a passage. Subtask 3 aims to provide some insights into models’ generalizability over the two types of abstractness. During the SemEval-2021 official evaluation period, we received 23 submissions to Subtask 1 and 28 to Subtask 2. The participating teams additionally made 29 submissions to Subtask 3. The leaderboard and competition website can be found at https://competitions.codalab.org/competitions/26153. The data and baseline code are available at https://github.com/boyuanzheng010/SemEval2021-Reading-Comprehension-of-Abstract-Meaning.",TRUE,research problem
R133,Artificial Intelligence,R69630,Deep knowledge-aware network for news recommendation,S330779,R69631,has research problem,R69628,Recommender Systems,"Online news recommender systems aim to address the information explosion of news and make personalized recommendation for users. In general, news language is highly condensed, full of knowledge entities and common sense. However, existing methods are unaware of such external knowledge and cannot fully discover latent knowledge-level connections among news. The recommended results for a user are consequently limited to simple patterns and cannot be extended reasonably. To solve the above problem, in this paper, we propose a deep knowledge-aware network (DKN) that incorporates knowledge graph representation into news recommendation. DKN is a content-based deep recommendation framework for click-through rate prediction. The key component of DKN is a multi-channel and word-entity-aligned knowledge-aware convolutional neural network (KCNN) that fuses semantic-level and knowledge-level representations of news. KCNN treats words and entities as multiple channels, and explicitly keeps their alignment relationship during convolution. In addition, to address users’ diverse interests, we also design an attention module in DKN to dynamically aggregate a user’s history with respect to current candidate news. Through extensive experiments on a real online news platform, we demonstrate that DKN achieves substantial gains over state-of-the-art deep recommendation models. We also validate the efficacy of the usage of knowledge in DKN.",TRUE,research problem
R133,Artificial Intelligence,R69633,Learning heterogeneous knowledge base embeddings for explainable recommendation,S330797,R69634,has research problem,R69628,Recommender Systems,"Providing model-generated explanations in recommender systems is important to user experience. State-of-the-art recommendation algorithms—especially the collaborative filtering (CF)-based approaches with shallow or deep models—usually work with various unstructured information sources for recommendation, such as textual reviews, visual images, and various implicit or explicit feedbacks. Though structured knowledge bases were considered in content-based approaches, they have been largely ignored recently due to the availability of vast amounts of data and the learning power of many complex models. However, structured knowledge bases exhibit unique advantages in personalized recommendation systems. When the explicit knowledge about users and items is considered for recommendation, the system could provide highly customized recommendations based on users’ historical behaviors and the knowledge is helpful for providing informed explanations regarding the recommended items. A great challenge for using knowledge bases for recommendation is how to integrate large-scale structured and unstructured data, while taking advantage of collaborative filtering for highly accurate performance. Recent achievements in knowledge-base embedding (KBE) sheds light on this problem, which makes it possible to learn user and item representations while preserving the structure of their relationship with external knowledge for explanation. In this work, we propose to explain knowledge-base embeddings for explainable recommendation. Specifically, we propose a knowledge-base representation learning framework to embed heterogeneous entities for recommendation, and based on the embedded knowledge base, a soft matching algorithm is proposed to generate personalized explanations for the recommended items. Experimental results on real-world e-commerce datasets verified the superior recommendation performance and the explainability power of our approach compared with state-of-the-art baselines.",TRUE,research problem
R133,Artificial Intelligence,R69635,Knowledge-aware autoencoders for explainable recommender systems,S330815,R69636,has research problem,R69628,Recommender Systems,"Recommender Systems have been widely used to help users in finding what they are looking for thus tackling the information overload problem. After several years of research and industrial findings looking after better algorithms to improve accuracy and diversity metrics, explanation services for recommendation are gaining momentum as a tool to provide a human-understandable feedback to results computed, in most of the cases, by black-box machine learning techniques. As a matter of fact, explanations may guarantee users satisfaction, trust, and loyalty in a system. In this paper, we evaluate how different information encoded in a Knowledge Graph are perceived by users when they are adopted to show them an explanation. More precisely, we compare how the use of categorical information, factual one or a mixture of them both in building explanations, affect explanatory criteria for a recommender system. Experimental results are validated through an A/B testing platform which uses a recommendation engine based on a Semantics-Aware Autoencoder to build users profiles which are in turn exploited to compute recommendation lists and to provide an explanation.",TRUE,research problem
R133,Artificial Intelligence,R38180,End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures,S515228,R129561,has research problem,R116569,Relation Extraction,"We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.",TRUE,research problem
R133,Artificial Intelligence,R76338,SemEval-2020 Task 6: Definition Extraction from Free Text with the DEFT Corpus,S349262,R76366,has research problem,R44342,Relation extraction,"Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.",TRUE,research problem
R133,Artificial Intelligence,R9567,A Neural Approach for Text Extraction from Scholarly Figures,S16055,R9568,has research problem,R10012,scene text detection,"In recent years, the problem of scene text extraction from images has received extensive attention and significant progress. However, text extraction from scholarly figures such as plots and charts remains an open problem, in part due to the difficulty of locating irregularly placed text lines. To the best of our knowledge, literature has not described the implementation of a text extraction system for scholarly figures that adapts deep convolutional neural networks used for scene text detection. In this paper, we propose a text extraction approach for scholarly figures that forgoes preprocessing in favor of using a deep convolutional neural network for text line localization. Our system uses a publicly available scene text detection approach whose network architecture is well suited to text extraction from scholarly figures. Training data are derived from charts in arXiv papers which are extracted using Allen Institute's pdffigures tool. Since this tool analyzes PDF data as a container format in order to extract text location through the mechanisms which render it, we were able to gather a large set of labeled training samples. We show significant improvement from methods in the literature, and discuss the structural changes of the text extraction pipeline.",TRUE,research problem
R133,Artificial Intelligence,R141030,*SEM 2013 shared task: Semantic Textual Similarity,S581371,R145247,has research problem,R122129,Semantic Textual Similarity,"In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",TRUE,research problem
R133,Artificial Intelligence,R76338,SemEval-2020 Task 6: Definition Extraction from Free Text with the DEFT Corpus,S349247,R76352,has research problem,R76357,Sentence classification,"Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.",TRUE,research problem
R133,Artificial Intelligence,R76338,SemEval-2020 Task 6: Definition Extraction from Free Text with the DEFT Corpus,S349248,R76356,has research problem,R76358,Sequence labeling,"Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.",TRUE,research problem
R133,Artificial Intelligence,R140620,SemEval-2012 Task 3: Spatial Role Labeling,S561581,R140622,has research problem,R140623,Spatial Role Labeling,"Many NLP applications require information about locations of objects referenced in text, or relations between them in space. For example, the phrase a book on the desk contains information about the location of the object book, as trajector, with respect to another object desk, as landmark. Spatial Role Labeling (SpRL) is an evaluation task in the information extraction domain which sets a goal to automatically process text and identify objects of spatial scenes and relations between them. This paper describes the task in Semantic Evaluations 2013, annotation schema, corpora, participants, methods and results obtained by the participants.",TRUE,research problem
R133,Artificial Intelligence,R140992,Overview of the MEDIQA 2021 Shared Task on Summarization in the Medical Domain,S563221,R140994,has research problem,R140995,Summarization,"The MEDIQA 2021 shared tasks at the BioNLP 2021 workshop addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings. Thirty-five teams participated in these shared tasks with sixteen working notes submitted (fifteen accepted) describing a wide variety of models developed and tested on the shared and external datasets. In this paper, we describe the tasks, the datasets, the models and techniques developed by various teams, the results of the evaluation, and a study of correlations among various summarization evaluation measures. We hope that these shared tasks will bring new research and insights in biomedical text summarization and evaluation.",TRUE,research problem
R133,Artificial Intelligence,R9567,A Neural Approach for Text Extraction from Scholarly Figures,S16054,R9568,has research problem,R10011,text extraction from images ,"In recent years, the problem of scene text extraction from images has received extensive attention and significant progress. However, text extraction from scholarly figures such as plots and charts remains an open problem, in part due to the difficulty of locating irregularly placed text lines. To the best of our knowledge, literature has not described the implementation of a text extraction system for scholarly figures that adapts deep convolutional neural networks used for scene text detection. In this paper, we propose a text extraction approach for scholarly figures that forgoes preprocessing in favor of using a deep convolutional neural network for text line localization. Our system uses a publicly available scene text detection approach whose network architecture is well suited to text extraction from scholarly figures. Training data are derived from charts in arXiv papers which are extracted using Allen Institute's pdffigures tool. Since this tool analyzes PDF data as a container format in order to extract text location through the mechanisms which render it, we were able to gather a large set of labeled training samples. We show significant improvement from methods in the literature, and discuss the structural changes of the text extraction pipeline.",TRUE,research problem
R133,Artificial Intelligence,R41177,A Novel Hierarchical Binary Tagging Framework for Joint Extraction of Entities and Relations,S130755,R41179,has research problem,R41218,the overlapping triple problem where multiple relational triples in the same sentence share the same entities,"Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction. However, few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities. We propose a novel Hierarchical Binary Tagging (HBT) framework derived from a principled problem formulation. Instead of treating relations as discrete labels as in previous works, our new framework models relations as functions that map subjects to objects in a sentence, which naturally handles overlapping triples. Experiments show that the proposed framework already outperforms state-of-the-art methods even its encoder module uses a randomly initialized BERT encoder, showing the power of the new tagging framework. It enjoys further performance boost when employing a pretrained BERT encoder, outperforming the strongest baseline by 25.6 and 45.9 absolute gain in F1-score on two public datasets NYT and WebNLG, respectively. In-depth analysis on different types of overlapping triples shows that the method delivers consistent performance gain in all scenarios.",TRUE,research problem
R133,Artificial Intelligence,R140600,SemEval-2007 Task 12: Turkish Lexical Sample Task,S561520,R140602,has research problem,R140603,Turkish Lexical Sample Task,"This paper presents the task definition, resources, and the single participant system for Task 12: Turkish Lexical Sample Task (TLST), which was organized in the SemEval-2007 evaluation exercise. The methodology followed for developing the specific linguistic resources necessary for the task has been described in this context. A language-specific feature set was defined for Turkish. TLST consists of three pieces of data: The dictionary, the training data, and the evaluation data. Finally, a single system that utilizes a simple statistical method was submitted for the task and evaluated.",TRUE,research problem
R133,Artificial Intelligence,R76413,UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection,S349390,R76415,has research problem,R76404,Unsupervised Lexical Semantic Change Detection,"In this paper, we describe our method for detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for the SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1: binary change detection, and 4th in Sub-task 2: ranked change detection. We present our method which is completely unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between earlier and later spaces, using Canonical Correlation Analysis and orthogonal transformation;and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.",TRUE,research problem
R133,Artificial Intelligence,R140605,The SemEval-2007 WePS Evaluation: Establishing a benchmark for the Web People Search Task,S561533,R140607,has research problem,R140608,Web People Search task,"This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. This task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name.",TRUE,research problem
R375,Arts and Humanities,R51006,Are the FAIR Data Principles fair?,S538008,R135913,has research problem,R135921,effectiveness and relevance of the FAIR Data Principles,"This practice paper describes an ongoing research project to test the effectiveness and relevance of the FAIR Data Principles. Simultaneously, it will analyse how easy it is for data archives to adhere to the principles. The research took place from November 2016 to January 2017, and will be underpinned with feedback from the repositories. The FAIR Data Principles feature 15 facets corresponding to the four letters of FAIR - Findable, Accessible, Interoperable, Reusable. These principles have already gained traction within the research world. The European Commission has recently expanded its demand for research to produce open data. The relevant guidelines1are explicitly written in the context of the FAIR Data Principles. Given an increasing number of researchers will have exposure to the guidelines, understanding their viability and suggesting where there may be room for modification and adjustment is of vital importance. This practice paper is connected to a dataset(Dunning et al.,2017) containing the original overview of the sample group statistics and graphs, in an Excel spreadsheet. Over the course of two months, the web-interfaces, help-pages and metadata-records of over 40 data repositories have been examined, to score the individual data repository against the FAIR principles and facets. The traffic-light rating system enables colour-coding according to compliance and vagueness. The statistical analysis provides overall, categorised, on the principles focussing, and on the facet focussing results. The analysis includes the statistical and descriptive evaluation, followed by elaborations on Elements of the FAIR Data Principles, the subject specific or repository specific differences, and subsequently what repositories can do to improve their information architecture.",TRUE,research problem
R104,Bioinformatics,R5129,PARAMO: A Pipeline for Reconstructing Ancestral Anatomies Using Ontologies and Stochastic Mapping,S5657,R5136,has research problem,R5137,Comparative phylogenetics,"Comparative phylogenetics has been largely lacking a method for reconstructing the evolution of phenotypic entities that consist of ensembles of multiple discrete traits—entire organismal anatomies or organismal body regions. In this study, we provide a new approach named PARAMO (Phylogenetic Ancestral Reconstruction of Anatomy by Mapping Ontologies) that appropriately models anatomical dependencies and uses ontology-informed amalgamation of stochastic maps to reconstruct phenotypic evolution at different levels of anatomical hierarchy including entire phenotypes. This approach provides new opportunities for tracking phenotypic radiations and evolution of organismal anatomies.",TRUE,research problem
R104,Bioinformatics,R109029,Deep learning improves prediction of drug–drug and drug–food interactions,S496965,R109032,has research problem,R109036,DDI prediction,"Significance Drug interactions, including drug–drug interactions (DDIs) and drug–food constituent interactions, can trigger unexpected pharmacological effects such as adverse drug events (ADEs). Several existing methods predict drug interactions, but require detailed, but often unavailable drug information as inputs, such as drug targets. To this end, we present a computational framework DeepDDI that accurately predicts DDI types for given drug pairs and drug–food constituent pairs using only name and structural information as inputs. We show four applications of DeepDDI to better understand drug interactions, including prediction of DDI mechanisms causing ADEs, suggestion of alternative drug members for the intended pharmacological effects without negative health effects, prediction of the effects of food constituents on interacting drugs, and prediction of bioactivities of food constituents. Drug interactions, including drug–drug interactions (DDIs) and drug–food constituent interactions (DFIs), can trigger unexpected pharmacological effects, including adverse drug events (ADEs), with causal mechanisms often unknown. Several computational methods have been developed to better understand drug interactions, especially for DDIs. However, these methods do not provide sufficient details beyond the chance of DDI occurrence, or require detailed drug information often unavailable for DDI prediction. Here, we report development of a computational framework DeepDDI that uses names of drug–drug or drug–food constituent pairs and their structural information as inputs to accurately generate 86 important DDI types as outputs of human-readable sentences. DeepDDI uses deep neural network with its optimized prediction performance and predicts 86 DDI types with a mean accuracy of 92.4% using the DrugBank gold standard DDI dataset covering 192,284 DDIs contributed by 191,878 drug pairs. DeepDDI is used to suggest potential causal mechanisms for the reported ADEs of 9,284 drug pairs, and also predict alternative drug candidates for 62,707 drug pairs having negative health effects. Furthermore, DeepDDI is applied to 3,288,157 drug–food constituent pairs (2,159 approved drugs and 1,523 well-characterized food constituents) to predict DFIs. The effects of 256 food constituents on pharmacological effects of interacting drugs and bioactivities of 149 food constituents are predicted. These results suggest that DeepDDI can provide important information on drug prescription and even dietary suggestions while taking certain drugs and also guidelines during drug development.",TRUE,research problem
R104,Bioinformatics,R150549,Classifying semantic relations in bioscience texts,S603647,R150551,has research problem,R150562,Identification of semantic relations,"A crucial step toward the goal of automatic extraction of propositional information from natural language text is the identification of semantic relations between constituents in sentences. We examine the problem of distinguishing among seven relation types that can occur between the entities ""treatment"" and ""disease"" in bioscience text, and the problem of identifying such entities. We compare five generative graphical models and a neural network, using lexical, syntactic, and semantic features, finding that the latter help achieve high classification accuracy.",TRUE,research problem
R104,Bioinformatics,R148576,Exploiting syntax when detecting protein names in text,S595680,R148578,has research problem,R148595,Protein tagging,"This paper presents work on a method to detect names of proteins in running text. Our system - Yapex - uses a combination of lexical and syntactic knowledge, heuristic filters and a local dynamic dictionary. The syntactic information given by a general-purpose off-the-shelf parser supports the correct identification of the boundaries of protein names, and the local dynamic dictionary finds protein names in positions incompletely analysed by the parser. We present the different steps involved in our approach to protein tagging, and show how combinations of them influence recall and precision. We evaluate the system on a corpus of MEDLINE abstracts and compare it with the KeX system (Fukuda et al., 1998) along four different notions of correctness.",TRUE,research problem
R104,Bioinformatics,R150537,LINNAEUS: A species name identification system for biomedical literature,S603589,R150539,has research problem,R150540,Species name recognition and normalization,"Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.",TRUE,research problem
R205,Biomedical Engineering and Bioengineering,R110043,Tensor gradient based discriminative region analysis for cognitive state classification,S501829,R110045,has research problem,L362811,Cognitive state classification,"Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has been achieved a better average accuracy when compared to other existing feature extraction methods.",TRUE,research problem
R205,Biomedical Engineering and Bioengineering,R110061,Tensor gradient based discriminative region analysis for cognitive state classification,S501879,R110063,has research problem,L362844,Cognitive state classification,"Extraction of relevant features from high-dimensional multi-way functional MRI (fMRI) data is essential for the classification of a cognitive task. In general, fMRI records a combination of neural activation signals and several other noisy components. Alternatively, fMRI data is represented as a high dimensional array using a number of voxels, time instants, and snapshots. The organisation of fMRI data includes a number of Region Of Interests (ROI), snapshots, and thousand of voxels. The crucial step in cognitive task classification is a reduction of feature size through feature selection. Extraction of a specific pattern of interest within the noisy components is a challenging task. Tensor decomposition techniques have found several applications in the scientific fields. In this paper, a novel tensor gradient-based feature extraction technique for cognitive task classification is proposed. The technique has efficiently been applied on StarPlus fMRI data. Also, the technique has been used to discriminate the ROIs in fMRI data in terms of cognitive state classification. The method has been achieved a better average accuracy when compared to other existing feature extraction methods.",TRUE,research problem
R27,Botany,R111316,"Difference in reproductive mode rather than ploidy explains niche differentiation in sympatric sexual and apomictic populations of Potentilla puberula",S506919,R111323,has research problem,R111324,ecological parthenogenesis,"Apomicts tend to have larger geographical distributional ranges and to occur in ecologically more extreme environments than their sexual progenitors. However, the expression of apomixis is typically linked to polyploidy. Thus, it is a priori not clear whether intrinsic effects related to the change in the reproductive mode or rather in the ploidy drive ecological differentiation. We used sympatric sexual and apomictic populations of Potentilla puberula to test for ecological differentiation. To distinguish the effects of reproductive mode and ploidy on the ecology of cytotypes, we compared the niches (a) of sexuals (tetraploids) and autopolyploid apomicts (penta‐, hepta‐, and octoploids) and (b) of the three apomictic cytotypes. We based comparisons on a ploidy screen of 238 populations along a latitudinal transect through the Eastern European Alps and associated bioclimatic, and soil and topographic data. Sexual tetraploids preferred primary habitats at drier, steeper, more south‐oriented slopes, while apomicts mostly occurred in human‐made habitats with higher water availability. Contrariwise, we found no or only marginal ecological differentiation among the apomictic higher ploids. Based on the pronounced ecological differences found between sexuals and apomicts, in addition to the lack of niche differentiation among cytotypes of the same reproductive mode, we conclude that reproductive mode rather than ploidy is the main driver of the observed differences. Moreover, we compared our system with others from the literature, to stress the importance of identifying alternative confounding effects (such as hybrid origin). Finally, we underline the relevance of studying ecological parthenogenesis in sympatry, to minimize the effects of differential migration abilities.",TRUE,research problem
R111778,Communication Neuroscience,R111716,Engaged listeners: shared neural processing of powerful political speeches,S508244,R111718,has research problem,R111721,brain dynamics during listening to speeches,"Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.",TRUE,research problem
R388,Comparative Literature,R8624,"Revisiting Style, a Key Concept in Literary Studies",S13528,R8625,has research problem,R8627,Definition of style,"Language and literary studies have studied style for centuries, and even since the advent of ›stylistics‹ as a discipline at the beginning of the twentieth century, definitions of ›style‹ have varied heavily across time, space and fields. Today, with increasingly large collections of literary texts being made available in digital form, computational approaches to literary style are proliferating. New methods from disciplines such as corpus linguistics and computer science are being adopted and adapted in interrelated fields such as computational stylistics and corpus stylistics, and are facilitating new approaches to literary style. The relation between definitions of style in established linguistic or literary stylistics, and definitions of style in computational or corpus stylistics has not, however, been systematically assessed. This contribution aims to respond to the need to redefine style in the light of this new situation and to establish a clearer perception of both the overlap and the boundaries between ›mainstream‹ and ›computational‹ and/or ›empirical‹ literary stylistics. While stylistic studies of non-literary texts are currently flourishing, our contribution deliberately centers on those approaches relevant to ›literary stylistics‹. It concludes by proposing an operational definition of style that we hope can act as a common ground for diverse approaches to literary style, fostering transdisciplinary research. The focus of this contribution is on literary style in linguistics and literary studies (rather than in art history, musicology or fashion), on textual aspects of style (rather than production- or reception-oriented theories of style), and on a descriptive perspective (rather than a prescriptive or didactic one). Even within these limits, however, it appears necessary to build on a broad understanding of the various perspectives on style that have been adopted at different times and in different traditions. For this reason, the contribution first traces the development of the notion of style in three different traditions, those of German, Dutch and French language and literary studies. Despite the numerous links between each other, and between each of them to the British and American traditions, these three traditions each have their proper dynamics, especially with regard to the convergence and/or confrontation between mainstream and computational stylistics. For reasons of space and coherence, the contribution is limited to theoretical developments occurring since 1945. The contribution begins by briefly outlining the range of definitions of style that can be encountered across traditions today: style as revealing a higher-order aesthetic value, as the holistic ›gestalt‹ of single texts, as an expression of the individuality of an author, as an artifact presupposing choice among alternatives, as a deviation from a norm or reference, or as any formal property of a text. The contribution then traces the development of definitions of style in each of the three traditions mentioned, with the aim of giving a concise account of how, in each tradition, definitions of style have evolved over time, with special regard to the way such definitions relate to empirical, quantitative or otherwise computational approaches to style in literary texts. It will become apparent how, in each of the three traditions, foundational texts continue to influence current discussions on literary style, but also how stylistics has continuously reacted to broader developments in cultural and literary theory, and how empirical, quantitative or computational approaches have long existed, usually in parallel to or at the margins of mainstream stylistics. The review will also reflect the lines of discussion around style as a property of literary texts – or of any textual entity in general. The perspective on three stylistic traditions is accompanied by a more systematic perspective. The rationale is to work towards a common ground for literary scholars and linguists when talking about (literary) style, across traditions of stylistics, with respect for established definitions of style, but also in light of the digital paradigm. Here, we first show to what extent, at similar or different moments in time, the three traditions have developed comparable positions on style, and which definitions out of the range of possible definitions have been proposed or promoted by which authors in each of the three traditions. On the basis of this synthesis, we then conclude by proposing an operational definition of style that is an attempt to provide a common ground for both mainstream and computational literary stylistics. This definition is discussed in some detail in order to explain not only what is meant by each term in the definition, but also how it relates to computational analyses of style – and how this definition aims to avoid some of the pitfalls that can be perceived in earlier definitions of style. Our definition, we hope, will be put to use by a new generation of computational, quantitative, and empirical studies of style in literary texts.",TRUE,research problem
R277,Computational Engineering,R108270,@spam: the underground on 140 characters or less,S493176,R108272,has research problem,L357387,Characterization of spam on Twitter,"In this work we present a characterization of spam on Twitter. We find that 8% of 25 million URLs posted to the site point to phishing, malware, and scams listed on popular blacklists. We analyze the accounts that send spam and find evidence that it originates from previously legitimate accounts that have been compromised and are now being puppeteered by spammers. Using clickthrough data, we analyze spammers' use of features unique to Twitter and the degree that they affect the success of spam. We find that Twitter is a highly successful platform for coercing users to visit spam pages, with a clickthrough rate of 0.13%, compared to much lower rates previously reported for email spam. We group spam URLs into campaigns and identify trends that uniquely distinguish phishing, malware, and spam, to gain an insight into the underlying techniques used to attract users. Given the absence of spam filtering on Twitter, we examine whether the use of URL blacklists would help to significantly stem the spread of Twitter spam. Our results indicate that blacklists are too slow at identifying new threats, allowing more than 90% of visitors to view a page before it becomes blacklisted. We also find that even if blacklist delays were reduced, the use by spammers of URL shortening services for obfuscation negates the potential gains unless tools that use blacklists develop more sophisticated spam filtering.",TRUE,research problem
R322,Computational Linguistics,R164478,BioNLP Shared Task 2011 – Bacteria Gene Interactions and Renaming,S656765,R164484,has research problem,R164482,Bacteria gene renaming,"We present two related tasks of the BioNLP Shared Tasks 2011: Bacteria Gene Renaming (Rename) and Bacteria Gene Interactions (GI). We detail the objectives, the corpus specification, the evaluation metrics, and we summarize the participants' results. Both issued from PubMed scientific literature abstracts, the Rename task aims at extracting gene name synonyms, and the GI task aims at extracting genic interaction events, mainly about gene transcriptional regulations in bacteria.",TRUE,research problem
R134,Computer and Systems Architecture,R107933,The Ontology-based Business Architecture Engineering Framework,S503268,R107935,has research problem,R110463,Business architecture development,"Business architecture became a well-known tool for business transformations. According to a recent study by Forrester, 50 percent of the companies polled claimed to have an active business architecture initiative, whereas 20 percent were planning to engage in business architecture work in the near future. However, despite the high interest in BA, there is not yet a common understanding of the main concepts. There is a lack for the business architecture framework which provides a complete metamodel, suggests methodology for business architecture development and enables tool support for it. The ORG- Master framework is designed to solve this problem using the ontology as a core of the metamodel. This paper describes the ORG-Master framework, its implementation and dissemination.",TRUE,research problem
R231,Computer and Systems Architecture,R38438,Towards a Large Corpus of Richly Annotated Web Tables for Knowledge Base Population,S126111,R38440,has research problem,R38441,Web Table Understanding,"Web Table Understanding in the context of Knowledge Base Population and the Semantic Web is the task of i) linking the content of tables retrieved from the Web to an RDF knowledge base, ii) of building hypotheses about the tables' structures and contents, iii) of extracting novel information from these tables, and iv) of adding this new information to a knowledge base. Knowledge Base Population has gained more and more interest in the last years due to the increased demand in large knowledge graphs which became relevant for Artificial Intelligence applications such as Question Answering and Semantic Search. In this paper we describe a set of basic tasks which are relevant for Web Table Understanding in the mentioned context. These tasks incrementally enrich a table with hypotheses about the table's content. In doing so, in the case of multiple interpretations, selecting one interpretation and thus deciding against other interpretations is avoided as much as possible. By postponing these decision, we enable table understanding approaches to decide by themselves, thus increasing the usability of the annotated table data. We present statistics from analyzing and annotating 1.000.000 tables from the Web Table Corpus 2015 and make this dataset as well as our code available online.",TRUE,research problem
R230,Computer Engineering,R74479,Contribution of big data in E-learning. A methodology to process academic data from heterogeneous sources,S497659,R109110,has research problem,R109109,Big Data,"Big Data covers a wide spectrum of technologies, which tends to support the processing of big amounts of heterogeneous data. The paper identifies the powerful benefits and the application areas of Big Data in the on-line education context. Considering the boom of academic services on-line, and the free access to the educative content, a great amount of data is being generated in the formal educational field as well as in less formal contexts. In this sense, Big Data can help stakeholders, involved in education decision making, to reach the objective of improving the quality of education and the learning outcomes. In this paper, a methodology is proposed to process big amounts of data coming from the educational field. The current study ends with a specific case study where the data of a well-known Ecuadorian institution that has more than 80 branches is analyzed.",TRUE,research problem
R230,Computer Engineering,R152812,LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking,S646392,R161854,has research problem,R122863,Entity Linking,"Entity linking (EL) is the task of disambiguating mentions appearing in text by linking them to entities in a knowledge graph, a crucial task for text understanding, question answering or conversational systems. In the special case of short-text EL, which poses additional challenges due to limited context, prior approaches have reached good performance by employing heuristics-based methods or purely neural approaches. Here, we take a different, neuro-symbolic approach that combines the advantages of using interpretable rules based on first-order logic with the performance of neural learning. Even though constrained to use rules, we show that we reach competitive or better performance with SoTA black-box neural approaches. Furthermore, our framework has the benefits of extensibility and transferability. We show that we can easily blend existing rule templates given by a human expert, with multiple types of features (priors, BERT encodings, box embeddings, etc), and even with scores resulting from previous EL methods, thus improving on such methods. As an example of improvement, on the LC-QuAD-1.0 dataset, we show more than 3% increase in F1 score relative to previous SoTA. Finally, we show that the inductive bias offered by using logic results in a set of learned rules that transfers from one dataset to another, sometimes without finetuning, while still having high accuracy.",TRUE,research problem
R230,Computer Engineering,R75887,Exploring Hate Speech Detection in Multimodal Publications,S346970,R75889,has research problem,R75890,hate speech detection,"In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research.",TRUE,research problem
R239,Computer Engineering,R74012,Harvesting Information from Captions for Weakly Supervised Semantic Segmentation,S340042,R74014,has research problem,R74015,image segmentation,"Since acquiring pixel-wise annotations for training convolutional neural networks for semantic image segmentation is time-consuming, weakly supervised approaches that only require class tags have been proposed. In this work, we propose another form of supervision, namely image captions as they can be found on the Internet. These captions have two advantages. They do not require additional curation as it is the case for the clean class tags used by current weakly supervised approaches and they provide textual context for the classes present in an image. To leverage such textual context, we deploy a multi-modal network that learns a joint embedding of the visual representation of the image and the textual representation of the caption. The network estimates text activation maps (TAMs) for class names as well as compound concepts, i.e. combinations of nouns and their attributes. The TAMs of compound concepts describing classes of interest substantially improve the quality of the estimated class activation maps which are then used to train a network for semantic segmentation. We evaluate our method on the COCO dataset where it achieves state of the art results for weakly supervised image segmentation.",TRUE,research problem
R230,Computer Engineering,R74498,Linked open knowledge organization systems,S497760,R109121,has research problem,R109120,Knowledge Organization,"The Web is the most used Internet's service to create and share information. In large information collections, Knowledge Organization plays a key role in order to classify and to find valuable information. Likewise, Linked Open Data is a powerful approach for linking different Web datasets. Today, several Knowledge Organization Systems are published by using the design criteria of linked data, it facilitates the automatic processing of them. In this paper, we address the issue of traversing open Knowledge Organization Systems, considering difficulties associated with their dynamics and size. To fill this issue, we propose a method to identify irrelevant nodes on an open graph, thus reducing the time and the scope of the graph path and maximizing the possibilities of finding more relevant results. The approach for graph reduction is independent of the domain or task for which the open system will be used. The preliminary results of the proof of concept lead us to think that the method can be effective when the coverage of the concept of interest increases.",TRUE,research problem
R230,Computer Engineering,R74438,Application of ontologies in higher education: A systematic mapping study,S497397,R109088,has research problem,R109071,Knowledge Representation,"Universities and higher education institutes continually create knowledge. Hence it is necessary to keep a record of the academic and administrative information generated. Considering the vast amount of information managed by higher education institutions and the diversity of heterogeneous systems that can coexist within the same institution, it becomes necessary to use technologies for knowledge representation. Ontologies facilitate access to knowledge allowing the adequate exchange of information between people and between heterogeneous systems. This paper aims to identify existing research on the use and application of ontologies in higher education. From a set of 2792 papers, a study based on systematic mapping was conducted. A total of 52 research papers were reviewed and analyzed. Our results contribute key findings regarding how ontologies are used in higher education institutes, what technologies and tools are applied for the development of ontologies and what are the main vocabularies reused in the application of ontologies.",TRUE,research problem
R230,Computer Engineering,R74465,The use of ontologies for syllabus representation,S497580,R109102,has research problem,R109071,Knowledge Representation,"A syllabus is an important document for teachers, students, and institutions. It represents what and how a course will be conducted. Some authors have researched regarding the components for writing a good syllabus. However, there is not a standard format universally accepted by all educators. Even inside the same university, there are syllabuses written in different type of files like PDF, DOC, HTML, for instance. These kind of files are easily readable by humans but it is not the same by machines. On the other hand, ontologies are technologies of knowledge representation that allow setting information understandable for both humans and machines. In this paper, we present a literature review regarding the use of ontologies for syllabus representation. The objective of this paper is to know the use of ontologies to represent a syllabus semantically and to determine their score according to the five-stars of Linked Data vocabulary use scale. Our results show that some researchers have used ontologies for many purposes, widely concerning with learning objectives. Nevertheless, the ontologies created by the authors do not fulfill with the five-stars rating and do not have all components of a well-suited syllabus.",TRUE,research problem
R230,Computer Engineering,R74432,Towards a learning analytics approach for supporting discovery and reuse of OER an approach based on Social Networks Analysis and Linked Open Data,S497357,R109085,has research problem,R109084,Learning Analytics,"The OER movement poses challenges inherent to discovering and reuse digital educational materials from highly heterogeneous and distributed digital repositories. Search engines on today's Web of documents are based on keyword queries. Search engines don't provide a sufficiently comprehensive solution to answer a query that permits personalization of open educational materials. To find OER on the Web today, users must first be well informed of which OER repositories potentially contain the data they want and what data model describes these datasets, before using this information to create structured queries. Learning analytics requires not only to retrieve the useful information and knowledge about educational resources, learning processes and relations among learning agents, but also to transform the data gathered in actionable e interoperable information. Linked Data is considered as one of the most effective alternatives for creating global shared information spaces, it has become an interesting approach for discovering and enriching open educational resources data, as well as achieving semantic interoperability and re-use between multiple OER repositories. In this work, an approach based on Semantic Web technologies, the Linked Data guidelines, and Social Network Analysis methods are proposed as a fundamental way to describing, analyzing and visualizing knowledge sharing on OER initiatives.",TRUE,research problem
R230,Computer Engineering,R74440,EMadrid project: MOOCs and learning analytics,S497418,R109089,has research problem,R109084,Learning Analytics,"Both, MOOCs and learning analytics, are two emergent topics in the field of educational technology. This paper shows the main contributions of the eMadrid network in these two topics during the last years (2014-2016), as well as the planned future works in the network. The contributions in the field of the MOOCs include the design and authoring of materials, the improvement of the peer review process or experiences about teaching these courses and institutional adoption. The contributions in the field of learning analytics include the inference of higher level information, the development of dashboards, the evaluation of the learning process, or the prediction and clustering.",TRUE,research problem
R230,Computer Engineering,R74463,Application of data anonymization in Learning Analytics,S497568,R109101,has research problem,R109084,Learning Analytics,"Thanks to the proliferation of academic services on the Web and the opening of educational content, today, students can access a large number of free learning resources, and interact with value-added services. In this context, Learning Analytics can be carried out on a large scale thanks to the proliferation of open practices that promote the sharing of datasets. However, the opening or sharing of data managed through platforms and educational services, without considering the protection of users' sensitive data, could cause some privacy issues. Data anonymization is a strategy that should be adopted during lifecycle of data processing to reduce security risks. In this research, we try to characterize how much and how the anonymization techniques have been used in learning analytics proposals. From an initial exploration made in the Scopus database, we found that less than 6% of the papers focused on LA have also covered the privacy issue. Finally, through a specific case, we applied data anonymization and learning analytics to demonstrate that both technique can be integrated, in a reliably and effectively way, to support decision making in educational institutions.",TRUE,research problem
R230,Computer Engineering,R74401,A rating system that open-data repositories must satisfy to be considered OER: Reusing open data resources in teaching,S497189,R109066,has research problem,R109065,Open Data,"The re-use of digital content is an essential part of the knowledge-based economy. The online accessibility of open materials will make possible for teachers, students and self-learners to use them for leisure, studies or work. Recent studies have focused on the use or reuse of digital resources with specific reference to the open data field. Moreover, open data can be reused — for both commercial and non-commercial purposes — for uses such as developing learning and educational content, with full respect for copyright and related rights. This work presents a rating system for open data as OER is proposed. This rating system present a framework to search, download and re-use digital resources by teachers and students. The rating system proposed is built from the ground upwards on Open data principles and Semantic Web technologies. By following open data best practices and Linked Data principles, open data initiative ensures that data hosted can be fully connected into a Web of Linked Data. This work aims at sharing good practices for administrative/academics/researchers on gathering and disseminating good quality data, using interoperable standards and overcoming legal obstacles. In this way, open data becomes a tool for teaching and educational environments to improve engagement and student learning. Designing an open data repository that manages and shares information of different catalogues and an evaluation tool to OER will allow teachers and students to increase educational contents and to improve relationship between open data initiatives and education context.",TRUE,research problem
R230,Computer Engineering,R74407,OCW-S: Enablers for building sustainable open education evolving OCW and MOOC,S497225,R109070,has research problem,R109069,Open Education,"MOOCs (Massive Open Online Courses) promote mass education through collaboration scenarios between participants. The purpose of this paper is to analyze the characteristics of MOOCs that can be incorporated into environments such as OpenCourseWare. We develop a study on how the concept of OCW evolves nowadays: focused on the distribution of open educational resources, towards a concept of collaborative open massive training (as with the MOOC), that not all current learning platforms provide. This new generation of social OCW will be called OCW-S.",TRUE,research problem
R230,Computer Engineering,R74414,"Open educational practices and resources based on social software, UTPL experience",S497266,R109074,has research problem,R109069,Open Education,"Open Educational Resources (OER) are a direct reaction to knowledge privatization; they foment their exchange to the entire world with the aim of increase the human intellectual capacity.In this document, we describe the committment of Universidad Técnica Particular de Loja (UTPL), Ecuador, in the promotion of open educational practices and resources and their impact in society and knowledge economy through the use of Social Software.",TRUE,research problem
R230,Computer Engineering,R74434,Impact of open educational resources in higher education institutions in spain and latin americas through social network analysis,S497378,R109086,has research problem,R109069,Open Education,"The modernization of the European education system is allowing undertake new missions such as university responsibility. Higher education institutions have felt the need to open and provide educational resources. OpenCourseWare (OCW) and Open Educational Resources (OER) in general contribute to the dissemination of knowledge and public construction, providing a social good. Openness also means sharing, reuse information and create content in an open environment to improve and maintain the quality of education, at the same time provide an advertising medium for higher education institutions. OpenCourseWare Consortium himself and other educational organizations stress the importance of taking measurement and evaluation programs for an institution OCW. Two main reasons are argued: Allow monitoring the usefulness and usability of Open Educational Resources (OER) and the efficiency of the publishing process, helping to identify and implement improvements that may be relevant over time. Measure the use and impact of the parties involved in the OCW site helps ensure their commitment. This paper the authors evaluate social network analysis like a technique to measure the impact of open educational resources, and shows the results of applied this kind analysis in OpenCourseWare Spanish and Latin American, trying to tackle the above problems by extending the impact of resource materials in the new innovative teaching strategies and mission of university social responsibility providing updated information on the impact of OCW materials, and showing the true potential inherent in the current OCW repositories in Latin American universities. To evaluate the utility of Social Network Analysis in open educational resources, different social networks were built, using the explicit relationships between different participants of OCW initiatives, e.g. co-authorship, to show the current state of OCW resources. And through the implicit relationships, e.g. tagging, to assess the potential of OCW. To measure the impact of OCW, the social relationships, drawing from the information published by universities of Spain and Latin American, between OCW actors are described and assessed using social networks analysis techniques and metrics, the results obtained let to: present a current state of OCW in Latin America, know the informal organization behind the OCW initiatives, the folksonomies arise from using tags to describe courses, and potential collaborative networks between: universities, professors linked to production of OCW.",TRUE,research problem
R230,Computer Engineering,R74444,Open educational practices and resources based on social software: UTPL experience,S497436,R109091,has research problem,R109069,Open Education,"Open Educational Resources (OER) are a direct reaction to knowledge privatization; they foment their exchange to the entire world with the aim of increase the human intellectual capacity.In this document, we describe the committment of Universidad Técnica Particular de Loja (UTPL), Ecuador, in the promotion of open educational practices and resources and their impact in society and knowledge economy through the use of Social Software.",TRUE,research problem
R230,Computer Engineering,R74448,Promotion of self-learning by means of Open Educational Resources and semantic technologies,S497471,R109093,has research problem,R109069,Open Education,"Open Educational Resources (OER) have the potential to encourage self-learning and lifelong learning. However, there are some barriers that make it difficult to find the suitable information. The successful adoption of OER in processes of informal or self-directed learning will depend largely on what the learner is able to reach without a guide or a tutor. However, the OERs quality and their particular features can positively influence into the motivation of the self-learner to achieve their goals. In this paper, the authors present an approach to enhance the OER discovery by self-learners. This is designed to leverage open knowledge sources, which are described by semantic technologies. User's data and OER encoded in formal languages can be linked and thus find paths for suggest the most appropriate resources.",TRUE,research problem
R230,Computer Engineering,R74453,OER development and promotion. Outcomes of an international research project on the OpenCourseWare model,S497488,R109096,has research problem,R109069,Open Education,"In this paper, we describe the successful results of an international research project focused on the use of Web technology in the educational context. The article explains how this international project, funded by public organizations and developed over the last two academic years, focuses on the area of open educational resources (OER) and particularly the educational content of the OpenCourseWare (OCW) model. This initiative has been developed by a research group composed of researchers from three countries. The project was enabled by the Universidad Politecnica de Madrid OCW Office's leadership of the Consortium of Latin American Universities and the distance education know-how of the Universidad Tecnica Particular de Loja (UTPL, Ecuador). We give a full account of the project, methodology, main outcomes and validation. The project results have further consolidated the group, and increased the maturity of group members and networking with other groups in the area. The group is now participating in other research projects that continue the lines developed here.",TRUE,research problem
R230,Computer Engineering,R74481,Open educational resources and standards in the eMadrid network,S497671,R109111,has research problem,R109069,Open Education,"This paper presents the main results achieved in the program eMadrid Program in Open Educational Resources, Free Software, Open Data, and about formats and standardization of content and services.",TRUE,research problem
R230,Computer Engineering,R74485,Roadmap towards the openness of educational resources: Outcomes of the participation in the eMadrid network,S497689,R109113,has research problem,R109069,Open Education,The contribution of GICAC UPM group in the eMadrid initiative has focused to the application of semantic web technologies in the Open Education context. This work presents the main results obtained through different applications and models according to a roadmap followed by the group.,TRUE,research problem
R230,Computer Engineering,R74436,Finding OERs with social-semantic search,S497383,R109087,has research problem,R109059,Semantic Search,"Social and semantic web can be complementary approaches searching web resources. This cooperative approach lets enable a semantic search engine to find accurate results and annotate web resources. This work develops the components of a Social-Semantic search architecture proposed by the authors to find open educational resources (OER). By means of metadata enrichment and logic inference, OER consumers get more precise results from general search engines. The Search prototype has been applied to find OER related with computers and engineering in a domain provided by OpenCourseWare materials from two universities, MIT and UTPL. The semantic search answers reasonably well the queta collected.",TRUE,research problem
R132,Computer Sciences,R132399,Prioritized Experience Replay,S526538,R132601,has research problem,R124884,Atari Games,"Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.",TRUE,research problem
R132,Computer Sciences,R133007,Learning values across many orders of magnitude,S527861,R133008,has research problem,R124884,Atari Games,"Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were all clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using the adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance.",TRUE,research problem
R132,Computer Sciences,R133207,Deep Exploration via Bootstrapped DQN,S528392,R133208,has research problem,R124884,Atari Games,"Efficient exploration in complex environments remains a major challenge for reinforcement learning. We propose bootstrapped DQN, a simple algorithm that explores in a computationally and statistically efficient manner through use of randomized value functions. Unlike dithering strategies such as epsilon-greedy exploration, bootstrapped DQN carries out temporally-extended (or deep) exploration; this can lead to exponentially faster learning. We demonstrate these benefits in complex stochastic MDPs and in the large-scale Arcade Learning Environment. Bootstrapped DQN substantially improves learning times and performance across most Atari games.",TRUE,research problem
R132,Computer Sciences,R133383,Value Prediction Network,S528866,R133384,has research problem,R124884,Atari Games,"This paper proposes a novel deep reinforcement learning (RL) architecture, called Value Prediction Network (VPN), which integrates model-free and model-based RL methods into a single neural network. In contrast to typical model-based RL methods, VPN learns a dynamics model whose abstract states are trained to make option-conditional predictions of future values (discounted sum of rewards) rather than of future observations. Our experimental results show that VPN has several advantages over both model-free and model-based baselines in a stochastic environment where careful planning is required but building an accurate observation-prediction model is difficult. Furthermore, VPN outperforms Deep Q-Network (DQN) on several Atari games even with short-lookahead planning, demonstrating its potential as a new way of learning a good state representation.",TRUE,research problem
R132,Computer Sciences,R133937,Evolution Strategies as a Scalable Alternative to Reinforcement Learning,S530578,R133938,has research problem,R124884,Atari Games,"We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.",TRUE,research problem
R132,Computer Sciences,R134043,Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings,S530894,R134059,has research problem,R124884,Atari Games,"Recent advances in off-policy deep reinforcement learning (RL) have led to impressive success in complex tasks from visual observations. Experience replay improves sample-efficiency by reusing experiences from the past, and convolutional neural networks (CNNs) process high-dimensional inputs effectively. However, such techniques demand high memory and computational bandwidth. In this paper, we present Stored Embeddings for Efficient Reinforcement Learning (SEER), a simple modification of existing off-policy RL methods, to address these computational and memory requirements. To reduce the computational overhead of gradient updates in CNNs, we freeze the lower layers of CNN encoders early in training due to early convergence of their parameters. Additionally, we reduce memory requirements by storing the low-dimensional latent vectors for experience replay instead of high-dimensional images, enabling an adaptive increase in the replay buffer capacity, a useful technique in constrained-memory settings. In our experiments, we show that SEER does not degrade the performance of RL agents while significantly saving computation and memory across a diverse set of DeepMind Control environments and Atari games.",TRUE,research problem
R132,Computer Sciences,R134062,Playing Atari with Six Neurons,S530910,R134063,has research problem,R124884,Atari Games,"Deep reinforcement learning, applied to vision-based problems like Atari games, maps pixels directly to actions; internally, the deep neural network bears the responsibility of both extracting useful information and making decisions based on it. By separating the image processing from decision-making, one could better understand the complexity of each task, as well as potentially find smaller policy representations that are easier for humans to understand and may generalize better. To this end, we propose a new method for learning policies and compact state representations separately but simultaneously for policy approximation in reinforcement learning. State representations are generated by an encoder based on two novel algorithms: Increasing Dictionary Vector Quantization makes the encoder capable of growing its dictionary size over time, to address new observations as they appear in an open-ended online-learning context; Direct Residuals Sparse Coding encodes observations by disregarding reconstruction error minimization, and aiming instead for highest information inclusion. The encoder autonomously selects observations online to train on, in order to maximize code sparsity. As the dictionary size increases, the encoder produces increasingly larger inputs for the neural network: this is addressed by a variation of the Exponential Natural Evolution Strategies algorithm which adapts its probability distribution dimensionality along the run. We test our system on a selection of Atari games using tiny neural networks of only 6 to 18 neurons (depending on the game's controls). These are still capable of achieving results comparable---and occasionally superior---to state-of-the-art techniques which use two orders of magnitude more neurons.",TRUE,research problem
R132,Computer Sciences,R134171,Fully Parameterized Quantile Function for Distributional Reinforcement Learning,S531213,R134172,has research problem,R124884,Atari Games,"Distributional Reinforcement Learning (RL) differs from traditional RL in that, rather than the expectation of total returns, it estimates distributions and has achieved state-of-the-art performance on Atari Games. The key challenge in practical distributional RL algorithms lies in how to parameterize estimated distributions so as to better approximate the true continuous distribution. Existing distributional RL algorithms parameterize either the probability side or the return value side of the distribution function, leaving the other side uniformly fixed as in C51, QR-DQN or randomly sampled as in IQN. In this paper, we propose fully parameterized quantile function that parameterizes both the quantile fraction axis (i.e., the x-axis) and the value axis (i.e., y-axis) for distributional RL. Our algorithm contains a fraction proposal network that generates a discrete set of quantile fractions and a quantile value network that gives corresponding quantile values. The two networks are jointly trained to find the best approximation of the true distribution. Experiments on 55 Atari Games show that our algorithm significantly outperforms existing distributional RL algorithms and creates a new record for the Atari Learning Environment for non-distributed agents.",TRUE,research problem
R132,Computer Sciences,R134238,Model-Free Episodic Control with State Aggregation,S531443,R134239,has research problem,R124884,Atari Games,"Episodic control provides a highly sample-efficient method for reinforcement learning while enforcing high memory and computational requirements. This work proposes a simple heuristic for reducing these requirements, and an application to Model-Free Episodic Control (MFEC) is presented. Experiments on Atari games show that this heuristic successfully reduces MFEC computational demands while producing no significant loss of performance when conservative choices of hyperparameters are used. Consequently, episodic control becomes a more feasible option when dealing with reinforcement learning tasks.",TRUE,research problem
R132,Computer Sciences,R134288,Exploration by Random Network Distillation,S531607,R134289,has research problem,R124884,Atari Games,"We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.",TRUE,research problem
R132,Computer Sciences,R134312,Count-Based Exploration with Neural Density Models,S531708,R134322,has research problem,R124884,Atari Games,"Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to non-tabular reinforcement learning. This pseudo-count was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma's Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.'s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma's Revenge.",TRUE,research problem
R132,Computer Sciences,R134401,Incentivizing Exploration In Reinforcement Learning With Deep Predictive Models,S531956,R134402,has research problem,R124884,Atari Games,"Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark.",TRUE,research problem
R132,Computer Sciences,R134413,RUDDER: Return Decomposition for Delayed Rewards,S531996,R134414,has research problem,R124884,Atari Games,"We propose RUDDER, a novel reinforcement learning approach for delayed rewards in finite Markov decision processes (MDPs). In MDPs the Q-values are equal to the expected immediate reward plus the expected future rewards. The latter are related to bias problems in temporal difference (TD) learning and to high variance problems in Monte Carlo (MC) learning. Both problems are even more severe when rewards are delayed. RUDDER aims at making the expected future rewards zero, which simplifies Q-value estimation to computing the mean of the immediate reward. We propose the following two new concepts to push the expected future rewards toward zero. (i) Reward redistribution that leads to return-equivalent decision processes with the same optimal policies and, when optimal, zero expected future rewards. (ii) Return decomposition via contribution analysis which transforms the reinforcement learning task into a regression task at which deep learning excels. On artificial tasks with delayed rewards, RUDDER is significantly faster than MC and exponentially faster than Monte Carlo Tree Search (MCTS), TD(λ), and reward shaping approaches. At Atari games, RUDDER on top of a Proximal Policy Optimization (PPO) baseline improves the scores, which is most prominent at games with delayed rewards. Source code is available at \url{this https URL} and demonstration videos at \url{this https URL}.",TRUE,research problem
R132,Computer Sciences,R131636,Multilingual Models for Compositional Distributed Semantics,S523296,R131637,has research problem,R124044,Cross-Lingual Document Classification,"We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.",TRUE,research problem
R132,Computer Sciences,R131658,A Corpus for Multilingual Document Classification in Eight Languages,S523439,R131685,has research problem,R124044,Cross-Lingual Document Classification,"Cross-lingual document classification aims at training a document classifier on resources in one language and transferring it to a different language without any additional resources. Several approaches have been proposed in the literature and the current best practice is to evaluate them on a subset of the Reuters Corpus Volume 2. However, this subset covers only few languages (English, German, French and Spanish) and almost all published works focus on the transfer between English and German. In addition, we have observed that the class prior distributions differ significantly between the languages. We argue that this complicates the evaluation of the multilinguality. In this paper, we propose a new subset of the Reuters corpus with balanced class priors for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover languages which are very different with respect to syntax, morphology, etc. We provide strong baselines for all language transfer directions using multilingual word and sentence embeddings respectively. Our goal is to offer a freely available framework to evaluate cross-lingual document classification, and we hope to foster by these means, research in this important area.",TRUE,research problem
R132,Computer Sciences,R131694,Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond,S523478,R131695,has research problem,R124044,Cross-Lingual Document Classification,"Abstract We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.",TRUE,research problem
R132,Computer Sciences,R130466,Data-to-Text Generation with Content Selection and Planning,S518846,R130467,has research problem,R120250,Data-to-Text Generation,"Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model1 outperforms strong baselines improving the state-of-the-art on the recently released RotoWIRE dataset.",TRUE,research problem
R132,Computer Sciences,R134494,HDLTex: Hierarchical Deep Learning for Text Classification,S532245,R134495,has research problem,R125963,Document Classification,"Increasingly large document collections require improved information processing methods for searching, retrieving, and organizing text. Central to these information processing methods is document classification, which has become an important application for supervised learning. Recently the performance of traditional supervised classifiers has degraded as the number of documents has increased. This is because along with growth in the number of documents has come an increase in the number of categories. This paper approaches this problem differently from current document classification methods that view the problem as multi-class classification. Instead we perform hierarchical classification using an approach we call Hierarchical Deep Learning for Text classification (HDLTex). HDLTex employs stacks of deep learning architectures to provide specialized understanding at each level of the document hierarchy.",TRUE,research problem
R132,Computer Sciences,R134502,BilBOWA: Fast Bilingual Distributed Representations without Word Alignments,S532272,R134503,has research problem,R125963,Document Classification,"We introduce BilBOWA (Bilingual Bag-of-Words without Alignments), a simple and computationally-efficient model for learning bilingual distributed representations of words which can scale to large monolingual datasets and does not require word-aligned parallel training data. Instead it trains directly on monolingual data and extracts a bilingual signal from a smaller set of raw-text sentence-aligned data. This is achieved using a novel sampled bag-of-words cross-lingual objective, which is used to regularize two noise-contrastive language models for efficient cross-lingual feature learning. We show that bilingual embeddings learned using the proposed model outperform state-of-the-art methods on a cross-lingual document classification task as well as a lexical translation task on WMT11 data.",TRUE,research problem
R132,Computer Sciences,R131146,Deep Convolutional Neural Networks and Data Augmentation for Environmental Sound Classification,S521866,R131150,has research problem,R122341,Environmental Sound Classification,"The ability of deep convolutional neural networks (CNNs) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep CNN architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a “shallow” dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.",TRUE,research problem
R132,Computer Sciences,R34961,Introduction: A Survey of the Evolutionary Computation Techniques for Software Engineering,S122211,R34963,has research problem,R34964,Evolutionary Computation Techniques,"Evolutionary algorithms are methods, which imitate the natural evolution process. An artificial evolution process evaluates fitness of each individual, which are solution candidates. The next population of candidate solutions is formed by using the good properties of the current population by applying different mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were searched in the literature. Because the entire book presents some interesting information chapter of some evolutionary computation techniques applied into the software engineering we consider necessary to present into this chapter a short survey of some techniques which are very useful in the future research of this field. The majority of evolutionary algorithm applications related to software engineering were about software design or testing. Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software (Abran and Moore, 2004). The purpose of this book is to open a door in order to find out the optimization problems in different software engineering problems. The idea of putting together the application of evolutionary computation and evolutionary optimization techniques in software engineering problems provided to the researchers the possibility to study some existing",TRUE,research problem
R132,Computer Sciences,R134578,An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale,S532755,R134625,has research problem,R38570,Image Classification,"While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.",TRUE,research problem
R132,Computer Sciences,R134713,Going deeper with Image Transformers,S533218,R134771,has research problem,R38570,Image Classification,"Transformers have been recently adapted for large scale image classification, achieving high scores shaking up the long supremacy of convolutional neural networks. However the optimization of vision transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth, for instance we obtain 86.5% top-1 accuracy on Imagenet when training with no external data, we thus attain the current state of the art with less floating-point operations and parameters. Our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data. We share our code and models.",TRUE,research problem
R132,Computer Sciences,R134775,LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference,S533432,R134850,has research problem,R38570,Image Classification,"We design a family of image classification architectures that optimize the trade-off between accuracy and efficiency in a high-speed regime. Our work exploits recent findings in attention-based architectures, which are competitive on highly parallel processing hardware. We revisit principles from the extensive literature on convolutional neural networks to apply them to transformers, in particular activation maps with decreasing resolutions. We also introduce the attention bias, a new way to integrate positional information in vision transformers.As a result, we propose LeViT: a hybrid neural network for fast inference image classification. We consider different measures of efficiency on different hardware platforms, so as to best reflect a wide range of application scenarios. Our extensive experiments empirically validate our technical choices and show they are suitable to most architectures. Overall, LeViT significantly outperforms existing convnets and vision transformers with respect to the speed/accuracy tradeoff. For example, at 80% ImageNet top-1 accuracy, LeViT is 5 times faster than EfficientNet on CPU. We release the code at https://github.com/facebookresearch/LeViT.",TRUE,research problem
R132,Computer Sciences,R134998,Training data-efficient image transformers & distillation through attention,S534086,R135041,has research problem,R38570,Image Classification,"Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption by the larger community. In this work, with an adequate training scheme, we produce a competitive convolution-free transformer by training on Imagenet only. We train it on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. We share our code and models to accelerate community advances on this line of research. Additionally, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this tokenbased distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 84.4% accuracy) and when transferring to other tasks.",TRUE,research problem
R132,Computer Sciences,R135142,Some Improvements on Deep Convolutional Neural Network Based Image Classification,S534435,R135143,has research problem,R38570,Image Classification,"Abstract: We investigate multiple techniques to improve upon the current state of the art deep convolutional neural network based image classification pipeline. The techniques include adding more image transformations to training data, adding more transformations to generate additional predictions at test time and using complementary models applied to higher resolution images. This paper summarizes our entry in the Imagenet Large Scale Visual Recognition Challenge 2013. Our system achieved a top 5 classification error rate of 13.55% using no external data which is over a 20% relative improvement on the previous year's winner.",TRUE,research problem
R132,Computer Sciences,R131755,Knowledge Graph Embedding with Atrous Convolution and Residual Learning,S523671,R131756,has research problem,R124628,Knowledge Graph Embedding,"Knowledge graph embedding is an important task and it will benefit lots of downstream applications. Currently, deep neural networks based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, to address the original information forgotten issue and vanishing/exploding gradient issue, it uses the residual learning method. Third, it has simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most of evaluation metrics. The source codes of our model could be found at https://github.com/neukg/AcrE.",TRUE,research problem
R132,Computer Sciences,R130572,Compressive Transformers for Long-Range Sequence Modelling,S519226,R130580,has research problem,R120872,Language Modelling,"We present the Compressive Transformer, an attentive sequence model which compresses past memories for long-range sequence learning. We find the Compressive Transformer obtains state-of-the-art language modelling results in the WikiText-103 and Enwik8 benchmarks, achieving 17.1 ppl and 0.97 bpc respectively. We also find it can model high-frequency speech effectively and can be used as a memory mechanism for RL, demonstrated on an object matching task. To promote the domain of long-range sequence learning, we propose a new open-vocabulary language modelling benchmark derived from books, PG-19.",TRUE,research problem
R132,Computer Sciences,R130733,Multiplicative LSTM for sequence modelling,S520071,R130745,has research problem,R120872,Language Modelling,"We introduce multiplicative LSTM (mLSTM), a recurrent neural network architecture for sequence modelling that combines the long short-term memory (LSTM) and multiplicative recurrent neural network architectures. mLSTM is characterised by its ability to have different recurrent transition functions for each possible input, which we argue makes it more expressive for autoregressive density estimation. We demonstrate empirically that mLSTM outperforms standard LSTM and its deep variants for a range of character level language modelling tasks. In this version of the paper, we regularise mLSTM to achieve 1.27 bits/char on text8 and 1.24 bits/char on Hutter Prize. We also apply a purely byte-level mLSTM on the WikiText-2 dataset to achieve a character level entropy of 1.26 bits/char, corresponding to a word level perplexity of 88.8, which is comparable to word level LSTMs regularised in similar ways on the same task.",TRUE,research problem
R132,Computer Sciences,R130768,Hierarchical Multiscale Recurrent Neural Networks,S520163,R130773,has research problem,R120872,Language Modelling,"Learning both hierarchical and temporal representation has been among the long-standing challenges of recurrent neural networks. Multiscale recurrent neural networks have been considered as a promising approach to resolve this issue, yet there has been a lack of empirical evidence showing that this type of models can actually capture the temporal dependencies by discovering the latent hierarchical structure of the sequence. In this paper, we propose a novel multiscale approach, called the hierarchical multiscale recurrent neural networks, which can capture the latent hierarchical structure in the sequence by encoding the temporal dependencies with different timescales using a novel update mechanism. We show some evidence that our proposed multiscale architecture can discover underlying hierarchical structure in the sequences without using explicit boundary information. We evaluate our proposed model on character-level language modelling and handwriting sequence modelling.",TRUE,research problem
R132,Computer Sciences,R130777,HyperNetworks,S520202,R130782,has research problem,R120872,Language Modelling,"This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. We apply hypernetworks to generate adaptive weights for recurrent networks. In this case, hypernetworks can be viewed as a relaxed form of weight-sharing across layers. In our implementation, hypernetworks are trained jointly with the main network in an end-to-end fashion. Our main result is that hypernetworks can generate non-shared weights for LSTM and achieve state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks.",TRUE,research problem
R132,Computer Sciences,R131015,Pay Attention when Required,S521313,R131022,has research problem,R120872,Language Modelling,"Transformer-based models consist of interleaved feed-forward blocks - that capture content meaning, and relatively more expensive self-attention blocks - that capture context meaning. In this paper, we explored trade-offs and ordering of the blocks to improve upon the current Transformer architecture and proposed PAR Transformer. It needs 35% lower compute time than Transformer-XL achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, and retains the perplexity on WikiText-103 language modelling benchmark. We further validated our results on text8 and enwiki8 datasets, as well as on the BERT model.",TRUE,research problem
R132,Computer Sciences,R129673,Phrase-Based & Neural Unsupervised Machine Translation,S515687,R129697,has research problem,R117118,Machine Translation,"Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semi-supervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.",TRUE,research problem
R132,Computer Sciences,R129709,Random Feature Attention,S515742,R129710,has research problem,R117118,Machine Translation,"Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep. While attention is powerful, it does not scale efficiently to long sequences due to its quadratic time and space complexity in the sequence length. We propose RFA, a linear time and space attention that uses random feature methods to approximate the softmax function, and explore its application in transformers. RFA can be used as a drop-in replacement for conventional softmax attention and offers a straightforward way of learning with recency bias through an optional gating mechanism. Experiments on language modeling and machine translation demonstrate that RFA achieves similar or better performance compared to strong transformer baselines. In the machine translation experiment, RFA decodes twice as fast as a vanilla transformer. Compared to existing efficient transformer variants, RFA is competitive in terms of both accuracy and efficiency on three long text classification datasets. Our analysis shows that RFA’s efficiency gains are especially notable on long sequences, suggesting that RFA will be particularly useful in tasks that require working with large inputs, fast decoding speed, or low memory footprints.",TRUE,research problem
R132,Computer Sciences,R129725,Unsupervised Statistical Machine Translation,S515792,R129726,has research problem,R117118,Machine Translation,"While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018). Despite the potential of this approach for low-resource settings, existing systems are far behind their supervised counterparts, limiting their practical interest. In this paper, we propose an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems. Our method profits from the modular architecture of SMT: we first induce a phrase table from monolingual corpora through cross-lingual embedding mappings, combine it with an n-gram language model, and fine-tune hyperparameters through an unsupervised MERT variant. In addition, iterative backtranslation improves results further, yielding, for instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and English-French, respectively, an improvement of more than 7-10 BLEU points over previous unsupervised systems, and closing the gap with supervised SMT (Moses trained on Europarl) down to 2-5 BLEU points. Our implementation is available at https://github.com/artetxem/monoses.",TRUE,research problem
R132,Computer Sciences,R129773,Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement,S515939,R129774,has research problem,R117118,Machine Translation,"We propose a conditional non-autoregressive neural sequence model based on iterative refinement. The proposed model is designed based on the principles of latent variable models and denoising autoencoders, and is generally applicable to any sequence generation task. We extensively evaluate the proposed model on machine translation (En-De and En-Ro) and image caption generation, and observe that it significantly speeds up decoding while maintaining the generation quality comparable to the autoregressive counterpart.",TRUE,research problem
R132,Computer Sciences,R129787,Linguistic Input Features Improve Neural Machine Translation,S515979,R129788,has research problem,R117118,Machine Translation,"Neural machine translation has recently achieved impressive results, while using little in the way of external linguistic information. In this paper we show that the strong learning capability of neural MT models does not make linguistic features redundant; they can be easily incorporated to provide further improvements in performance. We generalize the embedding layer of the encoder in the attentional encoder-decoder architecture to support the inclusion of arbitrary features, in addition to the baseline word feature. We add morphological features, part-of-speech tags, and syntactic dependency labels as input features to English->German and English->Romanian neural machine translation systems. In experiments on WMT16 training and test sets, we find that linguistic input features improve model quality according to three metrics: perplexity, BLEU and CHRF3. An open-source implementation of our neural MT system is available, as are sample files and configurations.",TRUE,research problem
R132,Computer Sciences,R129793,Unsupervised Neural Machine Translation with Weight Sharing,S516001,R129794,has research problem,R117118,Machine Translation,"Unsupervised neural machine translation (NMT) is a recently proposed approach for machine translation which aims to train the model without using any labeled data. The models proposed for unsupervised NMT often use only one shared encoder to map the pairs of sentences from different languages to a shared-latent space, which is weak in keeping the unique and internal characteristics of each language, such as the style, terminology, and sentence structure. To address this issue, we introduce an extension by utilizing two independent encoders but sharing some partial weights which are responsible for extracting high-level representations of the input sentences. Besides, two different generative adversarial networks (GANs), namely the local GAN and global GAN, are proposed to enhance the cross-language translation. With this new approach, we achieve significant improvements on English-German, English-French and Chinese-to-English translation tasks.",TRUE,research problem
R132,Computer Sciences,R129799,Unsupervised Machine Translation Using Monolingual Corpora Only,S516038,R129800,has research problem,R117118,Machine Translation,"Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time.",TRUE,research problem
R132,Computer Sciences,R129805,Improving Neural Language Modeling via Adversarial Training,S516068,R129809,has research problem,R117118,Machine Translation,"Recently, substantial progress has been made in language modeling by using deep neural networks. However, in practice, large scale neural language models have been shown to be prone to overfitting. In this paper, we present a simple yet highly effective adversarial training mechanism for regularizing neural language models. The idea is to introduce adversarial noise to the output embedding layer while training the models. We show that the optimal adversarial noise yields a simple closed-form solution, thus allowing us to develop a simple and time efficient algorithm. Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models. Empirically, we show that our method improves on the single model state-of-the-art results for language modeling on Penn Treebank (PTB) and Wikitext-2, achieving test perplexity scores of 46.01 and 38.07, respectively. When applied to machine translation, our method improves over various transformer-based translation baselines in BLEU scores on the WMT14 English-German and IWSLT14 German-English tasks.",TRUE,research problem
R132,Computer Sciences,R130126,XLNet: Generalized Autoregressive Pretraining for Language Understanding,S517297,R130189,has research problem,R120724,Natural Language Inference,"With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.",TRUE,research problem
R132,Computer Sciences,R131303,Neural Architecture Transfer,S523053,R131554,has research problem,R123710,Neural Architecture Search,"Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture Transfer (NAT) to overcome this limitation. NAT is designed to efficiently generate task-specific custom models that are competitive under multiple conflicting objectives. To realize this goal we learn task-specific supernets from which specialized subnets can be sampled without any additional training. The key to our approach is an integrated online transfer learning and many-objective evolutionary search procedure. A pre-trained supernet is iteratively adapted while simultaneously searching for task-specific subnets. We demonstrate the efficacy of NAT on 11 benchmark image classification tasks ranging from large-scale multi-class to small-scale fine-grained datasets. In all cases, including ImageNet, NATNets improve upon the state-of-the-art under mobile settings (≤ 600M Multiply-Adds). Surprisingly, small-scale fine-grained datasets benefit the most from NAT. At the same time, the architecture search and transfer is orders of magnitude more efficient than existing NAS methods. Overall, experimental evaluation indicates that, across diverse image classification tasks and computational objectives, NAT is an appreciably more effective alternative to conventional transfer learning of fine-tuning weights of an existing network architecture learned on standard datasets. Code is available at https://github.com/human-analysis/neural-architecture-transfer.",TRUE,research problem
R132,Computer Sciences,R130126,XLNet: Generalized Autoregressive Pretraining for Language Understanding,S517206,R130160,has research problem,R2061,Question Answering,"With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking.",TRUE,research problem
R132,Computer Sciences,R130221,Reading Wikipedia to Answer Open-Domain Questions,S517514,R130237,has research problem,R2061,Question Answering,This paper proposes to tackle open-domain question answering using Wikipedia as the unique knowledge source: the answer to any factoid question is a text span in a Wikipedia article. This task of machine reading at scale combines the challenges of document retrieval (finding the relevant articles) with that of machine comprehension of text (identifying the answer spans from those articles). Our approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs. Our experiments on multiple existing QA datasets indicate that (1) both modules are highly competitive with respect to existing counterparts and (2) multitask learning using distant supervision on their combination is an effective complete system on this challenging task.,TRUE,research problem
R132,Computer Sciences,R130241,Frustratingly Easy Natural Question Answering,S517537,R130242,has research problem,R2061,Question Answering,"Existing literature on Question Answering (QA) mostly focuses on algorithmic novelty, data augmentation, or increasingly large pre-trained language models like XLNet and RoBERTa. Additionally, a lot of systems on the QA leaderboards do not have associated research documentation in order to successfully replicate their experiments. In this paper, we outline these algorithmic components such as Attention-over-Attention, coupled with data augmentation and ensembling strategies that have shown to yield state-of-the-art results on benchmark datasets like SQuAD, even achieving super-human performance. Contrary to these prior results, when we evaluate on the recently proposed Natural Questions benchmark dataset, we find that an incredibly simple approach of transfer learning from BERT outperforms the previous state-of-the-art system trained on 4 million more examples than ours by 1.9 F1 points. Adding ensembling strategies further improves that number by 2.3 F1 points.",TRUE,research problem
R132,Computer Sciences,R130257,Stochastic Answer Networks for Machine Reading Comprehension,S517640,R130269,has research problem,R2061,Question Answering,"We propose a simple yet robust stochastic answer network (SAN) that simulates multi-step reasoning in machine reading comprehension. Compared to previous work such as ReasoNet which used reinforcement learning to determine the number of steps, the unique feature is the use of a kind of stochastic prediction dropout on the answer module (final layer) of the neural network during the training. We show that this simple trick improves robustness and achieves results competitive to the state-of-the-art on the Stanford Question Answering Dataset (SQuAD), the Adversarial SQuAD, and the Microsoft MAchine Reading COmprehension Dataset (MS MARCO).",TRUE,research problem
R132,Computer Sciences,R130276,FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension,S517712,R130289,has research problem,R2061,Question Answering,"This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of ""history of word"" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the ""history of word"" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.",TRUE,research problem
R132,Computer Sciences,R130308,Dynamic Coattention Networks For Question Answering,S517824,R130319,has research problem,R2061,Question Answering,"Several deep learning models have been proposed for question answering. However, due to their single-pass nature, they have no way to recover from local maxima corresponding to incorrect answers. To address this problem, we introduce the Dynamic Coattention Network (DCN) for question answering. The DCN first fuses co-dependent representations of the question and the document in order to focus on relevant parts of both. Then a dynamic pointing decoder iterates over potential answer spans. This iterative procedure enables the model to recover from initial local maxima corresponding to incorrect answers. On the Stanford question answering dataset, a single DCN model improves the previous state of the art from 71.0% F1 to 75.9%, while a DCN ensemble obtains 80.4% F1.",TRUE,research problem
R132,Computer Sciences,R130355,"BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension",S517972,R130356,has research problem,R2061,Question Answering,"We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.",TRUE,research problem
R132,Computer Sciences,R130368,Deep contextualized word representations,S518179,R130376,has research problem,R2061,Question Answering,"We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.",TRUE,research problem
R132,Computer Sciences,R130420,Simple and Effective Multi-Paragraph Reading Comprehension,S518705,R130425,has research problem,R2061,Question Answering,"We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. Our proposed solution trains models to produce well calibrated confidence scores for their results on individual paragraphs. We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall, we are able to achieve a score of 71.3 F1 on the web portion of TriviaQA, a large improvement from the 56.7 F1 of the previous best system.",TRUE,research problem
R132,Computer Sciences,R130429,Dynamic Integration of Background Knowledge in Neural NLU Systems,S518723,R130430,has research problem,R2061,Question Answering,"Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite background knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit background knowledge in NLU models. A new task-agnostic reading module provides refined word representations to a task-specific NLU architecture by processing background knowledge in the form of free-text statements, together with the task-specific inputs. Strong performance on the tasks of document question answering (DQA) and recognizing textual entailment (RTE) demonstrate the effectiveness and flexibility of our approach. Analysis shows that our models learn to exploit knowledge selectively and in a semantically appropriate way.",TRUE,research problem
R132,Computer Sciences,R130434,MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension,S518777,R130447,has research problem,R2061,Question Answering,"Machine comprehension (MC) style question answering is a representative problem in natural language processing. Previous methods rarely spend time on the improvement of encoding layer, especially the embedding of syntactic information and named entities of the words, which are very crucial to the quality of encoding. Moreover, existing attention methods represent each query word as a vector or use a single vector to represent the whole query sentence, neither of them can handle the proper weight of the key words in query sentence. In this paper, we introduce a novel neural network architecture called Multi-layer Embedding with Memory Network (MEMEN) for machine reading task. In the encoding layer, we employ classic skip-gram model to the syntactic and semantic information of the words to train a new kind of embedding layer. We also propose a memory network of full-orientation matching of the query and passage to catch more pivotal information. Experiments show that our model has competitive results both from the perspectives of precision and efficiency in Stanford Question Answering Dataset (SQuAD) among all published results and achieves the state-of-the-art results on TriviaQA dataset.",TRUE,research problem
R132,Computer Sciences,R129380,Improving Relation Extraction by Pre-trained Language Representations,S514674,R129387,has research problem,R116569,Relation Extraction,"Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.",TRUE,research problem
R132,Computer Sciences,R129399,A Hierarchical Framework for Relation Extraction with Reinforcement Learning,S514721,R129400,has research problem,R116569,Relation Extraction,"Most existing methods determine relation types only after all the entities have been recognized, thus the interaction between relation types and entity mentions is not fully modeled. This paper presents a novel paradigm to deal with relation extraction by regarding the related entities as the arguments of a relation. We apply a hierarchical reinforcement learning (HRL) framework in this paradigm to enhance the interaction between entity mentions and relation types. The whole extraction process is decomposed into a hierarchy of two-level RL policies for relation detection and entity extraction respectively, so that it is more feasible and natural to deal with overlapping relations. Our model was evaluated on public datasets collected via distant supervision, and results show that it gains better performance than existing methods and is more powerful for extracting overlapping relations.",TRUE,research problem
R132,Computer Sciences,R129468,Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders,S514929,R129469,has research problem,R116569,Relation Extraction,"Named entity recognition and relation extraction are two important fundamental problems. Joint learning algorithms have been proposed to solve both tasks simultaneously, and many of them cast the joint task as a table-filling problem. However, they typically focused on learning a single encoder (usually learning representation in the form of a table) to capture information required for both tasks within the same space. We argue that it can be beneficial to design two distinct encoders to capture such two different types of information in the learning process. In this work, we propose the novel table-sequence encoders, where two different encoders, a table encoder and a sequence encoder, are designed to help each other in the representation learning process. Our experiments confirm the advantages of having two encoders over one encoder. On several standard datasets, our model shows significant improvements over existing approaches.",TRUE,research problem
R132,Computer Sciences,R129488,Span-based Joint Entity and Relation Extraction with Transformer Pre-training,S515033,R129502,has research problem,R116569,Relation Extraction,"We introduce SpERT, an attention model for span-based joint entity and relation extraction. Our key contribution is a light-weight reasoning on BERT embeddings, which features entity recognition and filtering, as well as relation classification with a localized, marker-free context representation. The model is trained using strong within-sentence negative samples, which are efficiently extracted in a single BERT pass. These aspects facilitate a search over all spans in the sentence. In ablation studies, we demonstrate the benefits of pre-training, strong negative sampling and localized context. Our model outperforms prior work by up to 2.6% F1 score on several datasets for joint entity and relation extraction.",TRUE,research problem
R132,Computer Sciences,R129508,Deeper Task-Specificity Improves Joint Entity and Relation Extraction,S515055,R129509,has research problem,R116569,Relation Extraction,"Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models necessitates deciding which and how many parameters should be task-specific, as opposed to shared between tasks. We investigate this issue for the problem of jointly learning named entity recognition (NER) and relation extraction (RE) and propose a novel neural architecture that allows for deeper task-specificity than does prior work. In particular, we introduce additional task-specific bidirectional RNN layers for both the NER and RE tasks and tune the number of shared and task-specific layers separately for different datasets. We achieve state-of-the-art (SOTA) results for both tasks on the ADE dataset; on the CoNLL04 dataset, we achieve SOTA results on the NER task and competitive results on the RE task while using an order of magnitude fewer trainable parameters than the current SOTA architecture. An ablation study confirms the importance of the additional task-specific layers for achieving these results. Our work suggests that previous solutions to joint NER and RE undervalue task-specificity and demonstrates the importance of correctly balancing the number of shared and task-specific parameters for MTL approaches in general.",TRUE,research problem
R132,Computer Sciences,R129518,Neural Metric Learning for Fast End-to-End Relation Extraction,S515097,R129523,has research problem,R116569,Relation Extraction,"Relation extraction (RE) is an indispensable information extraction task in several disciplines. RE models typically assume that named entity recognition (NER) is already performed in a previous step by another independent model. Several recent efforts, under the theme of end-to-end RE, seek to exploit inter-task correlations by modeling both NER and RE tasks jointly. Earlier work in this area commonly reduces the task to a table-filling problem wherein an additional expensive decoding step involving beam search is applied to obtain globally consistent cell labels. In efforts that do not employ table-filling, global optimization in the form of CRFs with Viterbi decoding for the NER component is still necessary for competitive performance. We introduce a novel neural architecture utilizing the table structure, based on repeated applications of 2D convolutions for pooling local dependency and metric-based features, that improves on the state-of-the-art without the need for global optimization. We validate our model on the ADE and CoNLL04 datasets for end-to-end RE and demonstrate ≈1% gain (in F-score) over prior best results with training and testing times that are seven to ten times faster, the latter highly advantageous for time-sensitive end user applications.",TRUE,research problem
R132,Computer Sciences,R129538,Adversarial training for multi-context joint entity and relation extraction,S515157,R129539,has research problem,R116569,Relation Extraction,"Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data. We show how to use AT for the tasks of entity recognition and relation extraction. In particular, we demonstrate that applying AT to a general purpose baseline model for jointly extracting entities and relations, allows improving the state-of-the-art effectiveness on several datasets in different contexts (i.e., news, biomedical, and real estate data) and for different languages (English and Dutch).",TRUE,research problem
R132,Computer Sciences,R129585,"Entity, Relation, and Event Extraction with Contextualized Span Representations",S515315,R129586,has research problem,R116569,Relation Extraction,"We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.",TRUE,research problem
R132,Computer Sciences,R129595,A Frustratingly Easy Approach for Entity and Relation Extraction,S515349,R129596,has research problem,R116569,Relation Extraction,"End-to-end relation extraction aims to identify named entities and extract relations between them. Most recent work models these two subtasks jointly, either by casting them in one structured prediction framework, or performing multi-task learning through shared representations. In this work, we present a simple pipelined approach for entity and relation extraction, and establish the new state-of-the-art on standard benchmarks (ACE04, ACE05 and SciERC), obtaining a 1.7%-2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders. Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model. Through a series of careful examinations, we validate the importance of learning distinct contextual representations for entities and relations, fusing entity information early in the relation model, and incorporating global context. Finally, we also present an efficient approximation to our approach which requires only one pass of both entity and relation encoders at inference time, achieving an 8-16× speedup with a slight reduction in accuracy.",TRUE,research problem
R132,Computer Sciences,R131114,Deep Learning for Detecting Robotic Grasps,S521728,R131115,has research problem,R121349,Robotic Grasping,"We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.",TRUE,research problem
R132,Computer Sciences,R129411,SciBERT: A Pretrained Language Model for Scientific Text,S514894,R129459,has research problem,R125984,Sentence Classification,"Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et. al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",TRUE,research problem
R132,Computer Sciences,R130368,Deep contextualized word representations,S518469,R130403,has research problem,R122620,Sentiment Analysis,"We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pre-trained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pre-trained network is crucial, allowing downstream models to mix different types of semi-supervision signals.",TRUE,research problem
R132,Computer Sciences,R36097,Schema extraction for tabular data on the web,S123573,R36098,has research problem,R36029,Table extraction,"Tabular data is an abundant source of information on the Web, but remains mostly isolated from the latter's interconnections since tables lack links and computer-accessible descriptions of their structure. In other words, the schemas of these tables -- attribute names, values, data types, etc. -- are not explicitly stored as table metadata. Consequently, the structure that these tables contain is not accessible to the crawlers that power search engines and thus not accessible to user search queries. We address this lack of structure with a new method for leveraging the principles of table construction in order to extract table schemas. Discovering the schema by which a table is constructed is achieved by harnessing the similarities and differences of nearby table rows through the use of a novel set of features and a feature processing scheme. The schemas of these data tables are determined using a classification technique based on conditional random fields in combination with a novel feature encoding method called logarithmic binning, which is specifically designed for the data table extraction task. Our method provides considerable improvement over the well-known WebTables schema extraction method. In contrast with previous work that focuses on extracting individual relations, our method excels at correctly interpreting full tables, thereby being capable of handling general tables such as those found in spreadsheets, instead of being restricted to HTML tables as is the case with the WebTables method. We also extract additional schema characteristics, such as row groupings, which are important for supporting information retrieval tasks on tabular data.",TRUE,research problem
R132,Computer Sciences,R134434,Text classification with word embedding regularization and soft similarity measure,S532074,R134435,has research problem,R125884,Text Classification,"Since the seminal work of Mikolov et al., word embeddings have become the preferred word representations for many natural language processing tasks. Document similarity measures extracted from word embeddings, such as the soft cosine measure (SCM) and the Word Mover's Distance (WMD), were reported to achieve state-of-the-art performance on semantic text similarity and text classification. Despite the strong performance of the WMD on text classification and semantic text similarity, its super-cubic average time complexity is impractical. The SCM has quadratic worst-case time complexity, but its performance on text classification has never been compared with the WMD. Recently, two word embedding regularization techniques were shown to reduce storage and memory costs, and to improve training speed, document processing speed, and task performance on word analogy, word similarity, and semantic text similarity. However, the effect of these techniques on text classification has not yet been studied. In our work, we investigate the individual and joint effect of the two word embedding regularization techniques on the document processing speed and the task performance of the SCM and the WMD on text classification. For evaluation, we use the $k$NN classifier and six standard datasets: BBCSPORT, TWITTER, OHSUMED, REUTERS-21578, AMAZON, and 20NEWS. We show 39% average $k$NN test error reduction with regularized word embeddings compared to non-regularized word embeddings. We describe a practical procedure for deriving such regularized embeddings through Cholesky factorization. We also show that the SCM with regularized word embeddings significantly outperforms the WMD on text classification and is over 10,000 times faster.",TRUE,research problem
R132,Computer Sciences,R131782,Text Summarization with Pretrained Encoders,S523782,R131783,has research problem,R124682,Text Summarization,"Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings.",TRUE,research problem
R132,Computer Sciences,R129825,An Effective Approach to Unsupervised Machine Translation,S516122,R129826,has research problem,R117322,Unsupervised Machine Translation,"While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.",TRUE,research problem
R132,Computer Sciences,R129839,Unsupervised Neural Machine Translation with SMT as Posterior Regularization,S516165,R129840,has research problem,R117322,Unsupervised Machine Translation,"Without real bilingual corpus available, unsupervised Neural Machine Translation (NMT) typically requires pseudo parallel data generated with the back-translation method for the model training. However, due to weak supervision, the pseudo data inevitably contain noises and errors that will be accumulated and reinforced in the subsequent training process, leading to bad translation performance. To address this issue, we introduce phrase based Statistic Machine Translation (SMT) models which are robust to noisy data, as posterior regularizations to guide the training of unsupervised NMT models in the iterative back-translation process. Our method starts from SMT models built with pre-trained language models and word-level translation tables inferred from cross-lingual embeddings. Then SMT and NMT models are optimized jointly and boost each other incrementally in a unified EM framework. In this way, (1) the negative effect caused by errors in the iterative back-translation process can be alleviated timely by SMT filtering noises from its phrase tables; meanwhile, (2) NMT can compensate for the deficiency of fluency inherent in SMT. Experiments conducted on en-fr and en-de translation tasks show that our method outperforms the strong baseline and achieves new state-of-the-art unsupervised machine translation performance.",TRUE,research problem
R233,Data Storage Systems,R135474,An Ontology-Based Approach for Curriculum Mapping in Higher Education,S535838,R135476,has research problem,R135481,curriculum mapping,"Programs offered by academic institutions in higher education need to meet specific standards that are established by the appropriate accreditation bodies. Curriculum mapping is an important part of the curriculum management process that is used to document the expected learning outcomes, ensure quality, and align programs and courses with industry standards. Semantic web languages can be used to express and share common agreement about the vocabularies used in the domain under study. In this paper, we present an approach based on ontology for curriculum mapping in higher education. Our proposed approach is focused on the creation of a core curriculum ontology that can support effective knowledge representation and knowledge discovery. The research work presents the case of ontology reuse through the extension of the curriculum ontology to support the creation of micro-credentials. We also present a conceptual framework for knowledge discovery to support various business use case scenarios based on ontology inferencing and querying operations.",TRUE,research problem
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S539321,R136100,has research problem,R136277,framework,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,research problem
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S539326,R136100,has research problem,R136280,goals,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,research problem
R233,Data Storage Systems,R136098,An ontology based modeling framework for design of educational technologies,S539323,R136100,has research problem,R136279,instructional designs,"Abstract Despite rapid progress, most of the educational technologies today lack a strong instructional design knowledge basis leading to questionable quality of instruction. In addition, a major challenge is to customize these educational technologies for a wide range of customizable instructional designs. Ontologies are one of the pertinent mechanisms to represent instructional design in the literature. However, existing approaches do not support modeling of flexible instructional designs. To address this problem, in this paper, we propose an ontology based framework for systematic modeling of different aspects of instructional design knowledge based on domain patterns. As part of the framework, we present ontologies for modeling goals , instructional processes and instructional material . We demonstrate the ontology framework by presenting instances of the ontology for the large scale case study of adult literacy in India (287 million learners spread across 22 Indian Languages), which requires creation of hundreds of similar but varied e Learning Systems based on flexible instructional designs. The implemented framework is available at http://rice.iiit.ac.in and is transferred to National Literacy Mission Authority of Government of India . The proposed framework could be potentially used for modeling instructional design knowledge for school education, vocational skills and beyond.",TRUE,research problem
R135,Databases/Information Systems,R6139,Self-training author name disambiguation for information scarce scenarios,S6513,R6140,has research problem,R6000,Author name disambiguation,"We present a novel 3‐step self‐training method for author name disambiguation—SAND (self‐training associative name disambiguator)—which requires no manual labeling, no parameterization (in real‐world scenarios) and is particularly suitable for the common situation in which only the most basic information about a citation record is available (i.e., author names, and work and venue titles). During the first step, real‐world heuristics on coauthors are able to produce highly pure (although fragmented) clusters. The most representative of these clusters are then selected to serve as training data for the third supervised author assignment step. The third step exploits a state‐of‐the‐art transductive disambiguation method capable of detecting unseen authors not included in any training example and incorporating reliable predictions to the training data. Experiments conducted with standard public collections, using the minimum set of attributes present in a citation, demonstrate that our proposed method outperforms all representative unsupervised author grouping disambiguation methods and is very competitive with fully supervised author assignment methods. Thus, different from other bootstrapping methods that explore privileged, hard to obtain information such as self‐citations and personal information, our proposed method produces topnotch performance with no (manual) training data or parameterization and in the presence of scarce information.",TRUE,research problem
R135,Databases/Information Systems,R107637,Scalable Methods for Measuring the Connectivity and Quality of Large Numbers of Linked Datasets,S489874,R107639,has research problem,R107644,Data Quality,"Although the ultimate objective of Linked Data is linking and integration, it is not currently evident how connected the current Linked Open Data (LOD) cloud is. In this article, we focus on methods, supported by special indexes and algorithms, for performing measurements related to the connectivity of more than two datasets that are useful in various tasks including (a) Dataset Discovery and Selection; (b) Object Coreference, i.e., for obtaining complete information about a set of entities, including provenance information; (c) Data Quality Assessment and Improvement, i.e., for assessing the connectivity between any set of datasets and monitoring their evolution over time, as well as for estimating data veracity; (d) Dataset Visualizations; and various other tasks. Since it would be prohibitively expensive to perform all these measurements in a naïve way, in this article, we introduce indexes (and their construction algorithms) that can speed up such tasks. In brief, we introduce (i) a namespace-based prefix index, (ii) a sameAs catalog for computing the symmetric and transitive closure of the owl:sameAs relationships encountered in the datasets, (iii) a semantics-aware element index (that exploits the aforementioned indexes), and, finally, (iv) two lattice-based incremental algorithms for speeding up the computation of the intersection of URIs of any set of datasets. For enhancing scalability, we propose parallel index construction algorithms and parallel lattice-based incremental algorithms, we evaluate the achieved speedup using either a single machine or a cluster of machines, and we provide insights regarding the factors that affect efficiency. Finally, we report measurements about the connectivity of the (billion triples-sized) LOD cloud that have never been carried out so far.",TRUE,research problem
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S539315,R135479,has research problem,R136274,educational resources," Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ",TRUE,research problem
R135,Databases/Information Systems,R25018,A survey of current Link Discovery frameworks,S74155,R25049,has research problem,R25025,Link Discovery,"Links build the backbone of the Linked Data Cloud. With the steady growth in size of datasets comes an increased need for end users to know which frameworks to use for deriving links between datasets. In this survey, we comparatively evaluate current Link Discovery tools and frameworks. For this purpose, we outline general requirements and derive a generic architecture of Link Discovery frameworks. Based on this generic architecture, we study and compare the features of state-ofthe-art linking frameworks. We also analyze reported performance evaluations for the different frameworks. Finally, we derive insights pertaining to possible future developments in the domain of Link Discovery.",TRUE,research problem
R135,Databases/Information Systems,R135477,A learning object ontology repository to support annotation and discovery of educational resources using semantic thesauri,S539316,R135479,has research problem,R136275,metadata description," Open educational resources are currently becoming increasingly available from a multitude of sources and are consequently annotated in many diverse ways. Interoperability concerns that naturally arise can often be resolved through the semantification of metadata descriptions, while at the same time strengthening the knowledge value of resources. SKOS can be a solid linking point offering a standard vocabulary for thematic descriptions, by referencing semantic thesauri. We propose the enhancement and maintenance of educational resources’ metadata in the form of learning object ontologies and introduce the notion of a learning object ontology repository that can help towards their publication, discovery and reuse. At the same time, linking to thesauri datasets and contextualized sources interrelates learning objects with linked data and exposes them to the Web of Data. We build a set of extensions and workflows on top of contemporary ontology management tools, such as WebProtégé, that can make it suitable as a learning object ontology repository. The proposed approach and implementation can help libraries and universities in discovering, managing and incorporating open educational resources and enhancing current curricula. ",TRUE,research problem
R135,Databases/Information Systems,R2047,Capturing Knowledge in Semantically-typed Relational Patterns to Enhance Relation Linking,S2076,R2060,has research problem,R2061,Question Answering,"Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.",TRUE,research problem
R135,Databases/Information Systems,R2047,Capturing Knowledge in Semantically-typed Relational Patterns to Enhance Relation Linking,S2074,R2058,has research problem,R2059,Relation linking,"Transforming natural language questions into formal queries is an integral task in Question Answering (QA) systems. QA systems built on knowledge graphs like DBpedia, require a step after natural language processing for linking words, specifically including named entities and relations, to their corresponding entities in a knowledge graph. To achieve this task, several approaches rely on background knowledge bases containing semantically-typed relations, e.g., PATTY, for an extra disambiguation step. Two major factors may affect the performance of relation linking approaches whenever background knowledge bases are accessed: a) limited availability of such semantic knowledge sources, and b) lack of a systematic approach on how to maximize the benefits of the collected knowledge. We tackle this problem and devise SIBKB, a semantic-based index able to capture knowledge encoded on background knowledge bases like PATTY. SIBKB represents a background knowledge base as a bi-partite and a dynamic index over the relation patterns included in the knowledge base. Moreover, we develop a relation linking component able to exploit SIBKB features. The benefits of SIBKB are empirically studied on existing QA benchmarks and observed results suggest that SIBKB is able to enhance the accuracy of relation linking by up to three times.",TRUE,research problem
R135,Databases/Information Systems,R77123,Heuristics-based query optimisation for SPARQL,S535786,R135463,has research problem,R75732,SPARQL query optimization,"Query optimization in RDF Stores is a challenging problem as SPARQL queries typically contain many more joins than equivalent relational plans, and hence lead to a large join order search space. In such cases, cost-based query optimization often is not possible. One practical reason for this is that statistics typically are missing in web scale setting such as the Linked Open Datasets (LOD). The more profound reason is that due to the absence of schematic structure in RDF, join-hit ratio estimation requires complicated forms of correlated join statistics; and currently there are no methods to identify the relevant correlations beforehand. For this reason, the use of good heuristics is essential in SPARQL query optimization, even in the case that are partially used with cost-based statistics (i.e., hybrid query optimization). In this paper we describe a set of useful heuristics for SPARQL query optimizers. We present these in the context of a new Heuristic SPARQL Planner (HSP) that is capable of exploiting the syntactic and the structural variations of the triple patterns in a SPARQL query in order to choose an execution plan without the need of any cost model. For this, we define the variable graph and we show a reduction of the SPARQL query optimization problem to the maximum weight independent set problem. We implemented our planner on top of the MonetDB open source column-store and evaluated its effectiveness against the state-of-the-art RDF-3X engine as well as comparing the plan quality with a relational (SQL) equivalent of the benchmarks.",TRUE,research problem
R142,Earth Sciences,R143827,A short survey of hyperspectral remote sensing applications in agriculture,S575672,R143829,has research problem,R143793,Application of Hyperspectral remote sensing to Agriculture,"Hyperspectral sensors are devices that acquire images over hundreds of spectral bands, thereby enabling the extraction of spectral signatures for objects or materials observed. Hyperspectral remote sensing has been used over a wide range of applications, such as agriculture, forestry, geology, ecological monitoring and disaster monitoring. In this paper, the specific application of hyperspectral remote sensing to agriculture is examined. The technological development of agricultural methods is of critical importance as the world's population is anticipated to continuously rise much beyond the current number of 7 billion. One area upon which hyperspectral sensing can yield considerable impact is that of precision agriculture - the use of observations to optimize the use of resources and management of farming practices. For example, hyperspectral image processing is used in the monitoring of plant diseases, insect pests and invasive plant species; the estimation of crop yield; and the fine classification of crop distributions. This paper also presents a detailed overview of hyperspectral data processing techniques and suggestions for advancing the agricultural applications of hyperspectral technologies in Turkey.",TRUE,research problem
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717743,R187531,has research problem,R187527,child,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,research problem
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717656,R187523,has research problem,R187516,childcare,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,research problem
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717739,R187531,has research problem,R187504,COVID-19 pandemic,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,research problem
R302,Economics,R34965,Enterprise resource planning: An integrative review,S122224,R34967,has research problem,R34968,Enterprise resource planning,"Enterprise resource planning (ERP) system solutions are currently in high demand by both manufacturing and service organisations because they provide a tightly integrated solution to an organisation's information system needs. During the last decade, ERP systems have received a significant amount of attention from researchers and practitioners from a variety of functional disciplines. In this paper, a comprehensive review of the research literature (1990‐2003) concerning ERP systems is presented. The literature is further classified and the major outcomes of each study are addressed and analysed. Following a comprehensive review of the literature, proposals for future research are formulated to identify topics where fruitful opportunities exist.",TRUE,research problem
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717744,R187531,has research problem,R5105,gender,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,research problem
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717657,R187523,has research problem,R187517,housework,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,research problem
R302,Economics,R182241,COVID-19 Disruptions Disproportionately Affect Female Academics,S717741,R187531,has research problem,R187506,research,"The rapid spread of the COVID-19 pandemic and subsequent countermeasures, such as school closures, the shift to working from home, and social distancing are disrupting economic activity around the world. As with other major economic shocks, there are winners and losers, leading to increased inequality across certain groups. In this project, we investigate the effects of COVID-19 disruptions on the gender gap in academia. We administer a global survey to a broad range of academics across various disciplines to collect nuanced data on the respondents’ circumstances, such as a spouse’s employment, the number and ages of children, and time use. We find that female academics, particularly those who have children, report a disproportionate reduction in time dedicated to research relative to what comparable men and women without children experience. Both men and women report substantial increases in childcare and housework burdens, but women experienced significantly larger increases than men did.",TRUE,research problem
R32,Environmental Health,R76029,Towards Consistent Data Representation in the IoT Healthcare Landscape,S348965,R76031,has research problem,R76287,consistently represent health and fitness data from heterogeneous IoT sources,"Nowadays, the enormous volume of health and fitness data gathered from IoT wearable devices offers favourable opportunities to the research community. For instance, it can be exploited using sophisticated data analysis techniques, such as automatic reasoning, to find patterns and, extract information and new knowledge in order to enhance decision-making and deliver better healthcare. However, due to the high heterogeneity of data representation formats, the IoT healthcare landscape is characterised by an ubiquitous presence of data silos which prevents users and clinicians from obtaining a consistent representation of the whole knowledge. Semantic web technologies, such as ontologies and inference rules, have been shown as a promising way for the integration and exploitation of data from heterogeneous sources. In this paper, we present a semantic data model useful to: (1) consistently represent health and fitness data from heterogeneous IoT sources; (2) integrate and exchange them; and (3) enable automatic reasoning by inference engines.",TRUE,research problem
R32,Environmental Health,R76020,HealthIoT Ontology for Data Semantic Representation and Interpretation Obtained from Medical Connected Objects,S348104,R76022,has research problem,R76038,semantic interoperability of the medical connected objects and their data,"Internet of Things (IoT) covers a variety of applications including the Healthcare field. Consequently, medical objects become connected to each other with the purpose to share and exchange health data. These medical connected objects raise issues on how to ensure the analysis, interpretation and semantic interoperability of the extensive obtained health data with the purpose to make an appropriate decision. This paper proposes a HealthIoT ontology for representing the semantic interoperability of the medical connected objects and their data; while an algorithm alleviates the analysis of the detected vital signs and the decision-making of the doctor. The execution of this algorithm needs the definition of several SWRL rules (Semantic Web Rule Language).",TRUE,research problem
R32,Environmental Health,R76026,Design and Implementation of e-Health System Based on Semantic Sensor Network Using IETF YANG,S348970,R76028,has research problem,R76288,semantic interoperability support for the e-Health system,"Recently, healthcare services can be delivered effectively to patients anytime and anywhere using e-Health systems. e-Health systems are developed through Information and Communication Technologies (ICT) that involve sensors, mobiles, and web-based applications for the delivery of healthcare services and information. Remote healthcare is an important purpose of the e-Health system. Usually, the eHealth system includes heterogeneous sensors from diverse manufacturers producing data in different formats. Device interoperability and data normalization is a challenging task that needs research attention. Several solutions are proposed in the literature based on manual interpretation through explicit programming. However, programmatically implementing the interpretation of the data sender and data receiver in the e-Health system for the data transmission is counterproductive as modification will be required for each new device added into the system. In this paper, an e-Health system with the Semantic Sensor Network (SSN) is proposed to address the device interoperability issue. In the proposed system, we have used IETF YANG for modeling the semantic e-Health data to represent the information of e-Health sensors. This modeling scheme helps in provisioning semantic interoperability between devices and expressing the sensing data in a user-friendly manner. For this purpose, we have developed an ontology for e-Health data that supports different styles of data formats. The ontology is defined in YANG for provisioning semantic interpretation of sensing data in the system by constructing meta-models of e-Health sensors. The proposed approach assists in the auto-configuration of eHealth sensors and querying the sensor network with semantic interoperability support for the e-Health system.",TRUE,research problem
R32,Environmental Health,R76032,Meaningful Integration of Data from Heterogeneous Health Services and Home Environment Based on Ontology,S348928,R76034,has research problem,R76283,Semantic Web technologies to integrate the health data and home environment data,"The development of electronic health records, wearable devices, health applications and Internet of Things (IoT)-empowered smart homes is promoting various applications. It also makes health self-management much more feasible, which can partially mitigate one of the challenges that the current healthcare system is facing. Effective and convenient self-management of health requires the collaborative use of health data and home environment data from different services, devices, and even open data on the Web. Although health data interoperability standards including HL7 Fast Healthcare Interoperability Resources (FHIR) and IoT ontology including Semantic Sensor Network (SSN) have been developed and promoted, it is impossible for all the different categories of services to adopt the same standard in the near future. This study presents a method that applies Semantic Web technologies to integrate the health data and home environment data from heterogeneously built services and devices. We propose a Web Ontology Language (OWL)-based integration ontology that models health data from HL7 FHIR standard implemented services, normal Web services and Web of Things (WoT) services and Linked Data together with home environment data from formal ontology-described WoT services. It works on the resource integration layer of the layered integration architecture. An example use case with a prototype implementation shows that the proposed method successfully integrates the health data and home environment data into a resource graph. The integrated data are annotated with semantics and ontological links, which make them machine-understandable and cross-system reusable.",TRUE,research problem
R54,Environmental Microbiology and Microbial Ecology,R78291,The Effect of Hydroxycinnamic Acids on the Microbial Mineralisation of Phenanthrene in Soil,S354090,R78293,has research problem,R78296,"The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated.","The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C–phenanthrenemineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 µg kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C–phenanthrene mineralisation (P 200 µg kg-1. Depending on its concentrationin soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical–microbe–organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH–contaminated soils.",TRUE,research problem
R54,Environmental Microbiology and Microbial Ecology,R78283,The Effect of Rhizosphere Soil and Root Tissues Amendment on Microbial Mineralisation of Target 14C–Hydrocarbons in Contaminated Soil,S354073,R78285,has research problem,R78288,The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. ,"The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C–hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre–incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. There were significant increases (P < 0.001) in the extents of 14C–naphthalene and 14C–phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C–hexadecane or 14C–octacosane by indigenous microbial communities. Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil–organic contaminants pre–exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon–degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.",TRUE,research problem
R145,Environmental Sciences,R78114,Petroleum Exploration and Production: Past and Present Environmental Issues in the Nigeria’s Niger Delta,S353881,R78117,has research problem,R78207,"Activities associated with petroleum exploration, development and production operations have local detrimental and significant impacts on the atmosphere, soils and sediments, surface and groundwater, marine environment and terrestrial ecosystems in the Niger Delta.","Petroleum exploration and production in the Nigeria’s Niger Delta region and export of oil and gas resources by the petroleum sector has substantially improved the nation’s economy over the past five decades. However, activities associated with petroleum exploration, development and production operations have local detrimental and significant impacts on the atmosphere, soils and sediments, surface and groundwater, marine environment and terrestrial ecosystems in the Niger Delta. Discharges of petroleum hydrocarbon and petroleum–derived waste streams have caused environmental pollution, adverse human health effects, socio–economic problems and degradation of host communities in the 9 oil–producing states in the Niger Delta region. Many approaches have been developed for the management of environmental impacts of petroleum production–related activities and several environmental laws have been institutionalized to regulate the Nigerian petroleum industry. However, the existing statutory laws and regulations for environmental protection appear to be grossly inadequate and some of the multinational oil companies operating in the Niger Delta region have failed to adopt sustainable practices to prevent environmental pollution. This review examines the implications of multinational oil companies operations and further highlights some of the past and present environmental issues associated with petroleum exploitation and production in the Nigeria’s Niger Delta. Although effective understanding of petroleum production and associated environmental degradation is importance for developing management strategies, there is a need for more multidisciplinary approaches for sustainable risk mitigation and effective environmental protection of the oil–producing host communities in the Niger Delta.",TRUE,research problem
R145,Environmental Sciences,R8034,An Overview of CMIP5 and the Experiment Design,S12093,R8035,has research problem,R8037,climate change,"The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the- art multimodel dataset designed to advance our knowledge of climate variability and climate change. Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes “long term” simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere–ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the longterm experiments, CMIP5 calls for an entirely new suite of “near term” simulations focusing on recent decades...",TRUE,research problem
R145,Environmental Sciences,R8061,Wind extremes in the North Sea Basin under climate change: An ensemble study of 12 CMIP5 GCMs: WIND EXTREMES IN THE NORTH SEA IN CMIP5,S12141,R8062,has research problem,R8065,climate change,"Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate‐change‐related upward trend in storminess in the North Sea area.",TRUE,research problem
R145,Environmental Sciences,R8048,Future changes of wind energy potentials over Europe in a large CMIP5 multi-model ensemble: FUTURE CHANGES OF WIND ENERGY OVER EUROPE IN A CMIP5 ENSEMBLE,S12122,R8049,has research problem,R8054,Europe,"A statistical‐dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO‐CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra‐annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter‐annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi‐model ensembles are needed to provide a better quantification and understanding of the future changes.",TRUE,research problem
R145,Environmental Sciences,R8034,An Overview of CMIP5 and the Experiment Design,S12095,R8035,has research problem,R8039,experiment design,"The fifth phase of the Coupled Model Intercomparison Project (CMIP5) will produce a state-of-the- art multimodel dataset designed to advance our knowledge of climate variability and climate change. Researchers worldwide are analyzing the model output and will produce results likely to underlie the forthcoming Fifth Assessment Report by the Intergovernmental Panel on Climate Change. Unprecedented in scale and attracting interest from all major climate modeling groups, CMIP5 includes “long term” simulations of twentieth-century climate and projections for the twenty-first century and beyond. Conventional atmosphere–ocean global climate models and Earth system models of intermediate complexity are for the first time being joined by more recently developed Earth system models under an experiment design that allows both types of models to be compared to observations on an equal footing. Besides the longterm experiments, CMIP5 calls for an entirely new suite of “near term” simulations focusing on recent decades...",TRUE,research problem
R145,Environmental Sciences,R8048,Future changes of wind energy potentials over Europe in a large CMIP5 multi-model ensemble: FUTURE CHANGES OF WIND ENERGY OVER EUROPE IN A CMIP5 ENSEMBLE,S12123,R8049,has research problem,R8055,future changes,"A statistical‐dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO‐CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra‐annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter‐annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi‐model ensembles are needed to provide a better quantification and understanding of the future changes.",TRUE,research problem
R145,Environmental Sciences,R8048,Future changes of wind energy potentials over Europe in a large CMIP5 multi-model ensemble: FUTURE CHANGES OF WIND ENERGY OVER EUROPE IN A CMIP5 ENSEMBLE,S12121,R8049,has research problem,R8053,multi‐model ensemble,"A statistical‐dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO‐CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra‐annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter‐annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi‐model ensembles are needed to provide a better quantification and understanding of the future changes.",TRUE,research problem
R145,Environmental Sciences,R8061,Wind extremes in the North Sea Basin under climate change: An ensemble study of 12 CMIP5 GCMs: WIND EXTREMES IN THE NORTH SEA IN CMIP5,S12142,R8062,has research problem,R8066,North sea,"Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate‐change‐related upward trend in storminess in the North Sea area.",TRUE,research problem
R145,Environmental Sciences,R78209,Role of Plants and Microbes in Bioremediation of Petroleum Hydrocarbons Contaminated Soils,S353896,R78211,has research problem,R78212,"Petroleum hydrocarbons contamination of soil, sediments and marine environment associated with the inadvertent discharges of petroleum–derived chemical wastes and petroleum hydrocarbons associated with spillage and other sources into the environment often pose harmful effects on human health and the natural environment, and have negative socio–economic impacts in the oil–producing host communities. ","Petroleum hydrocarbons contamination of soil, sediments and marine environment associated with the inadvertent discharges of petroleum–derived chemical wastes and petroleum hydrocarbons associated with spillage and other sources into the environment often pose harmful effects on human health and the natural environment, and have negative socio–economic impacts in the oil–producing host communities. In practice, plants and microbes have played a major role in microbial transformation and growth–linked mineralization of petroleum hydrocarbons in contaminated soils and/or sediments over the past years. Bioremediation strategies has been recognized as an environmental friendly and cost–effective alternative in comparison with the traditional physico-chemical approaches for the restoration and reclamation of contaminated sites. The success of any plant–based remediation strategy depends on the interaction of plants with rhizospheric microbial populations in the surrounding soil medium and the organic contaminant. Effective understanding of the fate and behaviour of organic contaminants in the soil can help determine the persistence of the contaminant in the terrestrial environment, promote the success of any bioremediation approach and help develop a high–level of risks mitigation strategies. In this review paper, we provide a clear insight into the role of plants and microbes in the microbial degradation of petroleum hydrocarbons in contaminated soil that have emerged from the growing body of bioremediation research and its applications in practice. In addition, plant–microbe interactions have been discussed with respect to biodegradation of petroleum hydrocarbons and these could provide a better understanding of some important factors necessary for development of in situ bioremediation strategies for risks mitigation in petroleum hydrocarbon–contaminated soil.",TRUE,research problem
R145,Environmental Sciences,R8048,Future changes of wind energy potentials over Europe in a large CMIP5 multi-model ensemble: FUTURE CHANGES OF WIND ENERGY OVER EUROPE IN A CMIP5 ENSEMBLE,S12120,R8049,has research problem,R8052,wind energy potentials,"A statistical‐dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO‐CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra‐annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter‐annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. However, due to the uncertainties detected in this research, further investigations with multi‐model ensembles are needed to provide a better quantification and understanding of the future changes.",TRUE,research problem
R445,Ethics and Political Philosophy,R41454,Individual homogenization in large-scale systems: on the politics of computer and social architectures,S131124,R41458,has research problem,R41461,Heterogeneity,"Abstract One determining characteristic of contemporary sociopolitical systems is their power over increasingly large and diverse populations. This raises questions about power relations between heterogeneous individuals and increasingly dominant and homogenizing system objectives. This article crosses epistemic boundaries by integrating computer engineering and a historical-philosophical approach making the general organization of individuals within large-scale systems and corresponding individual homogenization intelligible. From a versatile archeological-genealogical perspective, an analysis of computer and social architectures is conducted that reinterprets Foucault’s disciplines and political anatomy to establish the notion of politics for a purely technical system. This permits an understanding of system organization as modern technology with application to technical and social systems alike. Connecting to Heidegger’s notions of the enframing (Gestell) and a more primal truth (anfänglicheren Wahrheit), the recognition of politics in differently developing systems then challenges the immutability of contemporary organization. Following this critique of modernity and within the conceptualization of system organization, Derrida’s democracy to come (à venir) is then reformulated more abstractly as organizations to come. Through the integration of the discussed concepts, the framework of Large-Scale Systems Composed of Homogeneous Individuals (LSSCHI) is proposed, problematizing the relationships between individuals, structure, activity, and power within large-scale systems. The LSSCHI framework highlights the conflict of homogenizing system-level objectives and individual heterogeneity, and outlines power relations and mechanisms of control shared across different social and technical systems.",TRUE,research problem
R445,Ethics and Political Philosophy,R41454,Individual homogenization in large-scale systems: on the politics of computer and social architectures,S131123,R41458,has research problem,R41460,System organization,"Abstract One determining characteristic of contemporary sociopolitical systems is their power over increasingly large and diverse populations. This raises questions about power relations between heterogeneous individuals and increasingly dominant and homogenizing system objectives. This article crosses epistemic boundaries by integrating computer engineering and a historical-philosophical approach making the general organization of individuals within large-scale systems and corresponding individual homogenization intelligible. From a versatile archeological-genealogical perspective, an analysis of computer and social architectures is conducted that reinterprets Foucault’s disciplines and political anatomy to establish the notion of politics for a purely technical system. This permits an understanding of system organization as modern technology with application to technical and social systems alike. Connecting to Heidegger’s notions of the enframing (Gestell) and a more primal truth (anfänglicheren Wahrheit), the recognition of politics in differently developing systems then challenges the immutability of contemporary organization. Following this critique of modernity and within the conceptualization of system organization, Derrida’s democracy to come (à venir) is then reformulated more abstractly as organizations to come. Through the integration of the discussed concepts, the framework of Large-Scale Systems Composed of Homogeneous Individuals (LSSCHI) is proposed, problematizing the relationships between individuals, structure, activity, and power within large-scale systems. The LSSCHI framework highlights the conflict of homogenizing system-level objectives and individual heterogeneity, and outlines power relations and mechanisms of control shared across different social and technical systems.",TRUE,research problem
R38,Genomics,R50397,The application of RNA sequencing for the diagnosis and genomic classification of pediatric acute lymphoblastic leukemia,S154153,R50404,has research problem,R50405,acute lymphoblastic leukemia (ALL),"Acute lymphoblastic leukemia (ALL) is the most common childhood malignancy, and implementation of risk-adapted therapy has been instrumental in the dramatic improvements in clinical outcomes. A key to risk-adapted therapies includes the identification of genomic features of individual tumors, including chromosome number (for hyper- and hypodiploidy) and gene fusions, notably ETV6-RUNX1, TCF3-PBX1, and BCR-ABL1 in B-cell ALL (B-ALL). RNA-sequencing (RNA-seq) of large ALL cohorts has expanded the number of recurrent gene fusions recognized as drivers in ALL, and identification of these new entities will contribute to refining ALL risk stratification. We used RNA-seq on 126 ALL patients from our clinical service to test the utility of including RNA-seq in standard-of-care diagnostic pipelines to detect gene rearrangements and IKZF1 deletions. RNA-seq identified 86% of rearrangements detected by standard-of-care diagnostics. KMT2A (MLL) rearrangements, although usually identified, were the most commonly missed by RNA-seq as a result of low expression. RNA-seq identified rearrangements that were not detected by standard-of-care testing in 9 patients. These were found in patients who were not classifiable using standard molecular assessment. We developed an approach to detect the most common IKZF1 deletion from RNA-seq data and validated this using an RQ-PCR assay. We applied an expression classifier to identify Philadelphia chromosome-like B-ALL patients. T-ALL proved a rich source of novel gene fusions, which have clinical implications or provide insights into disease biology. Our experience shows that RNA-seq can be implemented within an individual clinical service to enhance the current molecular diagnostic risk classification of ALL.",TRUE,research problem
R136,Graphics,R6515,Formal Linked Data Visualization Model,S77687,R25679,has research problem,R25667,Data Visualization,"Recently, the amount of semantic data available in the Web has increased dramatically. The potential of this vast amount of data is enormous but in most cases it is difficult for users to explore and use this data, especially for those without experience with Semantic Web technologies. Applying information visualization techniques to the Semantic Web helps users to easily explore large amounts of data and interact with them. In this article we devise a formal Linked Data Visualization Model (LDVM), which allows to dynamically connect data with visualizations. We report about our implementation of the LDVM comprising a library of generic visualizations that enable both users and data analysts to get an overview on, visualize and explore the Data Web and perform detailed analyzes on Linked Data.",TRUE,research problem
R93,Human and Clinical Nutrition,R78237,Comparative Assessment of Iodine Content of Commercial Table Salt Brands Available in Nigerian Market,S353953,R78239,has research problem,R78244,Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide.,"Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.",TRUE,research problem
R278,Information Science,R41467,Matching the Blanks: Distributional Similarity for Relation Learning,S131215,R41493,has research problem,R41485,build task agnostic relation representations solely from entity-linked text,"General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris’ distributional hypothesis to relations, as well as recent advances in learning text representations (specifically, BERT), to build task agnostic relation representations solely from entity-linked text. We show that these representations significantly outperform previous work on exemplar based relation extraction (FewRel) even without using any of that task’s training data. We also show that models initialized with our task agnostic representations, and then tuned on supervised relation extraction datasets, significantly outperform the previous methods on SemEval 2010 Task 8, KBP37, and TACRED",TRUE,research problem
R278,Information Science,R44210,Semantic relation classification via bidirectional LSTM networks with entity-aware attention using latent entity typing,S134576,R44211,has research problem,R44226,Classifying semantic relations between entity pairs in sentences,"Classifying semantic relations between entity pairs in sentences is an important task in natural language processing (NLP). Most previous models applied to relation classification rely on high-level lexical and syntactic features obtained by NLP tools such as WordNet, the dependency parser, part-of-speech (POS) tagger, and named entity recognizers (NER). In addition, state-of-the-art neural models based on attention mechanisms do not fully utilize information related to the entity, which may be the most crucial feature for relation classification. To address these issues, we propose a novel end-to-end recurrent neural model that incorporates an entity-aware attention mechanism with a latent entity typing (LET) method. Our model not only effectively utilizes entities and their latent types as features, but also builds word representations by applying self-attention based on symmetrical similarity of a sentence itself. Moreover, the model is interpretable by visualizing applied attention mechanisms. Experimental results obtained with the SemEval-2010 Task 8 dataset, which is one of the most popular relation classification tasks, demonstrate that our model outperforms existing state-of-the-art models without any high-level features.",TRUE,research problem
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538562,R136021,has research problem,R136028,Cold-start Problem,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,research problem
R278,Information Science,R3000,A model for contextual data sharing in smartphone applications,S3008,R3005,has research problem,R3006,contextual data,"Purpose The purpose of this paper is to introduce a model for identifying, storing and sharing contextual information across smartphone apps that uses the native device services. The authors present the idea of using user input and interaction within an app as contextual information, and how each app can identify and store contextual information. Design/methodology/approach Contexts are modeled as hierarchical objects that can be stored and shared by applications using native mechanisms. A proof-of-concept implementation of the model for the Android platform demonstrates contexts modelled as hierarchical objects stored and shared by applications using native mechanisms. Findings The model was found to be practically viable by implemented sample apps that share context and through a performance analysis of the system. Practical implications The contextual data-sharing model enables the creation of smart apps and services without being tied to any vendor’s cloud services. Originality/value This paper introduces a new approach for sharing context in smartphone applications that does not require cloud services.",TRUE,research problem
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495075,R108654,has research problem,R68946,Data Publishing,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,research problem
R278,Information Science,R109205,Directing the development of constraint languages by checking constraints on rdf data,S498286,R109207,has research problem,R107644,Data Quality,"For research institutes, data libraries, and data archives, validating RDF data according to predefined constraints is a much sought-after feature, particularly as this is taken for granted in the XML world. Based on our work in two international working groups on RDF validation and jointly identified requirements to formulate constraints and validate RDF data, we have published 81 types of constraints that are required by various stakeholders for data applications. In this paper, we evaluate the usability of identified constraint types for assessing RDF data quality by (1) collecting and classifying 115 constraints on vocabularies commonly used in the social, behavioral, and economic sciences, either from the vocabularies themselves or from domain experts, and (2) validating 15,694 data sets (4.26 billion triples) of research data against these constraints. We classify each constraint according to (1) the severity of occurring violations and (2) based on which types of constraint languages are able to express its constraint type. Based on the large-scale evaluation, we formulate several findings to direct the further development of constraint languages.",TRUE,research problem
R278,Information Science,R38544,Estimating relative depth in single images via rankboost,S126412,R38546,has research problem,R38551,Depth Estimation,"In this paper, we present a novel approach to estimate the relative depth of regions in monocular images. There are several contributions. First, the task of monocular depth estimation is considered as a learning-to-rank problem which offers several advantages compared to regression approaches. Second, monocular depth clues of human perception are modeled in a systematic manner. Third, we show that these depth clues can be modeled and integrated appropriately in a Rankboost framework. For this purpose, a space-efficient version of Rankboost is derived that makes it applicable to rank a large number of objects, as posed by the given problem. Finally, the monocular depth clues are combined with results from a deep learning approach. Experimental results show that the error rate is reduced by adding the monocular features while outperforming state-of-the-art systems.",TRUE,research problem
R278,Information Science,R49502,Pairwise Multi-Class Document Classification for Semantic Relations between Wikipedia Articles,S147791,R49504,has research problem,R49505,Document classification,"Many digital libraries recommend literature to their users considering the similarity between a query document and their repository. However, they often fail to distinguish what is the relationship that makes two documents alike. In this paper, we model the problem of finding the relationship between two documents as a pairwise document classification task. To find the semantic relation between documents, we apply a series of techniques, such as GloVe, Paragraph Vectors, BERT, and XLNet under different configurations (e.g., sequence length, vector concatenation scheme), including a Siamese architecture for the Transformer-based systems. We perform our experiments on a newly proposed dataset of 32,168 Wikipedia article pairs and Wikidata properties that define the semantic document relations. Our results show vanilla BERT as the best performing system with an F1-score of 0.93, which we manually examine to better understand its applicability to other domains. Our findings suggest that classifying semantic relations between documents is a solvable task and motivates the development of a recommender system based on the evaluated techniques. The discussions in this paper serve as first steps in the exploration of documents through SPARQL-like queries such that one could find documents that are similar in one aspect but dissimilar in another.",TRUE,research problem
R278,Information Science,R41526,Enriching pre-trained language model with entity information for relation classification,S131320,R41528,has research problem,R41558,extract relations between entities,"Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.",TRUE,research problem
R278,Information Science,R135842,Considerations for the Conduction and Interpretation of FAIRness Evaluations,S537505,R135851,has research problem,R128971,Fairness,"The FAIR principles were received with broad acceptance in several scientific communities. However, there is still some degree of uncertainty on how they should be implemented. Several self-report questionnaires have been proposed to assess the implementation of the FAIR principles. Moreover, the FAIRmetrics group released 14, general-purpose maturity for representing FAIRness. Initially, these metrics were conducted as open-answer questionnaires. Recently, these metrics have been implemented into a software that can automatically harvest metadata from metadata providers and generate a principle-specific FAIRness evaluation. With so many different approaches for FAIRness evaluations, we believe that further clarification on their limitations and advantages, as well as on their interpretation and interplay should be considered.",TRUE,research problem
R278,Information Science,R41374,Attention Guided Graph Convolutional Networks for Relation Extraction,S131081,R41376,has research problem,R41429,how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees,"Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.",TRUE,research problem
R278,Information Science,R41562,Improving relation extraction by pre-trained language representations,S131375,R41564,has research problem,R41602,"learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task","Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.",TRUE,research problem
R278,Information Science,R46391,A parallel-hierarchical model for machine comprehension on sparse data,S141550,R46393,has research problem,R46405,Machine comprehension,"Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging {\it MCTest} benchmark. Partly because of its limited size, prior work on {\it MCTest} has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for {\it MCTest}, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15\% absolute).",TRUE,research problem
R278,Information Science,R46407,Iterative alternating neural attention for machine reading,S141578,R46409,has research problem,R46405,Machine comprehension,"We propose a novel neural attention architecture to tackle machine comprehension tasks, such as answering Cloze-style queries with respect to a document. Unlike previous models, we do not collapse the query into a single vector, instead we deploy an iterative alternating attention mechanism that allows a fine-grained exploration of both the query and the document. Our model outperforms state-of-the-art baselines in standard machine comprehension benchmarks such as CNN news articles and the Children’s Book Test (CBT) dataset.",TRUE,research problem
R278,Information Science,R46427,Machine comprehension using match-lstm and answer pointer,S141597,R46429,has research problem,R46405,Machine comprehension,"Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.",TRUE,research problem
R278,Information Science,R46427,Machine comprehension using match-lstm and answer pointer,S141598,R46429,has research problem,R46439,Machine comprehension of text,"Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created by humans through crowdsourcing. SQuAD provides a challenging testbed for evaluating machine comprehension algorithms, partly because compared with previous datasets, in SQuAD the answers do not come from a small set of candidate answers and they have variable lengths. We propose an end-to-end neural architecture for the task. The architecture is based on match-LSTM, a model we proposed previously for textual entailment, and Pointer Net, a sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the output tokens to be from the input sequences. We propose two ways of using Pointer Net for our task. Our experiments show that both of our two models substantially outperform the best results obtained by Rajpurkar et al.(2016) using logistic regression and manually crafted features.",TRUE,research problem
R278,Information Science,R38897,Multi-Agent Systems: A Survey,S139149,R45056,has research problem,R38900,Multi-agent systems,"Multi-agent systems (MASs) have received tremendous attention from scholars in different disciplines, including computer science and civil engineering, as a means to solve complex problems by subdividing them into smaller tasks. The individual tasks are allocated to autonomous entities, known as agents. Each agent decides on a proper action to solve the task using multiple inputs, e.g., history of actions, interactions with its neighboring agents, and its goal. The MAS has found multiple applications, including modeling complex systems, smart grids, and computer networks. Despite their wide applicability, there are still a number of challenges faced by MAS, including coordination between agents, security, and task allocation. This survey provides a comprehensive discussion of all aspects of MAS, starting from definitions, features, applications, challenges, and communications to evaluation. A classification on MAS applications and challenges is provided along with references for further studies. We expect this paper to serve as an insightful and comprehensive resource on the MAS for researchers and practitioners in the area.",TRUE,research problem
R278,Information Science,R46346,Robust lexical features for improved neural network named-entity recognition,S141501,R46348,has research problem,R46371,Named-Entity Recognition,"Neural network approaches to Named-Entity Recognition reduce the need for carefully hand-crafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a low-dimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",TRUE,research problem
R278,Information Science,R46346,Robust lexical features for improved neural network named-entity recognition,S141500,R46348,has research problem,R46370,Neural network approaches to Named-Entity Recognition,"Neural network approaches to Named-Entity Recognition reduce the need for carefully hand-crafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a low-dimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",TRUE,research problem
R278,Information Science,R136009,Ontology-Based Personalized Course Recommendation Framework,S538532,R136012,has research problem,R136018,Ontology mapping,"Choosing a higher education course at university is not an easy task for students. A wide range of courses are offered by the individual universities whose delivery mode and entry requirements differ. A personalized recommendation system can be an effective way of suggesting the relevant courses to the prospective students. This paper introduces a novel approach that personalizes course recommendations that will match the individual needs of users. The proposed approach developed a framework of an ontology-based hybrid-filtering system called the ontology-based personalized course recommendation (OPCR). This approach aims to integrate the information from multiple sources based on the hierarchical ontology similarity with a view to enhancing the efficiency and the user satisfaction and to provide students with appropriate recommendations. The OPCR combines collaborative-based filtering with content-based filtering. It also considers familiar related concepts that are evident in the profiles of both the student and the course, determining the similarity between them. Furthermore, OPCR uses an ontology mapping technique, recommending jobs that will be available following the completion of each course. This method can enable students to gain a comprehensive knowledge of courses based on their relevance, using dynamic ontology mapping to link the course profiles and student profiles with job profiles. Results show that a filtering algorithm that uses hierarchically related concepts produces better outcomes compared to a filtering method that considers only keyword similarity. In addition, the quality of the recommendations is improved when the ontology similarity between the items’ and the users’ profiles were utilized. This approach, using a dynamic ontology mapping, is flexible and can be adapted to different domains. The proposed framework can be used to filter the items for both postgraduate courses and items from other domains.",TRUE,research problem
R278,Information Science,R150376,The Pattern of Patterns: What is a pattern in conceptual modeling?,S603120,R150377,has research problem,R150401,ontology-driven conceptual model patterns,"It has been proven that using structured methods to represent the domain reduces human errors in the process of creating models and also in the process of using them. Using modeling patterns is a proven structural method in this regard. A pattern is a generalizable reusable solution to a design problem. Positive effects of using patterns were demonstrated in several experimental studies and explained using theories. However, detailed knowledge about how properties of patterns lead to increased performance in writing and reading conceptual models is currently lacking. This paper proposes a theoretical framework to characterize the properties of ontology-driven conceptual model patterns. The development of such framework is the first step in investigating the effects of pattern properties and devising rules to compose patterns based on well-understood properties.",TRUE,research problem
R278,Information Science,R73196,Persistent Identification of Instruments,S338906,R73205,has research problem,R73161,Persistent Identification,"Instruments play an essential role in creating research data. Given the importance of instruments and associated metadata to the assessment of data quality and data reuse, globally unique, persistent and resolvable identification of instruments is crucial. The Research Data Alliance Working Group Persistent Identification of Instruments (PIDINST) developed a community-driven solution for persistent identification of instruments which we present and discuss in this paper. Based on an analysis of 10 use cases, PIDINST developed a metadata schema and prototyped schema implementation with DataCite and ePIC as representative persistent identifier infrastructures and with HZB (Helmholtz-Zentrum Berlin fur Materialien und Energie) and BODC (British Oceanographic Data Centre) as representative institutional instrument providers. These implementations demonstrate the viability of the proposed solution in practice. Moving forward, PIDINST will further catalyse adoption and consolidate the schema by addressing new stakeholder requirements.",TRUE,research problem
R278,Information Science,R41295,Spanbert: Improving pre-training by representing and predicting spans,S131004,R41297,has research problem,R41359,pre-training method that is designed to better represent and predict spans of text,"We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT large , our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE. 1",TRUE,research problem
R278,Information Science,R108690,Open Science meets Food Modelling: Introducing the Food Modelling Journal (FMJ),S496115,R108916,has research problem,R108925,publishing,"This Editorial describes the rationale, focus, scope and technology behind the newly launched, open access, innovative Food Modelling Journal (FMJ). The Journal is designed to publish those outputs of the research cycle that usually precede the publication of the research article, but have their own value and re-usability potential. Such outputs are methods, models, software and data. The Food Modelling Journal is launched by the AGINFRA+ community and is integrated with the AGINFRA+ Virtual Research Environment (VRE) to facilitate and streamline the authoring, peer review and publication of the manuscripts via the ARPHA Publishing Platform.",TRUE,research problem
R278,Information Science,R58119,A Survey of Recommender Systems Based on Deep Learning,S332120,R58120,has research problem,R69628,Recommender Systems,"In recent years, deep learning’s revolutionary advances in speech recognition, image analysis, and natural language processing have gained significant attention. Deep learning technology has become a hotspot research field in the artificial intelligence and has been applied into recommender system. In contrast to traditional recommendation models, deep learning is able to effectively capture the non-linear and non-trivial user-item relationships and enables the codification of more complex abstractions as data representations in the higher layers. In this paper, we provide a comprehensive review of the related research contents of deep learning-based recommender systems. First, we introduce the basic terminologies and the background concepts of recommender systems and deep learning technology. Second, we describe the main current research on deep learning-based recommender systems. Third, we provide the possible research directions of deep learning-based recommender systems in the future. Finally, concludes this paper.",TRUE,research problem
R278,Information Science,R135998,A Hybrid Knowlegde-Based Approach for Recommending Massive Learning Activities,S538505,R136000,has research problem,R69628,Recommender Systems,"In recent years, the development of recommender systems has attracted increased interest in several domains, especially in e-learning. Massive Open Online Courses have brought a revolution. However, deficiency in support and personalization in this context drive learners to lose their motivation and leave the learning process. To overcome this problem we focus on adapting learning activities to learners' needs using a recommender system.This paper attempts to provide an introduction to different recommender systems for e-learning settings, as well as to present our proposed recommender system for massive learning activities in order to provide learners with the suitable learning activities to follow the learning process and maintain their motivation. We propose a hybrid knowledge-based recommender system based on ontology for recommendation of e-learning activities to learners in the context of MOOCs. In the proposed recommendation approach, ontology is used to model and represent the knowledge about the domain model, learners and learning activities.",TRUE,research problem
R278,Information Science,R136019,Ontology-based E-learning Content Recommender System for Addressing the Pure Cold-start Problem,S538561,R136021,has research problem,R69628,Recommender Systems,"E-learning recommender systems are gaining significance nowadays due to its ability to enhance the learning experience by providing tailor-made services based on learner preferences. A Personalized Learning Environment (PLE) that automatically adapts to learner characteristics such as learning styles and knowledge level can recommend appropriate learning resources that would favor the learning process and improve learning outcomes. The pure cold-start problem is a relevant issue in PLEs, which arises due to the lack of prior information about the new learner in the PLE to create appropriate recommendations. This article introduces a semantic framework based on ontology to address the pure cold-start problem in content recommenders. The ontology encapsulates the domain knowledge about the learners as well as Learning Objects (LOs). The semantic model that we built has been experimented with different combinations of the key learner parameters such as learning style, knowledge level, and background knowledge. The proposed framework utilizes these parameters to build natural learner groups from the learner ontology using SPARQL queries. The ontology holds 480 learners’ data, 468 annotated learning objects with 5,600 learner ratings. A multivariate k-means clustering algorithm, an unsupervised machine learning technique for grouping similar data, is used to evaluate the learner similarity computation accuracy. The learner satisfaction achieved with the proposed model is measured based on the ratings given by the 40 participants of the experiments. From the evaluation perspective, it is evident that 79% of the learners are satisfied with the recommendations generated by the proposed model in pure cold-start condition.",TRUE,research problem
R278,Information Science,R41526,Enriching pre-trained language model with entity information for relation classification,S131319,R41528,has research problem,R41557,Relation classification,"Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.",TRUE,research problem
R278,Information Science,R41562,Improving relation extraction by pre-trained language representations,S134826,R41564,has research problem,R44342,Relation extraction,"Current state-of-the-art relation extraction methods typically rely on a set of lexical, syntactic, and semantic features, explicitly computed in a pre-processing step. Training feature extraction models requires additional annotated language resources, which severely restricts the applicability and portability of relation extraction to novel languages. Similarly, pre-processing introduces an additional source of error. To address these limitations, we introduce TRE, a Transformer for Relation Extraction, extending the OpenAI Generative Pre-trained Transformer [Radford et al., 2018]. Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform the relation classification and combines it with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions. TRE allows us to learn implicit linguistic features solely from plain text corpora by unsupervised pre-training, before fine-tuning the learned language representations on the relation extraction task. TRE obtains a new state-of-the-art result on the TACRED and SemEval 2010 Task 8 datasets, achieving a test F1 of 67.4 and 87.1, respectively. Furthermore, we observe a significant increase in sample efficiency. With only 20% of the training examples, TRE matches the performance of our baselines and our model trained from scratch on 100% of the TACRED dataset. We open-source our trained models, experiments, and source code.",TRUE,research problem
R278,Information Science,R44287,Graph Convolution over Pruned Dependency Trees Improves Relation Extraction,S134721,R44288,has research problem,R44342,Relation extraction,"Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.",TRUE,research problem
R278,Information Science,R46373,Sentence similarity learning by lexical decomposition and composition,S141526,R46375,has research problem,R46390,sentence similarity,"Most conventional sentence similarity methods only focus on similar parts of two input sentences, and simply ignore the dissimilar parts, which usually give us some clues and semantic meanings about the sentences. In this work, we propose a model to take into account both the similarities and dissimilarities by decomposing and composing lexical semantics over sentences. The model represents each word as a vector, and calculates a semantic matching vector for each word based on all words in the other sentence. Then, each word vector is decomposed into a similar component and a dissimilar component based on the semantic matching vector. After this, a two-channel CNN model is employed to capture features by composing the similar and dissimilar components. Finally, a similarity score is estimated over the composed feature vectors. Experimental results show that our model gets the state-of-the-art performance on the answer sentence selection task, and achieves a comparable result on the paraphrase identification task.",TRUE,research problem
R278,Information Science,R46440,Learning context-sensitive convolutional filters for text processing,S141628,R46442,has research problem,R46462,Text Processing,"Convolutional neural networks (CNNs) have recently emerged as a popular building block for natural language processing (NLP). Despite their success, most existing CNN models employed in NLP share the same learned (and static) set of filters for all input sentences. In this paper, we consider an approach of using a small meta network to learn context-sensitive convolutional filters for text processing. The role of meta network is to abstract the contextual information of a sentence or document into a set of input-sensitive filters. We further generalize this framework to model sentence pairs, where a bidirectional filter generation mechanism is introduced to encapsulate co-dependent sentence representations. In our benchmarks on four different tasks, including ontology classification, sentiment analysis, answer sentence selection, and paraphrase identification, our proposed model, a modified CNN with context-sensitive filters, consistently outperforms the standard CNN and attention-based CNN baselines. By visualizing the learned context-sensitive filters, we further validate and rationalize the effectiveness of proposed framework.",TRUE,research problem
R278,Information Science,R46391,A parallel-hierarchical model for machine comprehension on sparse data,S141547,R46393,has research problem,R46402,Understanding unstructured text,"Understanding unstructured text is a major goal within natural language processing. Comprehension tests pose questions based on short text passages to evaluate such understanding. In this work, we investigate machine comprehension on the challenging {\it MCTest} benchmark. Partly because of its limited size, prior work on {\it MCTest} has focused mainly on engineering better features. We tackle the dataset with a neural approach, harnessing simple neural networks arranged in a parallel hierarchy. The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set. Perspectives range from the word level to sentence fragments to sequences of sentences; the networks operate only on word-embedding representations of text. When trained with a methodology designed to help cope with limited training data, our Parallel-Hierarchical model sets a new state of the art for {\it MCTest}, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15\% absolute).",TRUE,research problem
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559059,R140061,has research problem,R140068,aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons,"Purpose In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup but researches in the field of open data hackathons have not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons. Design/methodology/approach In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup. Findings Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate if the contest has contributed to the decision of establish a startup and what factors affect the decision to establish a startup apply to open data developers, and if the participants of the contest agree with these factors. Originality/value The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.",TRUE,research problem
R137681,"Information Systems, Process and Knowledge Management",R175410,The FAIR Data Maturity Model: An Approach to Harmonise FAIR Assessments,S695007,R175412,has research problem,R175413,assess the FAIRness of research data,"In the past years, many methodologies and tools have been developed to assess the FAIRness of research data. These different methodologies and tools have been based on various interpretations of the FAIR principles, which makes comparison of the results of the assessments difficult. The work in the RDA FAIR Data Maturity Model Working Group reported here has delivered a set of indicators with priorities and guidelines that provide a ‘lingua franca’ that can be used to make the results of the assessment using those methodologies and tools comparable. The model can act as a tool that can be used by various stakeholders, including researchers, data stewards, policy makers and funding agencies, to gain insight into the current FAIRness of data as well as into the aspects that can be improved to increase the potential for reuse of research data. Through increased efficiency and effectiveness, it helps research activities to solve societal challenges and to support evidence-based decisions. The Maturity Model is publicly available and the Working Group is encouraging application of the model in practice. Experience with the model will be taken into account in the further development of the model.",TRUE,research problem
R137681,"Information Systems, Process and Knowledge Management",R156129,DIGITAL MANUFACTURING: REQUIREMENTS AND CHALLENGES FOR IMPLEMENTING DIGITAL SURROGATES,S627020,R156131,has research problem,R5007,digital twin,"A key challenge for manufacturers today is efficiently producing and delivering products on time. Issues include demand for customized products, changes in orders, and equipment status change, complicating the decision-making process. A real-time digital representation of the manufacturing operation would help address these challenges. Recent technology advancements of smart sensors, IoT, and cloud computing make it possible to realize a ""digital twin"" of a manufacturing system or process. Digital twins or surrogates are data-driven virtual representations that replicate, connect, and synchronize the operation of a manufacturing system or process. They utilize dynamically collected data to track system behaviors, analyze performance, and help make decisions without interrupting production. In this paper, we define digital surrogate, explore their relationships to simulation, digital thread, artificial intelligence, and IoT. We identify the technology and standard requirements and challenges for implementing digital surrogates. A production planning case is used to exemplify the digital surrogate concept.",TRUE,research problem
R78413,Learner-Interface Interaction,R107843,Getting the Mix Right Again: An updated and theoretical rationale for interaction,S490726,R107845,has research problem,R107847,The role of interaction as a crucial component of the education process,"No topic raises more contentious debate among educators than the role of interaction as a crucial component of the education process. This debate is fueled by surface problems of definition and vested interests of professional educators, but is more deeply marked by epistemological assumptions relative to the role of humans and human interaction in education and learning. The seminal article by Daniel and Marquis (1979) challenged distance educators to get the mixture right between independent study and interactive learning strategies and activities. They quite rightly pointed out that these two primary forms of education have differing economic, pedagogical, and social characteristics, and that we are unlikely to find a “perfect” mix that meets all learner and institutional needs across all curricula and content. Nonetheless, hard decisions have to be made. Even more than in 1979, the development of newer, cost effective technologies and the nearly ubiquitous (in developed countries) Net-based telecommunications system is transforming, at least, the cost and access implications of getting the mix right. Further, developments in social cognitive based learning theories are providing increased evidence of the importance of collaborative activity as a component of all forms of education – including those delivered at a distance. Finally, the context in which distance education is developed and delivered is changing in response to the capacity of the semantic Web (Berners-Lee, 1999) to support interaction, not only amongst humans, but also between and among autonomous agents and human beings. Thus, the landscape and challenges of “getting the mix right” have not lessened in the past 25 years, and, in fact, have become even more complicated. This paper attempts to provide a theoretical rationale and guide for instructional designers and teachers interested in developing distance education systems that are both effective and efficient in meeting diverse student learning needs.",TRUE,research problem
R112125,Machine Learning,R147894,Active Learning Yields Better Training Data for Scientific Named Entity Recognition,S593215,R147896,has research problem,R114168,Active Learning,"Despite significant progress in natural language processing, machine learning models require substantial expertannotated training data to perform well in tasks such as named entity recognition (NER) and entity relations extraction. Furthermore, NER is often more complicated when working with scientific text. For example, in polymer science, chemical structure may be encoded using nonstandard naming conventions, the same concept can be expressed using many different terms (synonymy), and authors may refer to polymers with ad-hoc labels. These challenges, which are not unique to polymer science, make it difficult to generate training data, as specialized skills are needed to label text correctly. We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance. Our approach requires just five hours of expert time to achieve discrimination capacity comparable to that of a state-of-the-art chemical NER toolkit.",TRUE,research problem
R112125,Machine Learning,R140159,LogicENN: A Neural Based Knowledge Graphs Embedding Model with Logical Rules,S559461,R140160,has research problem,R124628,Knowledge Graph Embedding,"Knowledge graph embedding models have gained significant attention in AI research. The aim of knowledge graph embedding is to embed the graphs into a vector space in which the structure of the graph is preserved. Recent works have shown that the inclusion of background knowledge, such as logical rules, can improve the performance of embeddings in downstream machine learning tasks. However, so far, most existing models do not allow the inclusion of rules. We address the challenge of including rules and present a new neural based embedding model (LogicENN). We prove that LogicENN can learn every ground truth of encoded rules in a knowledge graph. To the best of our knowledge, this has not been proved so far for the neural based family of embedding models. Moreover, we derive formulae for the inclusion of various rules, including (anti-)symmetric, inverse, irreflexive and transitive, implication, composition, equivalence, and negation. Our formulation allows avoiding grounding for implication and equivalence relations. Our experiments show that LogicENN outperforms the existing models in link prediction.",TRUE,research problem
R112125,Machine Learning,R140156,OWL2Vec*: Embedding of OWL Ontologies,S559998,R140158,has research problem,R140302,Ontology Embedding,"Abstract Semantic embedding of knowledge graphs has been widely studied and used for prediction and statistical analysis tasks across various domains such as Natural Language Processing and the Semantic Web. However, less attention has been paid to developing robust methods for embedding OWL (Web Ontology Language) ontologies, which contain richer semantic information than plain knowledge graphs, and have been widely adopted in domains such as bioinformatics. In this paper, we propose a random walk and word embedding based ontology embedding method named , which encodes the semantics of an OWL ontology by taking into account its graph structure, lexical information and logical constructors. Our empirical evaluation with three real world datasets suggests that benefits from these three different aspects of an ontology in class membership prediction and class subsumption prediction tasks. Furthermore, often significantly outperforms the state-of-the-art methods in our experiments.",TRUE,research problem
R112125,Machine Learning,R144951,Accurate unlexicalized parsing,S580178,R144953,has research problem,R144956,Parsing,"We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-the-art. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize.",TRUE,research problem
R112125,Machine Learning,R144816,NLTK: The Natural Language Toolkit,S579826,R144818,has research problem,R144852,ready-to-use computational linguistics courseware,"NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.",TRUE,research problem
R112125,Machine Learning,R139596,"An Integrated Approach for Improving Brand Consistency of Web Content: Modeling, Analysis, and Recommendation",S557044,R139598,has research problem,R139600,Sentence Ranking,"A consumer-dependent (business-to-consumer) organization tends to present itself as possessing a set of human qualities, which is termed the brand personality of the company. The perception is impressed upon the consumer through the content, be it in the form of advertisement, blogs, or magazines, produced by the organization. A consistent brand will generate trust and retain customers over time as they develop an affinity toward regularity and common patterns. However, maintaining a consistent messaging tone for a brand has become more challenging with the virtual explosion in the amount of content that needs to be authored and pushed to the Internet to maintain an edge in the era of digital marketing. To understand the depth of the problem, we collect around 300K web page content from around 650 companies. We develop trait-specific classification models by considering the linguistic features of the content. The classifier automatically identifies the web articles that are not consistent with the mission and vision of a company and further helps us to discover the conditions under which the consistency cannot be maintained. To address the brand inconsistency issue, we then develop a sentence ranking system that outputs the top three sentences that need to be changed for making a web article more consistent with the company’s brand personality.",TRUE,research problem
R112125,Machine Learning,R161808,Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer,S646350,R161810,has research problem,R41156,transfer learning,"Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ""Colossal Clean Crawled Corpus"", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.",TRUE,research problem
R126,Materials Chemistry,R141661,Fluorescent N-Doped Carbon Dots as in Vitro and in Vivo Nanothermometer,S567956,R141663,has research problem,R141664,Nanothermometer,"The fluorescent N-doped carbon dots (N-CDs) obtained from C3N4 emit strong blue fluorescence, which is stable with different ionic strengths and time. The fluorescence intensity of N-CDs decreases with the temperature increasing, while it can recover to the initial one with the temperature decreasing. It is an accurate linear response of fluorescence intensity to temperature, which may be attributed to the synergistic effect of abundant oxygen-containing functional groups and hydrogen bonds. Further experiments also demonstrate that N-CDs can serve as effective in vitro and in vivo fluorescence-based nanothermometer.",TRUE,research problem
R126,Materials Chemistry,R141701,Carbon Dot Nanothermometry: Intracellular Photoluminescence Lifetime Thermal Sensing,S568264,R141706,has research problem,R141664,Nanothermometer,"Nanoscale biocompatible photoluminescence (PL) thermometers that can be used to accurately and reliably monitor intracellular temperatures have many potential applications in biology and medicine. Ideally, such nanothermometers should be functional at physiological pH across a wide range of ionic strengths, probe concentrations, and local environments. Here, we show that water-soluble N,S-co-doped carbon dots (CDs) exhibit temperature-dependent photoluminescence lifetimes and can serve as highly sensitive and reliable intracellular nanothermometers. PL intensity measurements indicate that these CDs have many advantages over alternative semiconductor- and CD-based nanoscale temperature sensors. Importantly, their PL lifetimes remain constant over wide ranges of pH values (5-12), CD concentrations (1.5 × 10-5 to 0.5 mg/mL), and environmental ionic strengths (up to 0.7 mol·L-1 NaCl). Moreover, they are biocompatible and nontoxic, as demonstrated by cell viability and flow cytometry analyses using NIH/3T3 and HeLa cell lines. N,S-CD thermal sensors also exhibit good water dispersibility, superior photo- and thermostability, extraordinary environment and concentration independence, high storage stability, and reusability-their PL decay curves at temperatures between 15 and 45 °C remained unchanged over seven sequential experiments. In vitro PL lifetime-based temperature sensing performed with human cervical cancer HeLa cells demonstrated the great potential of these nanosensors in biomedicine. Overall, N,S-doped CDs exhibit excitation-independent emission with strongly temperature-dependent monoexponential decay, making them suitable for both in vitro and in vivo luminescence lifetime thermometry.",TRUE,research problem
R126,Materials Chemistry,R141724,Intracellular ratiometric temperature sensing using fluorescent carbon dots,S568371,R141728,has research problem,R141664,Nanothermometer,A self-referencing dual fluorescing carbon dot-based nanothermometer can ratiometrically sense thermal events in HeLa cells with very high sensitivity.,TRUE,research problem
R126,Materials Chemistry,R141748,"Dual functional highly luminescence B, N Co-doped carbon nanodots as nanothermometer and Fe3+/Fe2+ sensor",S568447,R141750,has research problem,R141664,Nanothermometer,"Abstract Dual functional fluorescence nanosensors have many potential applications in biology and medicine. Monitoring temperature with higher precision at localized small length scales or in a nanocavity is a necessity in various applications. As well as the detection of biologically interesting metal ions using low-cost and sensitive approach is of great importance in bioanalysis. In this paper, we describe the preparation of dual-function highly fluorescent B, N-co-doped carbon nanodots (CDs) that work as chemical and thermal sensors. The CDs emit blue fluorescence peaked at 450 nm and exhibit up to 70% photoluminescence quantum yield with showing excitation-independent fluorescence. We also show that water-soluble CDs display temperature-dependent fluorescence and can serve as highly sensitive and reliable nanothermometers with a thermo-sensitivity 1.8% °C −1 , and wide range thermo-sensing between 0–90 °C with excellent recovery. Moreover, the fluorescence emission of CDs are selectively quenched after the addition of Fe 2+ and Fe 3+ ions while show no quenching with adding other common metal cations and anions. The fluorescence emission shows a good linear correlation with concentration of Fe 2+ and Fe 3+ (R 2 = 0.9908 for Fe 2+ and R 2 = 0.9892 for Fe 3+ ) with a detection limit of 80.0 ± 0.5 nM for Fe 2+ and 110.0 ± 0.5 nM for Fe 3+ . Considering the high quantum yield and selectivity, CDs are exploited to design a nanoprobe towards iron detection in a biological sample. The fluorimetric assay is used to detect Fe 2+ in iron capsules and total iron in serum samples successfully.",TRUE,research problem
R126,Materials Chemistry,R142153,CdSe Quantum Dots for Two-Photon Fluorescence Thermal Imaging,S571115,R142155,has research problem,R142135,Nanothermometer,"The technological development of quantum dots has ushered in a new era in fluorescence bioimaging, which was propelled with the advent of novel multiphoton fluorescence microscopes. Here, the potential use of CdSe quantum dots has been evaluated as fluorescent nanothermometers for two-photon fluorescence microscopy. In addition to the enhancement in spatial resolution inherent to any multiphoton excitation processes, two-photon (near-infrared) excitation leads to a temperature sensitivity of the emission intensity much higher than that achieved under one-photon (visible) excitation. The peak emission wavelength is also temperature sensitive, providing an additional approach for thermal imaging, which is particularly interesting for systems where nanoparticles are not homogeneously dispersed. On the basis of these superior thermal sensitivity properties of the two-photon excited fluorescence, we have demonstrated the ability of CdSe quantum dots to image a temperature gradient artificially created in a biocompatible fluid (phosphate-buffered saline) and also their ability to measure an intracellular temperature increase externally induced in a single living cell.",TRUE,research problem
R126,Materials Chemistry,R146779,A Solution-Processable Electron Acceptor Based on Dibenzosilole and Diketopyrrolopyrrole for Organic Solar Cells,S587666,R146781,has research problem,R146783,Organic solar cells,"Organic solar cells (OSCs) are a promising cost-effective alternative for utility of solar energy, and possess low-cost, light-weight, and fl exibility advantages. [ 1–7 ] Much attention has been focused on the development of OSCs which have seen a dramatic rise in effi ciency over the last decade, and the encouraging power conversion effi ciency (PCE) over 9% has been achieved from bulk heterojunction (BHJ) OSCs. [ 8 ] With regard to photoactive materials, fullerenes and their derivatives, such as [6,6]-phenyl C 61 butyric acid methyl ester (PC 61 BM), have been the dominant electron-acceptor materials in BHJ OSCs, owing to their high electron mobility, large electron affi nity and isotropy of charge transport. [ 9 ] However, fullerenes have a few disadvantages, such as restricted electronic tuning and weak absorption in the visible region. Furthermore, in typical BHJ system of poly(3-hexylthiophene) (P3HT):PC 61 BM, mismatching energy levels between donor and acceptor leads to energy loss and low open-circuit voltages ( V OC ). To solve these problems, novel electron acceptor materials with strong and broad absorption spectra and appropriate energy levels are necessary for OSCs. Recently, non-fullerene small molecule acceptors have been developed. [ 10 , 11 ] However, rare reports on the devices based on solution-processed non-fullerene small molecule acceptors have shown PCEs approaching or exceeding 1.5%, [ 12–19 ] and only one paper reported PCEs over 2%. [ 16 ]",TRUE,research problem
R126,Materials Chemistry,R146842,Push–Pull Type Non-Fullerene Acceptors for Polymer Solar Cells: Effect of the Donor Core,S587920,R146845,has research problem,R146783,Organic solar cells,"There has been a growing interest in the design and synthesis of non-fullerene acceptors for organic solar cells that may overcome the drawbacks of the traditional fullerene-based acceptors. Herein, two novel push-pull (acceptor-donor-acceptor) type small-molecule acceptors, that is, ITDI and CDTDI, with indenothiophene and cyclopentadithiophene as the core units and 2-(3-oxo-2,3-dihydroinden-1-ylidene)malononitrile (INCN) as the end-capping units, are designed and synthesized for non-fullerene polymer solar cells (PSCs). After device optimization, PSCs based on ITDI exhibit good device performance with a power conversion efficiency (PCE) as high as 8.00%, outperforming the CDTDI-based counterparts fabricated under identical condition (2.75% PCE). We further discuss the performance of these non-fullerene PSCs by correlating the energy level and carrier mobility with the core of non-fullerene acceptors. These results demonstrate that indenothiophene is a promising electron-donating core for high-performance non-fullerene small-molecule acceptors.",TRUE,research problem
R126,Materials Chemistry,R146865,A simple small molecule as an acceptor for fullerene-free organic solar cells with efficiency near 8%,S588036,R146868,has research problem,R146783,Organic solar cells,"A simple small molecule acceptor named DICTF, with fluorene as the central block and 2-(2,3-dihydro-3-oxo-1H-inden-1-ylidene)propanedinitrile as the end-capping groups, has been designed for fullerene-free organic solar cells. The new molecule was synthesized from widely available and inexpensive commercial materials in only three steps with a high overall yield of ∼60%. Fullerene-free organic solar cells with DICTF as the acceptor material provide a high PCE of 7.93%.",TRUE,research problem
R126,Materials Chemistry,R147918,High-Performance Electron Acceptor with Thienyl Side Chains for Organic Photovoltaics,S593309,R147931,has research problem,R146783,Organic solar cells,"We develop an efficient fused-ring electron acceptor (ITIC-Th) based on indacenodithieno[3,2-b]thiophene core and thienyl side-chains for organic solar cells (OSCs). Relative to its counterpart with phenyl side-chains (ITIC), ITIC-Th shows lower energy levels (ITIC-Th: HOMO = -5.66 eV, LUMO = -3.93 eV; ITIC: HOMO = -5.48 eV, LUMO = -3.83 eV) due to the σ-inductive effect of thienyl side-chains, which can match with high-performance narrow-band-gap polymer donors and wide-band-gap polymer donors. ITIC-Th has higher electron mobility (6.1 × 10(-4) cm(2) V(-1) s(-1)) than ITIC (2.6 × 10(-4) cm(2) V(-1) s(-1)) due to enhanced intermolecular interaction induced by sulfur-sulfur interaction. We fabricate OSCs by blending ITIC-Th acceptor with two different low-band-gap and wide-band-gap polymer donors. In one case, a power conversion efficiency of 9.6% was observed, which rivals some of the highest efficiencies for single junction OSCs based on fullerene acceptors.",TRUE,research problem
R126,Materials Chemistry,R148204,Halogenated conjugated molecules for ambipolar field-effect transistors and non-fullerene organic solar cells,S594270,R148219,has research problem,R146783,Organic solar cells,"A series of halogenated conjugated molecules, containing F, Cl, Br and I, were easily prepared via Knoevenagel condensation and applied in field-effect transistors and organic solar cells. Halogenated conjugated materials were found to possess deep frontier energy levels and high crystallinity compared to their non-halogenated analogues, which is due to the strong electronegativity and heavy atom effect of halogens. As a result, halogenated semiconductors provide high electron mobilities up to 1.3 cm2 V−1 s−1 in transistors and high efficiencies over 9% in non-fullerene solar cells.",TRUE,research problem
R126,Materials Chemistry,R148232,Enhancing Performance of Nonfullerene Acceptors via Side‐Chain Conjugation Strategy,S594334,R148234,has research problem,R146783,Organic solar cells,"A side‐chain conjugation strategy in the design of nonfullerene electron acceptors is proposed, with the design and synthesis of a side‐chain‐conjugated acceptor (ITIC2) based on a 4,8‐bis(5‐(2‐ethylhexyl)thiophen‐2‐yl)benzo[1,2‐b:4,5‐b′]di(cyclopenta‐dithiophene) electron‐donating core and 1,1‐dicyanomethylene‐3‐indanone electron‐withdrawing end groups. ITIC2 with the conjugated side chains exhibits an absorption peak at 714 nm, which redshifts 12 nm relative to ITIC1. The absorption extinction coefficient of ITIC2 is 2.7 × 105m−1 cm−1, higher than that of ITIC1 (1.5 × 105m−1 cm−1). ITIC2 exhibits slightly higher highest occupied molecular orbital (HOMO) (−5.43 eV) and lowest unoccupied molecular orbital (LUMO) (−3.80 eV) energy levels relative to ITIC1 (HOMO: −5.48 eV; LUMO: −3.84 eV), and higher electron mobility (1.3 × 10−3 cm2 V−1 s−1) than that of ITIC1 (9.6 × 10−4 cm2 V−1 s−1). The power conversion efficiency of ITIC2‐based organic solar cells is 11.0%, much higher than that of ITIC1‐based control devices (8.54%). Our results demonstrate that side‐chain conjugation can tune energy levels, enhance absorption, and electron mobility, and finally enhance photovoltaic performance of nonfullerene acceptors.",TRUE,research problem
R126,Materials Chemistry,R148246,"Design, synthesis, and structural characterization of the first dithienocyclopentacarbazole-based n-type organic semiconductor and its application in non-fullerene polymer solar cells",S594371,R148250,has research problem,R146783,Organic solar cells,"Ladder-type dithienocyclopentacarbazole (DTCC) cores, which possess highly extended π-conjugated backbones and versatile modular structures for derivatization, were widely used to develop high-performance p-type polymeric semiconductors. However, an n-type DTCC-based organic semiconductor has not been reported to date. In this study, the first DTCC-based n-type organic semiconductor (DTCC–IC) with a well-defined A–D–A backbone was designed, synthesized, and characterized, in which a DTCC derivative substituted by four p-octyloxyphenyl groups was used as the electron-donating core and two strongly electron-withdrawing 3-(dicyanomethylene)indan-1-one moieties were used as the terminal acceptors. It was found that DTCC–IC has strong light-capturing ability in the range of 500–720 nm and exhibits an impressively high molar absorption coefficient of 2.24 × 105 M−1 cm−1 at 669 nm owing to effective intramolecular charge transfer and a strong D–A effect. Cyclic voltammetry measurements indicated that the HOMO and LUMO energy levels of DTCC–IC are −5.50 and −3.87 eV, respectively. More importantly, a high electron mobility of 2.17 × 10−3 cm2 V−1 s−1 was determined by the space-charge-limited current method; this electron mobility can be comparable to that of fullerene derivative acceptors (μe ∼ 10−3 cm2 V−1 s−1). To investigate its application potential in non-fullerene solar cells, we fabricated organic solar cells (OSCs) by blending a DTCC–IC acceptor with a PTB7-Th donor under various conditions. The results suggest that the optimized device exhibits a maximum power conversion efficiency (PCE) of up to 6% and a rational high VOC of 0.95 V. These findings demonstrate that the ladder-type DTCC core is a promising building block for the development of high-mobility n-type organic semiconductors for OSCs.",TRUE,research problem
R126,Materials Chemistry,R148606,Fused Hexacyclic Nonfullerene Acceptor with Strong Near‐Infrared Absorption for Semitransparent Organic Solar Cells with 9.77% Efficiency,S595992,R148607,has research problem,R146783,Organic solar cells,"A fused hexacyclic electron acceptor, IHIC, based on strong electron‐donating group dithienocyclopentathieno[3,2‐b]thiophene flanked by strong electron‐withdrawing group 1,1‐dicyanomethylene‐3‐indanone, is designed, synthesized, and applied in semitransparent organic solar cells (ST‐OSCs). IHIC exhibits strong near‐infrared absorption with extinction coefficients of up to 1.6 × 105m−1 cm−1, a narrow optical bandgap of 1.38 eV, and a high electron mobility of 2.4 × 10−3 cm2 V−1 s−1. The ST‐OSCs based on blends of a narrow‐bandgap polymer donor PTB7‐Th and narrow‐bandgap IHIC acceptor exhibit a champion power conversion efficiency of 9.77% with an average visible transmittance of 36% and excellent device stability; this efficiency is much higher than any single‐junction and tandem ST‐OSCs reported in the literature.",TRUE,research problem
R126,Materials Chemistry,R148630,Naphthodithiophene‐Based Nonfullerene Acceptor for High‐Performance Organic Photovoltaics: Effect of Extended Conjugation,S595991,R148632,has research problem,R146783,Organic solar cells,"Naphtho[1,2‐b:5,6‐b′]dithiophene is extended to a fused octacyclic building block, which is end capped by strong electron‐withdrawing 2‐(5,6‐difluoro‐3‐oxo‐2,3‐dihydro‐1H‐inden‐1‐ylidene)malononitrile to yield a fused‐ring electron acceptor (IOIC2) for organic solar cells (OSCs). Relative to naphthalene‐based IHIC2, naphthodithiophene‐based IOIC2 with a larger π‐conjugation and a stronger electron‐donating core shows a higher lowest unoccupied molecular orbital energy level (IOIC2: −3.78 eV vs IHIC2: −3.86 eV), broader absorption with a smaller optical bandgap (IOIC2: 1.55 eV vs IHIC2: 1.66 eV), and a higher electron mobility (IOIC2: 1.0 × 10−3 cm2 V−1 s−1 vs IHIC2: 5.0 × 10−4 cm2 V−1 s−1). Thus, IOIC2‐based OSCs show higher values in open‐circuit voltage, short‐circuit current density, fill factor, and thereby much higher power conversion efficiency (PCE) values than those of the IHIC2‐based counterpart. In particular, as‐cast OSCs based on FTAZ: IOIC2 yield PCEs of up to 11.2%, higher than that of the control devices based on FTAZ: IHIC2 (7.45%). Furthermore, by using 0.2% 1,8‐diiodooctane as the processing additive, a PCE of 12.3% is achieved from the FTAZ:IOIC2‐based devices, higher than that of the FTAZ:IHIC2‐based devices (7.31%). These results indicate that incorporating extended conjugation into the electron‐donating fused‐ring units in nonfullerene acceptors is a promising strategy for designing high‐performance electron acceptors.",TRUE,research problem
R126,Materials Chemistry,R148652,"Dithieno[3,2-b:2′,3′-d]pyrrol Fused Nonfullerene Acceptors Enabling Over 13% Efficiency for Organic Solar Cells",S595989,R148654,has research problem,R146783,Organic solar cells,"A new electron‐rich central building block, 5,5,12,12‐tetrakis(4‐hexylphenyl)‐indacenobis‐(dithieno[3,2‐b:2′,3′‐d]pyrrol) (INP), and two derivative nonfullerene acceptors (INPIC and INPIC‐4F) are designed and synthesized. The two molecules reveal broad (600–900 nm) and strong absorption due to the satisfactory electron‐donating ability of INP. Compared with its counterpart INPIC, fluorinated nonfullerene acceptor INPIC‐4F exhibits a stronger near‐infrared absorption with a narrower optical bandgap of 1.39 eV, an improved crystallinity with higher electron mobility, and down‐shifted highest occupied molecular orbital and lowest unoccupied molecular orbital energy levels. Organic solar cells (OSCs) based on INPIC‐4F exhibit a high power conversion efficiency (PCE) of 13.13% and a relatively low energy loss of 0.54 eV, which is among the highest efficiencies reported for binary OSCs in the literature. The results demonstrate the great potential of the new INP as an electron‐donating building block for constructing high‐performance nonfullerene acceptors for OSCs.",TRUE,research problem
R126,Materials Chemistry,R161104,Our plastic age,S643266,R161106,has research problem,R161107,Plastic,"Within the last few decades, plastics have revolutionized our daily lives. Globally we use in excess of 260 million tonnes of plastic per annum, accounting for approximately 8 per cent of world oil production. In this Theme Issue of Philosophical Transactions of the Royal Society, we describe current and future trends in usage, together with the many benefits that plastics bring to society. At the same time, we examine the environmental consequences resulting from the accumulation of waste plastic, the effects of plastic debris on wildlife and concerns for human health that arise from the production, usage and disposal of plastics. Finally, we consider some possible solutions to these problems together with the research and policy priorities necessary for their implementation.",TRUE,research problem
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S502509,R110244,has research problem,R3070,Breast cancer,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to achieve more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water-solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1HNMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. However, the IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, where the difference was statistically considered significant (p˂0.05). However, the IC50 value of all drugs on T47D were lower than those on MCF7.",TRUE,research problem
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505411,R110815,has research problem,R3070,Breast cancer,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,research problem
R67,Medicinal Chemistry and Pharmaceutics,R138607,A Novel Nanoparticle Formulation for Sustained Paclitaxel Delivery,S550735,R138609,has research problem,R3070,Breast cancer,"PurposeTo develop a novel nanoparticle drug delivery system consisting of chitosan and glyceryl monooleate (GMO) for the delivery of a wide variety of therapeutics including paclitaxel.MethodsChitosan/GMO nanoparticles were prepared by multiple emulsion (o/w/o) solvent evaporation methods. Particle size and surface charge were determined. The morphological characteristics and cellular adhesion were evaluated with surface or transmission electron microscopy methods. The drug loading, encapsulation efficiency, in vitro release and cellular uptake were determined using HPLC methods. The safety and efficacy were evaluated by MTT cytotoxicity assay in human breast cancer cells (MDA-MB-231).ResultsThese studies provide conceptual proof that chitosan/GMO can form polycationic nano-sized particles (400 to 700 nm). The formulation demonstrates high yields (98 to 100%) and similar entrapment efficiencies. The lyophilized powder can be stored and easily be resuspended in an aqueous matrix. The nanoparticles have a hydrophobic inner-core with a hydrophilic coating that exhibits a significant positive charge and sustained release characteristics. This novel nanoparticle formulation shows evidence of mucoadhesive properties; a fourfold increased cellular uptake and a 1000-fold reduction in the IC50 of PTX.ConclusionThese advantages allow lower doses of PTX to achieve a therapeutic effect, thus presumably minimizing the adverse side effects.",TRUE,research problem
R55,Microbial Physiology,R49446,The Impact of Pyroglutamate: Sulfolobus acidocaldarius Has a Growth Advantage over Saccharolobus solfataricus in Glutamate-Containing Media,S147533,R49467,has research problem,R49465,Utilization of pyroglutamate,"Microorganisms are well adapted to their habitat but are partially sensitive to toxic metabolites or abiotic compounds secreted by other organisms or chemically formed under the respective environmental conditions. Thermoacidophiles are challenged by pyroglutamate, a lactam that is spontaneously formed by cyclization of glutamate under aerobic thermoacidophilic conditions. It is known that growth of the thermoacidophilic crenarchaeon Saccharolobus solfataricus (formerly Sulfolobus solfataricus) is completely inhibited by pyroglutamate. In the present study, we investigated the effect of pyroglutamate on the growth of S. solfataricus and the closely related crenarchaeon Sulfolobus acidocaldarius. In contrast to S. solfataricus, S. acidocaldarius was successfully cultivated with pyroglutamate as a sole carbon source. Bioinformatic analyses showed that both members of the Sulfolobaceae have at least one candidate for a 5-oxoprolinase, which catalyses the ATP-dependent conversion of pyroglutamate to glutamate. In S. solfataricus, we observed the intracellular accumulation of pyroglutamate, and crude cell extract assays showed a less effective degradation of pyroglutamate. Apparently, S. acidocaldarius seems to be less versatile regarding carbohydrates and prefers peptidolytic growth compared to S. solfataricus. In conclusion, S. acidocaldarius exhibits a more efficient utilization of pyroglutamate and is not inhibited by this compound, making it a better candidate for applications with glutamate-containing media at high temperatures.",TRUE,research problem
R145261,Natural Language Processing,R142108,SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge,S570958,R142110,has research problem,R140633,Joint Student Response Analysis,"We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and therefore is related to recognizing textual entailment. Thus, we offered to the community a 5-way student response labeling task, as well as 3-way and 2-way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from 9 participating teams, and discuss future directions.",TRUE,research problem
R145261,Natural Language Processing,R141014,NADI 2021: The Second Nuanced Arabic Dialect Identification Shared Task,S563297,R141016,has research problem,R141017,Arabic Dialect Identification,"We present the findings and results of the Second Nuanced Arabic Dialect Identification Shared Task (NADI 2021). This Shared Task includes four subtasks: country-level Modern Standard Arabic (MSA) identification (Subtask 1.1), country-level dialect identification (Subtask 1.2), province-level MSA identification (Subtask 2.1), and province-level sub-dialect identification (Subtask 2.2). The shared task dataset covers a total of 100 provinces from 21 Arab countries, collected from the Twitter domain. A total of 53 teams from 23 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 16 submissions for Subtask 1.1 from five teams, 27 submissions for Subtask 1.2 from eight teams, 12 submissions for Subtask 2.1 from four teams, and 13 submissions for Subtask 2.2 from four teams.",TRUE,research problem
R145261,Natural Language Processing,R146872,"Identification of Tasks, Datasets, Evaluation Metrics, and Numeric Scores for Scientific Leaderboards Construction",S588081,R146874,has research problem,R146875,Automatic leaderboard construction,"While the fast-paced inception of novel tasks and new datasets helps foster active research in a community towards interesting directions, keeping track of the abundance of research activity in different areas on different datasets is likely to become increasingly difficult. The community could greatly benefit from an automatic system able to summarize scientific results, e.g., in the form of a leaderboard. In this paper we build two datasets and develop a framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and score from NLP papers, towards the automatic construction of leaderboards. Experiments show that our model outperforms several baselines by a large margin. Our model is a first step towards automatic leaderboard construction, e.g., in the NLP domain.",TRUE,research problem
R145261,Natural Language Processing,R162482,BioCreative V track 4: a shared task for the extraction of causal network information using the Biological Expression Language,S648252,R162484,has research problem,R162485,automatically constructing causal biological networks from text,"Automatic extraction of biological network information is one of the most desired and most complex tasks in biological and medical text mining. Track 4 at BioCreative V attempts to approach this complexity using fragments of large-scale manually curated biological networks, represented in Biological Expression Language (BEL), as training and test data. BEL is an advanced knowledge representation format which has been designed to be both human readable and machine processable. The specific goal of track 4 was to evaluate text mining systems capable of automatically constructing BEL statements from given evidence text, and of retrieving evidence text for given BEL statements. Given the complexity of the task, we designed an evaluation methodology which gives credit to partially correct statements. We identified various levels of information expressed by BEL statements, such as entities, functions, relations, and introduced an evaluation framework which rewards systems capable of delivering useful BEL fragments at each of these levels. The aim of this evaluation method is to help identify the characteristics of the systems which, if combined, would be most useful for achieving the overall goal of automatically constructing causal biological networks from text.",TRUE,research problem
R145261,Natural Language Processing,R163406,Overview of the Cancer Genetics (CG) task of BioNLP Shared Task 2013,S656894,R164522,has research problem,R163411,Cancer genetics (CG) event extraction,"We present the design, preparation, results and analysis of the Cancer Genetics (CG) event extraction task, a main task of the BioNLP Shared Task (ST) 2013. The CG task is an information extraction task targeting the recognition of events in text, represented as structured n-ary associations of given physical entities. In addition to addressing the cancer domain, the CG task is differentiated from previous event extraction tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple levels of biological organization, ranging from the molecular through the cellular and organ levels up to whole organisms. Final test set submissions were accepted from six teams. The highest-performing system achieved an F-score of 55.4%. This level of performance is broadly comparable with the state of the art for established molecular-level extraction tasks, demonstrating that event extraction resources and methods generalize well to higher levels of biological organization and are applicable to the analysis of scientific texts on cancer. The CG task continues as an open challenge to all interested parties, with tools and resources available from http://2013.bionlp-st.org/.",TRUE,research problem
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S648637,R162563,has research problem,R162565,chemical indexing,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). 
The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature.",TRUE,research problem
R145261,Natural Language Processing,R162457,Overview of the CHEMDNER patents task,S686650,R171968,has research problem,R171988,Chemical passage detection,"A considerable effort has been made to extract biological and chemical entities, as well as their relationships, from the scientific literature, either manually through traditional literature curation or by using information extraction and text mining technologies. Medicinal chemistry patents contain a wealth of information, for instance to uncover potential biomarkers that might play a role in cancer treatment and prognosis. However, current biomedical annotation databases do not cover such information, partly due to limitations of publicly available biomedical patent mining software. As part of the BioCreative V CHEMDNER patents track, we present the results of the first named entity recognition (NER) assignment carried out to detect mentions of chemical compounds and genes/proteins in running patent text. More specifically, this task aimed to evaluate the performance of automatic name recognition strategies capable of isolating chemical names and gene and gene product mentions from surrounding text within patent titles and abstracts. A total of 22 unique teams submitted results for at least one of the three CHEMDNER subtasks. The first subtask, called the CEMP (chemical entity mention in patents) task, focused on the detection of chemical named entity mentions in patents, requesting teams to return the start and end indices corresponding to all the chemical entities found in a given record. A total of 21 teams submitted 93 runs, for this subtask. The top performing team reached an f-measure of 0.89 with a precision of 0.87 and a recall of 0.91. The CPD (chemical passage detection) task required the classification of patent titles and abstracts whether they do or do not contain chemical compound mentions. Nine teams returned predictions for this task (40 runs). 
The top run in terms of Matthew’s correlation coefficient (MCC) had a score of 0.88, the highest sensitivity.",TRUE,research problem
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S648200,R162476,has research problem,R162477,chemical-disease relation (CDR) extraction,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,research problem
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686689,R172006,has research problem,R162479,chemical-induced disease (CID) relation extraction,"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. 
Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,research problem
R145261,Natural Language Processing,R162400,Overview of the gene ontology task at BioCreative IV,S647923,R162402,has research problem,R162407,concept-recognition,"Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. 
Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.",TRUE,research problem
R145261,Natural Language Processing,R175469,Extracting a Knowledge Base of Mechanisms from COVID-19 Papers,S695286,R175471,has research problem,R175472,construction of a knowledge base (KB) of mechanisms,"The COVID-19 pandemic has spawned a diverse body of scientific literature that is challenging to navigate, stimulating interest in automated tools to help find useful knowledge. We pursue the construction of a knowledge base (KB) of mechanisms—a fundamental concept across the sciences, which encompasses activities, functions and causal relations, ranging from cellular processes to economic impacts. We extract this information from the natural language of scientific papers by developing a broad, unified schema that strikes a balance between relevance and breadth. We annotate a dataset of mechanisms with our schema and train a model to extract mechanism relations from papers. Our experiments demonstrate the utility of our KB in supporting interdisciplinary scientific search over COVID-19 literature, outperforming the prominent PubMed search in a study with clinical experts. Our search engine, dataset and code are publicly available.",TRUE,research problem
R145261,Natural Language Processing,R163616,"CRAFT Shared Tasks 2019 Overview –- Integrated Structure, Semantics, and Coreference",S653417,R163635,has research problem,R124236,Coreference Resolution,"As part of the BioNLP Open Shared Tasks 2019, the CRAFT Shared Tasks 2019 provides a platform to gauge the state of the art for three fundamental language processing tasks — dependency parse construction, coreference resolution, and ontology concept identification — over full-text biomedical articles. The structural annotation task requires the automatic generation of dependency parses for each sentence of an article given only the article text. The coreference resolution task focuses on linking coreferring base noun phrase mentions into chains using the symmetrical and transitive identity relation. The ontology concept annotation task involves the identification of concept mentions within text using the classes of ten distinct ontologies in the biomedical domain, both unmodified and augmented with extension classes. This paper provides an overview of each task, including descriptions of the data provided to participants and the evaluation metrics used, and discusses participant results relative to baseline performances for each of the three tasks.",TRUE,research problem
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655538,R164172,has research problem,R124236,Coreference Resolution,"Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,research problem
R145261,Natural Language Processing,R141014,NADI 2021: The Second Nuanced Arabic Dialect Identification Shared Task,S583312,R145677,has research problem,R145682,Country-level dialect identification,"We present the findings and results of the Second Nuanced Arabic Dialect Identification Shared Task (NADI 2021). This Shared Task includes four subtasks: country-level Modern Standard Arabic (MSA) identification (Subtask 1.1), country-level dialect identification (Subtask 1.2), province-level MSA identification (Subtask 2.1), and province-level sub-dialect identification (Subtask 2.2). The shared task dataset covers a total of 100 provinces from 21 Arab countries, collected from the Twitter domain. A total of 53 teams from 23 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 16 submissions for Subtask 1.1 from five teams, 27 submissions for Subtask 1.2 from eight teams, 12 submissions for Subtask 2.1 from four teams, and 13 submissions for Subtask 2.2 from four teams.",TRUE,research problem
R145261,Natural Language Processing,R163747,CrossNER: Evaluating Cross-Domain Named Entity Recognition,S653880,R163749,has research problem,R123004,Cross-Domain Named Entity Recognition,"Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.",TRUE,research problem
R145261,Natural Language Processing,R141052,The CoNLL-2010 Shared Task: Learning to Detect Hedges and their Scope in Natural Language Text,S563427,R141054,has research problem,R141055,Detection of uncertainty cues and their linguistic scope,"The CoNLL-2010 Shared Task was dedicated to the detection of uncertainty cues and their linguistic scope in natural language texts. The motivation behind this task was that distinguishing factual and uncertain information in texts is of essential importance in information extraction. This paper provides a general overview of the shared task, including the annotation protocols of the training and evaluation datasets, the exact task definitions, the evaluation metrics employed and the overall results. The paper concludes with an analysis of the prominent approaches and an overview of the systems submitted to the shared task.",TRUE,research problem
R145261,Natural Language Processing,R162920,GATE: an architecture for development of robust HLT applications,S649900,R162922,has research problem,R162965,develop and deploy language engineering components,"In this paper we present GATE, a framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated. The framework can be used to develop applications and resources in multiple languages, based on its thorough Unicode support.",TRUE,research problem
R145261,Natural Language Processing,R141026,IJCNLP-2017 Task 2: Dimensional Sentiment Analysis for Chinese Phrases,S563333,R141028,has research problem,R141029,Dimensional Sentiment Analysis,"This paper presents the IJCNLP 2017 shared task on Dimensional Sentiment Analysis for Chinese Phrases (DSAP) which seeks to identify a real-value sentiment score of Chinese single words and multi-word phrases in both the valence and arousal dimensions. Valence represents the degree of pleasant and unpleasant (or positive and negative) feelings, and arousal represents the degree of excitement and calm. Of the 19 teams registered for this shared task for two-dimensional sentiment analysis, 13 submitted results. We expected that this evaluation campaign could produce more advanced dimensional sentiment analysis techniques, especially for Chinese affective computing. All data sets with gold standards and scoring script are made publicly available to researchers.",TRUE,research problem
R145261,Natural Language Processing,R162474,Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task,S686687,R172005,has research problem,R162478,disease named entity recognition (DNER),"Manually curating chemicals, diseases and their relationships is significantly important to biomedical research, but it is plagued by its high cost and the rapid growth of the biomedical literature. In recent years, there has been a growing interest in developing computational approaches for automatic chemical-disease relation (CDR) extraction. Despite these attempts, the lack of a comprehensive benchmarking dataset has limited the comparison of different techniques in order to assess and advance the current state-of-the-art. To this end, we organized a challenge task through BioCreative V to automatically extract CDRs from the literature. We designed two challenge tasks: disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. To assist system development and assessment, we created a large annotated text corpus that consisted of human annotations of chemicals, diseases and their interactions from 1500 PubMed articles. 34 teams worldwide participated in the CDR task: 16 (DNER) and 18 (CID). The best systems achieved an F-score of 86.46% for the DNER task—a result that approaches the human inter-annotator agreement (0.8875)—and an F-score of 57.03% for the CID task, the highest results ever reported for such tasks. When combining team results via machine learning, the ensemble system was able to further improve over the best team results by achieving 88.89% and 62.80% in F-score for the DNER and CID task, respectively. Additionally, another novel aspect of our evaluation is to test each participating system’s ability to return real-time results: the average response time for each team’s DNER and CID web service systems were 5.6 and 9.3 s, respectively. Most teams used hybrid systems for their submissions based on machine learning. Given the level of participation and results, we found our task to be successful in engaging the text-mining research community, producing a large annotated corpus and improving the results of automatic disease recognition and CDR extraction. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,research problem
R145261,Natural Language Processing,R147638,Identifying used methods and datasets in scientific publications,S592309,R147640,has research problem,R147644,Domain-specific named entity recognition,"Although it has become common to assess publications and researchers by means of their citation count (e.g., using the h-index), measuring the impact of scientific methods and datasets (e.g., using an “h-index for datasets”) has been performed only to a limited extent. This is not surprising because the usage information of methods and datasets is typically not explicitly provided by the authors, but hidden in a publication’s text. In this paper, we propose an approach to identifying methods and datasets in texts that have actually been used by the authors. Our approach first recognizes datasets and methods in the text by means of a domain-specific named entity recognition method with minimal human interaction. It then classifies these mentions into used vs. non-used based on the textual contexts. The obtained labels are aggregated on the document level and integrated into the Microsoft Academic Knowledge Graph modeling publications’ metadata. In experiments based on the Microsoft Academic Graph, we show that both method and dataset mentions can be identified and correctly classified with respect to their usage to a high degree. Overall, our approach facilitates method and dataset recommendation, enhanced paper recommendation, and scientific impact quantification. It can be extended in such a way that it can identify mentions of any entity type (e.g., task).",TRUE,research problem
R145261,Natural Language Processing,R145803,End-to-end Neural Coreference Resolution,S583856,R145805,has research problem,R145806,End-to-end coreference resolution,"We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The key idea is to directly consider all spans in a document as potential mentions and learn distributions over possible antecedents for each. The model computes span embeddings that combine context-dependent boundary representations with a head-finding attention mechanism. It is trained to maximize the marginal likelihood of gold antecedent spans from coreference clusters and is factored to enable aggressive pruning of potential mentions. Experiments demonstrate state-of-the-art performance, with a gain of 1.5 F1 on the OntoNotes benchmark and by 3.1 F1 using a 5-model ensemble, despite the fact that this is the first approach to be successfully trained with no external resources.",TRUE,research problem
R145261,Natural Language Processing,R145798,End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures,S583839,R145800,has research problem,R38192,End-to-end Relation Extraction,"We present a novel end-to-end neural model to extract entities and relations between them. Our recurrent neural network based model captures both word sequence and dependency tree substructure information by stacking bidirectional tree-structured LSTM-RNNs on bidirectional sequential LSTM-RNNs. This allows our model to jointly represent both entities and relations with shared parameters in a single model. We further encourage detection of entities during training and use of entity information in relation extraction via entity pretraining and scheduled sampling. Our model improves over the state-of-the-art feature-based model on end-to-end relation extraction, achieving 12.1% and 5.7% relative error reductions in F1-score on ACE2005 and ACE2004, respectively. We also show that our LSTM-RNN based model compares favorably to the state-of-the-art CNN based model (in F1-score) on nominal relation classification (SemEval-2010 Task 8). Finally, we present an extensive ablation analysis of several model components.",TRUE,research problem
R145261,Natural Language Processing,R162924,HeidelTime: High Quality Rule-Based Extraction and Normalization of Temporal Expressions,S649909,R162926,has research problem,R162966,extraction and normalization of temporal expressions,"In this paper, we describe HeidelTime, a system for the extraction and normalization of temporal expressions. HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization. In the TempEval-2 challenge, HeidelTime achieved the highest F-Score (86%) for the extraction and the best results in assigning the correct value attribute, i.e., in understanding the semantics of the temporal expressions.",TRUE,research problem
R145261,Natural Language Processing,R162342,Overview of BioCreAtIvE: critical assessment of information extraction for biology,S647519,R162344,has research problem,R162345,Extraction of gene or protein names,"Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28–31, 2004. The articles collected in this BMC Bioinformatics supplement entitled ""A critical assessment of text mining methods in molecular biology"" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.",TRUE,research problem
R145261,Natural Language Processing,R162342,Overview of BioCreAtIvE: critical assessment of information extraction for biology,S647523,R162344,has research problem,R162348,gene name finding and normalization,"Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28–31, 2004. The articles collected in this BMC Bioinformatics supplement entitled ""A critical assessment of text mining methods in molecular biology"" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.",TRUE,research problem
R145261,Natural Language Processing,R141092,DSTC7 Task 1: Noetic End-to-End Response Selection,S563663,R141094,has research problem,R116510,Goal-Oriented Dialogue Systems,"Goal-oriented dialogue in complex domains is an extremely challenging problem and there are relatively few datasets. This task provided two new resources that presented different challenges: one was focused but small, while the other was large but diverse. We also considered several new variations on the next utterance selection problem: (1) increasing the number of candidates, (2) including paraphrases, and (3) not including a correct option in the candidate set. Twenty teams participated, developing a range of neural network models, including some that successfully incorporated external data to boost performance. Both datasets have been publicly released, enabling future work to build on these results, working towards robust goal-oriented dialogue systems.",TRUE,research problem
R145261,Natural Language Processing,R76157,SemEval-2020 Task 3: Graded Word Similarity in Context,S534890,R76159,has research problem,R76161,Graded Word Similarity in Context,"This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.",TRUE,research problem
R145261,Natural Language Processing,R147657,Concept-based analysis of scientific literature,S592387,R147659,has research problem,R147662,Identifying and categorizing mentions of concepts,"This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques.",TRUE,research problem
R145261,Natural Language Processing,R163319,Overview of the Infectious Diseases (ID) task of BioNLP Shared Task 2011,S651264,R163321,has research problem,R163322,Infectious Diseases (ID) information extraction task,"This paper presents the preparation, resources, results and analysis of the Infectious Diseases (ID) information extraction task, a main task of the BioNLP Shared Task 2011. The ID task represents an application and extension of the BioNLP'09 shared task event extraction approach to full papers on infectious diseases. Seven teams submitted final results to the task, with the highest-performing system achieving 56% F-score in the full task, comparable to state-of-the-art performance in the established BioNLP'09 task. The results indicate that event extraction methods generalize well to new domains and full-text publications and are applicable to the extraction of events relevant to the molecular mechanisms of infectious diseases.",TRUE,research problem
R145261,Natural Language Processing,R69288,"Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction",S583761,R69289,has research problem,R74030,Information Extraction,"We introduce a multi-task setup of identifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks and develop a unified framework called SciIE with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.",TRUE,research problem
R145261,Natural Language Processing,R162342,Overview of BioCreAtIvE: critical assessment of information extraction for biology,S647518,R162344,has research problem,R74030,Information Extraction,"Abstract Background The goal of the first BioCreAtIvE challenge (Critical Assessment of Information Extraction in Biology) was to provide a set of common evaluation tasks to assess the state of the art for text mining applied to biological problems. The results were presented in a workshop held in Granada, Spain March 28–31, 2004. The articles collected in this BMC Bioinformatics supplement entitled ""A critical assessment of text mining methods in molecular biology"" describe the BioCreAtIvE tasks, systems, results and their independent evaluation. Results BioCreAtIvE focused on two tasks. The first dealt with extraction of gene or protein names from text, and their mapping into standardized gene identifiers for three model organism databases (fly, mouse, yeast). The second task addressed issues of functional annotation, requiring systems to identify specific text passages that supported Gene Ontology annotations for specific proteins, given full text articles. Conclusion The first BioCreAtIvE assessment achieved a high level of international participation (27 groups from 10 countries). The assessment provided state-of-the-art performance results for a basic task (gene name finding and normalization), where the best systems achieved a balanced 80% precision / recall or better, which potentially makes them suitable for real applications in biology. The results for the advanced task (functional annotation from free text) were significantly lower, demonstrating the current limitations of text-mining approaches where knowledge extrapolation and interpretation are required. In addition, an important contribution of BioCreAtIvE has been the creation and release of training and test data sets for both tasks. There are 22 articles in this special issue, including six that provide analyses of results or data quality for the data sets, including a novel inter-annotator consistency assessment for the test set used in task 2.",TRUE,research problem
R145261,Natural Language Processing,R141018,RDoC Task at BioNLP-OST 2019,S583249,R145658,has research problem,R128068,Information Retrieval,"BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. For BioNLP-OST 2019, we introduced a new mental health informatics task called “RDoC Task”, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health’s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.",TRUE,research problem
R145261,Natural Language Processing,R69291,The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods,S587257,R69292,has research problem,R76427,Language resource,"This paper introduces the ACL Reference Dataset for Terminology Extraction and Classification, version 2.0 (ACL RD-TEC 2.0). The ACL RD-TEC 2.0 has been developed with the aim of providing a benchmark for the evaluation of term and entity recognition tasks based on specialised text from the computational linguistics domain. This release of the corpus consists of 300 abstracts from articles in the ACL Anthology Reference Corpus, published between 1978–2006. In these abstracts, terms (i.e., single or multi-word lexical units with a specialised meaning) are manually annotated. In addition to their boundaries in running text, annotated terms are classified into one of the seven categories method, tool, language resource (LR), LR product, model, measures and measurements, and other. To assess the quality of the annotations and to determine the difficulty of this annotation task, more than 171 of the abstracts are annotated twice, independently, by each of the two annotators. In total, 6,818 terms are identified and annotated in more than 1300 sentences, resulting in a specialised vocabulary made of 3,318 lexical forms, mapped to 3,471 concepts. We explain the development of the annotation guidelines and discuss some of the challenges we encountered in this annotation task.",TRUE,research problem
R145261,Natural Language Processing,R141018,RDoC Task at BioNLP-OST 2019,S563320,R141020,has research problem,R141025,Mental Health Informatics,"BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. For BioNLP-OST 2019, we introduced a new mental health informatics task called “RDoC Task”, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health’s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.",TRUE,research problem
R145261,Natural Language Processing,R163186,WikiNEuRal: Combined Neural and Knowledge-based Silver Data Creation for Multilingual NER,S650644,R163188,has research problem,R163133,Multilingual named entity recognition,"Multilingual Named Entity Recognition (NER) is a key intermediate task which is needed in many areas of NLP. In this paper, we address the well-known issue of data scarcity in NER, especially relevant when moving to a multilingual scenario, and go beyond current approaches to the creation of multilingual silver data for the task. We exploit the texts of Wikipedia and introduce a new methodology based on the effective combination of knowledge-based approaches and neural models, together with a novel domain adaptation technique, to produce high-quality training corpora for NER. We evaluate our datasets extensively on standard benchmarks for NER, yielding substantial improvements of up to 6 span-based F1-score points over previous state-of-the-art systems for data creation.",TRUE,research problem
R145261,Natural Language Processing,R141070,Named Entity Recognition on Code-Switched Data: Overview of the CALCS 2018 Shared Task,S563620,R141072,has research problem,R69806,Named Entity Recognition,"In the third shared task of the Computational Approaches to Linguistic Code-Switching (CALCS) workshop, we focus on Named Entity Recognition (NER) on code-switched social-media data. We divide the shared task into two competitions based on the English-Spanish (ENG-SPA) and Modern Standard Arabic-Egyptian (MSA-EGY) language pairs. We use Twitter data and 9 entity types to establish a new dataset for code-switched NER benchmarks. In addition to the CS phenomenon, the diversity of the entities and the social media challenges make the task considerably hard to process. As a result, the best scores of the competitions are 63.76% and 71.61% for ENG-SPA and MSA-EGY, respectively. We present the scores of 9 participants and discuss the most common challenges among submissions.",TRUE,research problem
R145261,Natural Language Processing,R163050,Named Entity Recognition in Wikipedia,S650182,R163052,has research problem,R69806,Named Entity Recognition,"Named entity recognition (NER) is used in many domains beyond the newswire text that comprises current gold-standard corpora. Recent work has used Wikipedia's link structure to automatically generate near gold-standard annotations. Until now, these resources have only been evaluated on newswire corpora or themselves. We present the first NER evaluation on a Wikipedia gold standard (WG) corpus. Our analysis of cross-corpus performance on WG shows that Wikipedia text may be a harder NER domain than newswire. We find that an automatic annotation of Wikipedia has high agreement with WG and, when used as training data, outperforms newswire models by up to 7.7%.",TRUE,research problem
R145261,Natural Language Processing,R163747,CrossNER: Evaluating Cross-Domain Named Entity Recognition,S654005,R163789,has research problem,R69806,Named Entity Recognition,"Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at this https URL.",TRUE,research problem
R145261,Natural Language Processing,R164317,Named Entity Recognition for Bacterial Type IV Secretion Systems,S655939,R164318,has research problem,R69806,Named Entity Recognition,"Research on specialized biological systems is often hampered by a lack of consistent terminology, especially across species. In bacterial Type IV secretion systems genes within one set of orthologs may have over a dozen different names. Classifying research publications based on biological processes, cellular components, molecular functions, and microorganism species should improve the precision and recall of literature searches allowing researchers to keep up with the exponentially growing literature, through resources such as the Pathosystems Resource Integration Center (PATRIC, patricbrc.org). We developed named entity recognition (NER) tools for four entities related to Type IV secretion systems: 1) bacteria names, 2) biological processes, 3) molecular functions, and 4) cellular components. These four entities are important to pathogenesis and virulence research but have received less attention than other entities, e.g., genes and proteins. Based on an annotated corpus, large domain terminological resources, and machine learning techniques, we developed recognizers for these entities. High accuracy rates (>80%) are achieved for bacteria, biological processes, and molecular function. Contrastive experiments highlighted the effectiveness of alternate recognition strategies; results of term extraction on contrasting document sets demonstrated the utility of these classes for identifying T4SS-related documents.",TRUE,research problem
R145261,Natural Language Processing,R165975,Named Entity Recognition for Astronomy Literature,S661482,R165977,has research problem,R69806,Named Entity Recognition,"We present a system for named entity recognition (NER) in astronomy journal articles. We have developed this system on an NE corpus comprising approximately 200,000 words of text from astronomy articles. These have been manually annotated with ∼40 entity types of interest to astronomers. We report on the challenges involved in extracting the corpus, defining entity classes and annotating scientific text. We investigate which features of an existing state-of-the-art Maximum Entropy approach perform well on astronomy text. Our system achieves an F-score of 87.8%.",TRUE,research problem
R145261,Natural Language Processing,R166178,Exploiting Wikipedia as external knowledge for named entity recognition,S661875,R166180,has research problem,R69806,Named Entity Recognition,"We explore the use of Wikipedia as external knowledge to improve named entity recognition (NER). Our method retrieves the corresponding Wikipedia entry for each candidate word sequence and extracts a category label from the first sentence of the entry, which can be thought of as a definition part. These category labels are used as features in a CRF-based NE tagger. We demonstrate using the CoNLL 2003 dataset that the Wikipedia category labels extracted by such a simple method actually improve the accuracy of NER.",TRUE,research problem
R145261,Natural Language Processing,R166235,WEXEA: Wikipedia EXhaustive Entity Annotation,S662011,R166237,has research problem,R69806,Named Entity Recognition,"Building predictive models for information extraction from text, such as named entity recognition or the extraction of semantic relationships between named entities in text, requires a large corpus of annotated text. Wikipedia is often used as a corpus for these tasks where the annotation is a named entity linked by a hyperlink to its article. However, editors on Wikipedia are only expected to link these mentions in order to help the reader to understand the content, but are discouraged from adding links that do not add any benefit for understanding an article. Therefore, many mentions of popular entities (such as countries or popular events in history), or previously linked articles, as well as the article’s entity itself, are not linked. In this paper, we discuss WEXEA, a Wikipedia EXhaustive Entity Annotation system, to create a text corpus based on Wikipedia with exhaustive annotations of entity mentions, i.e. linking all mentions of entities to their corresponding articles. This results in a huge potential for additional annotations that can be used for downstream NLP tasks, such as Relation Extraction. We show that our annotations are useful for creating distantly supervised datasets for this task. Furthermore, we publish all code necessary to derive a corpus from a raw Wikipedia dump, so that it can be reproduced by everyone.",TRUE,research problem
R145261,Natural Language Processing,R172408,A Survey on Recent Advances in Named Entity Recognition from Deep Learning models,S687935,R172410,has research problem,R69806,Named Entity Recognition,"Named Entity Recognition (NER) is a key component in NLP systems for question answering, information retrieval, relation extraction, etc. NER systems have been studied and developed widely for decades, but accurate systems using deep neural networks (NN) have only been introduced in the last few years. We present a comprehensive survey of deep neural network architectures for NER, and contrast them with previous approaches to NER based on feature engineering and other supervised or semi-supervised learning algorithms. Our results highlight the improvements achieved by neural networks, and show how incorporating some of the lessons learned from past work on feature-based NER systems can yield further improvements.",TRUE,research problem
R145261,Natural Language Processing,R172672,Named Entity Recognition with Bidirectional LSTM-CNNs,S689039,R172674,has research problem,R69806,Named Entity Recognition,"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",TRUE,research problem
R145261,Natural Language Processing,R184238,A Survey on Recent Advances in Named Entity Recognition from Deep Learning models,S707562,R184240,has research problem,R69806,Named Entity Recognition,"Named Entity Recognition (NER) is a key component in NLP systems for question answering, information retrieval, relation extraction, etc. NER systems have been studied and developed widely for decades, but accurate systems using deep neural networks (NN) have only been introduced in the last few years. We present a comprehensive survey of deep neural network architectures for NER, and contrast them with previous approaches to NER based on feature engineering and other supervised or semi-supervised learning algorithms. Our results highlight the improvements achieved by neural networks, and show how incorporating some of the lessons learned from past work on feature-based NER systems can yield further improvements.",TRUE,research problem
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S587992,R146855,has research problem,R146857,N-ary relation identification from scientific articles,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,research problem
R145261,Natural Language Processing,R146670,ParsCit: an Open-source CRF Reference String Parsing Package,S587237,R146672,has research problem,R146674,Open-source implementation,"We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",TRUE,research problem
R145261,Natural Language Processing,R163499,Overview of the Pathway Curation (PC) task of BioNLP Shared Task 2013,S652845,R163501,has research problem,R163541,Pathway Curation (PC) task,"We present the Pathway Curation (PC) task, a main event extraction task of the BioNLP shared task (ST) 2013. The PC task concerns the automatic extraction of biomolecular reactions from text. The task setting, representation and semantics are defined with respect to pathway model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance to specific model reactions. Two BioNLP ST 2013 participants successfully completed the PC task. The highest achieved Fscore, 52.8%, indicates that event extraction is a promising approach to supporting pathway curation efforts. The PC task continues as an open challenge with data, resources and tools available from http://2013.bionlp-st.org/",TRUE,research problem
R145261,Natural Language Processing,R141066,"CLPsych 2018 Shared Task: Predicting Current and Future Psychological Health from Childhood Essays",S563479,R141068,has research problem,R141069,Predicting Current and Future Psychological Health,"We describe the shared task for the CLPsych 2018 workshop, which focused on predicting current and future psychological health from an essay authored in childhood. Language-based predictions of a person’s current health have the potential to supplement traditional psychological assessment such as questionnaires, improving intake risk measurement and monitoring. Predictions of future psychological health can aid with both early detection and the development of preventative care. Research into the mental health trajectory of people, beginning from their childhood, has thus far been an area of little work within the NLP community. This shared task represents one of the first attempts to evaluate the use of early language to predict future health; this has the potential to support a wide variety of clinical health care tasks, from early assessment of lifetime risk for mental health problems, to optimal timing for targeted interventions aimed at both prevention and treatment.",TRUE,research problem
R145261,Natural Language Processing,R147106,"SQuAD: 100,000+ Questions for Machine Comprehension of Text",S589217,R147108,has research problem,R9143,Question Answering ,"We present the Stanford Question Answering Dataset (SQuAD), a new reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage. We analyze the dataset to understand the types of reasoning required to answer the questions, leaning heavily on dependency and constituency trees. We build a strong logistic regression model, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%). However, human performance (86.8%) is much higher, indicating that the dataset presents a good challenge problem for future research. The dataset is freely available at this https URL",TRUE,research problem
R145261,Natural Language Processing,R147113,MS MARCO: A Human Generated MAchine Reading COmprehension Dataset,S589269,R147115,has research problem,R9143,Question Answering ,"This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality. We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.",TRUE,research problem
R145261,Natural Language Processing,R147125,WWW'18 Open Challenge: Financial Opinion Mining and Question Answering,S589364,R147127,has research problem,R9143,Question Answering ,"The growing maturity of Natural Language Processing (NLP) techniques and resources is dramatically changing the landscape of many application domains which are dependent on the analysis of unstructured data at scale. The finance domain, with its reliance on the interpretation of multiple unstructured and structured data sources and its demand for fast and comprehensive decision making is already emerging as a primary ground for the experimentation of NLP, Web Mining and Information Retrieval (IR) techniques for the automatic analysis of financial news and opinions online. This challenge focuses on advancing the state-of-the-art of aspect-based sentiment analysis and opinion-based Question Answering for the financial domain.",TRUE,research problem
R145261,Natural Language Processing,R147129,A Hierarchical Attention Retrieval Model for Healthcare Question Answering,S589382,R147131,has research problem,R9143,Question Answering ,"The growth of the Web in recent years has resulted in the development of various online platforms that provide healthcare information services. These platforms contain an enormous amount of information, which could be beneficial for a large number of people. However, navigating through such knowledgebases to answer specific queries of healthcare consumers is a challenging task. A majority of such queries might be non-factoid in nature, and hence, traditional keyword-based retrieval models do not work well for such cases. Furthermore, in many scenarios, it might be desirable to get a short answer that sufficiently answers the query, instead of a long document with only a small amount of useful information. In this paper, we propose a neural network model for ranking documents for question answering in the healthcare domain. The proposed model uses a deep attention mechanism at word, sentence, and document levels, for efficient retrieval for both factoid and non-factoid queries, on documents of varied lengths. Specifically, the word-level cross-attention allows the model to identify words that might be most relevant for a query, and the hierarchical attention at sentence and document levels allows it to do effective retrieval on both long and short documents. We also construct a new large-scale healthcare question-answering dataset, which we use to evaluate our model. Experimental evaluation results against several state-of-the-art baselines show that our model outperforms the existing retrieval techniques.",TRUE,research problem
R145261,Natural Language Processing,R154297,Question Answering Benchmarks for Wikidata,S626949,R154298,has research problem,R9143,Question Answering ,"Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.",TRUE,research problem
R145261,Natural Language Processing,R156119,Question Answering Benchmarks for Wikidata,S626959,R156120,has research problem,R9143,Question Answering ,"Wikidata is becoming an increasingly important knowledge base whose usage is spreading in the research community. However, most question answering systems evaluation datasets rely on Freebase or DBpedia. We present two new datasets in order to train and benchmark QA systems over Wikidata. The first is a translation of the popular SimpleQuestions dataset to Wikidata, the second is a dataset created by collecting user feedbacks.",TRUE,research problem
R145261,Natural Language Processing,R142108,SemEval-2013 Task 7: The Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge,S570962,R142112,has research problem,R140632,Recognizing Textual Entailment,"We present the results of the Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge, aiming to bring together researchers in educational NLP technology and textual entailment. The task of giving feedback on student answers requires semantic inference and therefore is related to recognizing textual entailment. Thus, we offered to the community a 5-way student response labeling task, as well as 3-way and 2way RTE-style tasks on educational data. In addition, a partial entailment task was piloted. We present and compare results from 9 participating teams, and discuss future directions.",TRUE,research problem
R145261,Natural Language Processing,R146670,ParsCit: an Open-source CRF Reference String Parsing Package,S587236,R146672,has research problem,R146673,Reference string parsing,"We describe ParsCit, a freely available, open-source implementation of a reference string parsing package. At the core of ParsCit is a trained conditional random field (CRF) model used to label the token sequences in the reference string. A heuristic model wraps this core with added functionality to identify reference strings from a plain text file, and to retrieve the citation contexts. The package comes with utilities to run it as a web service or as a standalone utility. We compare ParsCit on three distinct reference string datasets and show that it compares well with other previously published work.",TRUE,research problem
R145261,Natural Language Processing,R164050,Static Relations: a Piece in the Biomedical Information Extraction Puzzle,S655136,R164052,has research problem,R116569,Relation Extraction,"We propose a static relation extraction task to complement biomedical information extraction approaches. We argue that static relations such as part-whole are implicitly involved in many common extraction settings, define a task setting making them explicit, and discuss their integration into previously proposed tasks and extraction methods. We further identify a specific static relation extraction task motivated by the BioNLP'09 shared task on event extraction, introduce an annotated corpus for the task, and demonstrate the feasibility of the task by experiments showing that the defined relations can be reliably extracted. The task setting and corpus can serve to support several forms of domain information extraction.",TRUE,research problem
R145261,Natural Language Processing,R161742,Relationship extraction for knowledge graph creation from biomedical literature,S646018,R161744,has research problem,R161745,Relationship extraction,"Biomedical research is growing at such an exponential pace that scientists, researchers, and practitioners are no more able to cope with the amount of published literature in the domain. The knowledge presented in the literature needs to be systematized in such a way that claims and hypotheses can be easily found, accessed, and validated. Knowledge graphs can provide such a framework for semantic knowledge representation from literature. However, in order to build a knowledge graph, it is necessary to extract knowledge as relationships between biomedical entities and normalize both entities and relationship types. In this paper, we present and compare a few rule-based and machine learning-based (Naive Bayes, Random Forests as examples of traditional machine learning methods and DistilBERT and T5-based models as examples of modern deep learning transformers) methods for scalable relationship extraction from biomedical literature, and for the integration into the knowledge graphs. We examine how resilient are these various methods to unbalanced and fairly small datasets, showing that transformer-based models handle well both small datasets, due to pre-training on large C4 dataset, as well as unbalanced data. The best performing model was the DistilBERT-based model fine-tuned on balanced data, with a reported F1-score of 0.89.",TRUE,research problem
R145261,Natural Language Processing,R182418,SPECTER: Document-level Representation Learning using Citation-informed Transformers,S705833,R182420,has research problem,R125056,Representation Learning,"Representation learning is a critical ingredient for natural language processing systems. Recent Transformer language models like BERT learn powerful textual representations, but these models are targeted towards token- and sentence-level training objectives and do not leverage information on inter-document relatedness, which limits their document-level representation power. For applications on scientific documents, such as classification and recommendation, accurate embeddings of documents are a necessity. We propose SPECTER, a new method to generate document-level embedding of scientific papers based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. Unlike existing pretrained language models, Specter can be easily applied to downstream applications without task-specific fine-tuning. Additionally, to encourage further research on document-level models, we introduce SciDocs, a new evaluation benchmark consisting of seven document-level tasks ranging from citation prediction, to document classification and recommendation. We show that Specter outperforms a variety of competitive baselines on the benchmark.",TRUE,research problem
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S587993,R146855,has research problem,R146858,salient entity identification,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,research problem
R145261,Natural Language Processing,R146357,The STEM-ECR Dataset: Grounding Scientific Entity References in STEM Scholarly Content to Authoritative Encyclopedic and Lexicographic Sources,S585980,R146359,has research problem,R146365,Scientific entity extraction,"We introduce the STEM (Science, Technology, Engineering, and Medicine) Dataset for Scientific Entity Extraction, Classification, and Resolution, version 1.0 (STEM-ECR v1.0). The STEM-ECR v1.0 dataset has been developed to provide a benchmark for the evaluation of scientific entity extraction, classification, and resolution tasks in a domain-independent fashion. It comprises abstracts in 10 STEM disciplines that were found to be the most prolific ones on a major publishing platform. We describe the creation of such a multidisciplinary corpus and highlight the obtained findings in terms of the following features: 1) a generic conceptual formalism for scientific entities in a multidisciplinary scientific context; 2) the feasibility of the domain-independent human annotation of scientific entities under such a generic formalism; 3) a performance benchmark obtainable for automatic extraction of multidisciplinary scientific entities using BERT-based neural models; 4) a delineated 3-step entity resolution procedure for human annotation of the scientific entities via encyclopedic entity linking and lexicographic word sense disambiguation; and 5) human evaluations of Babelfy returned encyclopedic links and lexicographic senses for our entities. Our findings cumulatively indicate that human annotation and automatic learning of multidisciplinary scientific concepts as well as their semantic disambiguation in a wide-ranging setting as STEM is reasonable.",TRUE,research problem
R145261,Natural Language Processing,R145757,SemEval-2018 Task 7: Semantic Relation Extraction and Classification in Scientific Papers,S583745,R145773,has research problem,R145768,Semantic Relation Extraction and Classification,"This paper describes the first task on semantic relation extraction and classification in scientific paper abstracts at SemEval 2018. The challenge focuses on domain-specific semantic relations and includes three different subtasks. The subtasks were designed so as to compare and quantify the effect of different pre-processing steps on the relation classification results. We expect the task to be relevant for a broad range of researchers working on extracting specialized knowledge from domain corpora, for example but not limited to scientific or bio-medical information extraction. The task attracted a total of 32 participants, with 158 submissions across different scenarios.",TRUE,research problem
R145261,Natural Language Processing,R141018,RDoC Task at BioNLP-OST 2019,S583250,R145659,has research problem,R145660,Sentence extraction,"BioNLP Open Shared Tasks (BioNLP-OST) is an international competition organized to facilitate development and sharing of computational tasks of biomedical text mining and solutions to them. For BioNLP-OST 2019, we introduced a new mental health informatics task called “RDoC Task”, which is composed of two subtasks: information retrieval and sentence extraction through National Institutes of Mental Health’s Research Domain Criteria framework. Five and four teams around the world participated in the two tasks, respectively. According to the performance on the two tasks, we observe that there is room for improvement for text mining on brain research and mental illness.",TRUE,research problem
R145261,Natural Language Processing,R172664,End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF,S689022,R172666,has research problem,R76358,Sequence labeling,"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55\% accuracy for POS tagging and 91.21\% F1 for NER.",TRUE,research problem
R145261,Natural Language Processing,R162400,Overview of the gene ontology task at BioCreative IV,S647924,R162402,has research problem,R162408,text retrieval,"Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.",TRUE,research problem
R145261,Natural Language Processing,R162526,Overview of the BioCreative VI text-mining services for Kinome Curation Track,S648479,R162528,has research problem,R162530,text-mining services for kinome curation,"Abstract The text-mining services for kinome curation track, part of BioCreative VI, proposed a competition to assess the effectiveness of text mining to perform literature triage. The track has exploited an unpublished curated data set from the neXtProt database. This data set contained comprehensive annotations for 300 human protein kinases. For a given protein and a given curation axis [diseases or gene ontology (GO) biological processes], participants’ systems had to identify and rank relevant articles in a collection of 5.2 M MEDLINE citations (task 1) or 530 000 full-text articles (task 2). Explored strategies comprised named-entity recognition and machine-learning frameworks. For that latter approach, participants developed methods to derive a set of negative instances, as the databases typically do not store articles that were judged as irrelevant by curators. The supervised approaches proposed by the participating groups achieved significant improvements compared to the baseline established in a previous study and compared to a basic PubMed search.",TRUE,research problem
R145261,Natural Language Processing,R141057,Overview of the Epigenetics and Post-translational Modifications (EPI) task of BioNLP Shared Task 2011,S651191,R141059,has research problem,R163300,The Epigenetics and Post-translational Modifications (EPI) task,"This paper presents the preparation, resources, results and analysis of the Epigenetics and Post-translational Modifications (EPI) task, a main task of the BioNLP Shared Task 2011. The task concerns the extraction of detailed representations of 14 protein and DNA modification events, the catalysis of these reactions, and the identification of instances of negated or speculatively stated event instances. Seven teams submitted final results to the EPI task in the shared task, with the highest-performing system achieving 53% F-score in the full task and 69% F-score in the extraction of a simplified set of core event arguments.",TRUE,research problem
R145261,Natural Language Processing,R141003,SemEval-2021 Task 5: Toxic Spans Detection,S563263,R141005,has research problem,R141006,Toxic Spans Detection,"The Toxic Spans Detection task of SemEval-2021 required participants to predict the spans of toxic posts that were responsible for the toxic label of the posts. The task could be addressed as supervised sequence labeling, using training data with gold toxic spans provided by the organisers. It could also be treated as rationale extraction, using classifiers trained on potentially larger external datasets of posts manually annotated as toxic or not, without toxic span annotations. For the supervised sequence labeling approach and evaluation purposes, posts previously labeled as toxic were crowd-annotated for toxic spans. Participants submitted their predicted spans for a held-out test set and were scored using character-based F1. This overview summarises the work of the 36 teams that provided system descriptions.",TRUE,research problem
R112,Numerical Analysis and Computation,R12164,Exact and Heuristic Methods for the Assembly Line Worker Assignment and Balancing Problem,S18348,R12165,has research problem,R12065,Assembly line worker assignment and balancing problem,"In traditional assembly lines, it is reasonable to assume that task execution times are the same for each worker. However, in sheltered work centres for disabled this assumption is not valid: some workers may execute some tasks considerably slower or even be incapable of executing them. Worker heterogeneity leads to a problem called the assembly line worker assignment and balancing problem (ALWABP). For a fixed number of workers the problem is to maximize the production rate of an assembly line by assigning workers to stations and tasks to workers, while satisfying precedence constraints between the tasks. This paper introduces new heuristic and exact methods to solve this problem. We present a new MIP model, propose a novel heuristic algorithm based on beam search, as well as a task-oriented branch-and-bound procedure which uses new reduction rules and lower bounds for solving the problem. Extensive computational tests on a large set of instances show that these methods are effective and improve over existing ones.",TRUE,research problem
R112,Numerical Analysis and Computation,R12174,Exact and Heuristic Methods for the Assembly Line Worker Assignment and Balancing Problem,S18365,R12175,has research problem,R12065,Assembly line worker assignment and balancing problem,"In traditional assembly lines, it is reasonable to assume that task execution times are the same for each worker. However, in sheltered work centres for disabled this assumption is not valid: some workers may execute some tasks considerably slower or even be incapable of executing them. Worker heterogeneity leads to a problem called the assembly line worker assignment and balancing problem (ALWABP). For a fixed number of workers the problem is to maximize the production rate of an assembly line by assigning workers to stations and tasks to workers, while satisfying precedence constraints between the tasks. This paper introduces new heuristic and exact methods to solve this problem. We present a new MIP model, propose a novel heuristic algorithm based on beam search, as well as a task-oriented branch-and-bound procedure which uses new reduction rules and lower bounds for solving the problem. Extensive computational tests on a large set of instances show that these methods are effective and improve over existing ones.",TRUE,research problem
R112,Numerical Analysis and Computation,R12016,Solving Mixed Model Workplace Time-dependent Assembly Line Balancing Problem with FSS Algorithm,S18172,R12018,has research problem,R12066,Optimization problem,"Balancing assembly lines, a family of optimization problems commonly known as Assembly Line Balancing Problem, is notoriously NP-Hard. They comprise a set of problems of enormous practical interest to manufacturing industry due to the relevant frequency of this type of production paradigm. For this reason, many researchers on Computational Intelligence and Industrial Engineering have been conceiving algorithms for tackling different versions of assembly line balancing problems utilizing different methodologies. In this article, it was proposed a problem version referred as Mixed Model Workplace Time-dependent Assembly Line Balancing Problem with the intention of including pressing issues of real assembly lines in the optimization problem, to which four versions were conceived. Heuristic search procedures were used, namely two Swarm Intelligence algorithms from the Fish School Search family: the original version, named ""vanilla"", and a special variation including a stagnation avoidance routine. Either approaches solved the newly posed problem achieving good results when compared to Particle Swarm Optimization algorithm.",TRUE,research problem
R112,Numerical Analysis and Computation,R12192,Iterative Beam Search for Simple Assembly Line Balancing with a Fixed Number of Work Stations,S18414,R12193,has research problem,R12064,Simple assembly line balancing problem (SALBP),"The simple assembly line balancing problem (SALBP) concern s the assignment of tasks with pre-defined processing times to work stations that are arran ged in a line. Hereby, precedence constraints between the tasks must be respected. The optimi zation goal of the SALBP-2 variant of the problem concerns the minimization of the so-called cy cle time, that is, the time in which the tasks of each work station must be completed. In this work we p ropose to tackle this problem with an iterative search method based on beam search. The propose d algorithm is able to generate optimal solutions, respectively the best upper bounds, for 283 out of 302 test cases. Moreover, for 9 further test cases the algorithm is able to improve the c urrently best upper bounds. These numbers indicate that the proposed iterative beam search al gorithm is currently a state-of-the-art method for the SALBP-2",TRUE,research problem
R112,Numerical Analysis and Computation,R12072,Assembly line balancing with task division,S18198,R12073,has research problem,R12074,Task division assembly line balancing problem,"In a commonly-used version of the Simple Assembly Line Balancing Problem (SALBP-1) tasks are assigned to stations along an assembly line with a fixed cycle time in order to minimize the required number of stations. It has traditionally been assumed that the total work needed for each product unit has been partitioned into economically indivisible tasks. However, in practice, it is sometimes possible to divide particular tasks in limited ways at additional time penalty cost. Despite the penalties, task division where possible, now and then leads to a reduction in the minimum number of stations. Deciding which allowable tasks to divide creates a new assembly line balancing problem, TDALBP (Task Division Assembly Line Balancing Problem). We propose a mathematical model of the TDALBP, an exact solution procedure for it and present promising computational results for the adaptation of some classical SALBP instances from the research literature. The results demonstrate that the TDALBP sometimes has the potential to significantly improve assembly line performance.",TRUE,research problem
R137,Numerical Analysis/Scientific Computing,R109703,Simulation of Severe Accident Progression Using ROSHNI: A New Integrated Simulation Code for PHWR Severe Accidents,S500830,R109705,has research problem,R109710,Fission product release,"As analysts still grapple with understanding core damage accident progression at Three Mile Island and Fukushima, which caught the nuclear industry off-guard once too many times, one notices the very limited detail with which the large reactor cores of these subject reactors have been modelled in their severe accident simulation code packages. At the same time, modelling of CANDU severe accidents has largely borrowed from and suffered from the limitations of the same LWR codes (see IAEA TECDOC 1727) whose applications to PHWRs have poorly caught critical PHWR design specifics and vulnerabilities. As a result, accident management measures instituted at CANDU PHWRs meet the important industry objective of publicly seeming to be doing something about lessons learnt from, say, Fukushima, and of showing that the reactor designs are oh so close to perfect and the off-site consequences of severe accidents happily benign. The integrated PHWR severe accident progression and consequence assessment code ROSHNI can make a significant contribution to actual, practical understanding of severe accident progression in CANDU PHWRs, improving significantly on the other PHWR specific computer codes developed three decades ago when modeling decisions were constrained by limited computing power and poor understanding of and interest in severe core damage accidents. These codes force gross simplifications in reactor core modelling and do not adequately represent all the right CANDU core details, materials, fluids, vessels or phenomena. But they produce results that are familiar and palatable. 
They do, however, to their credit, also excel in their computational speed, largely because they model and compute so little and with such unnecessary simplifications. ROSHNI sheds most previous modelling simplifications and represents each of the 380 channels and 4560 bundles, with their 37-element, four-concentric-ring, Zircaloy-clad fuel geometry, materials and fluids, more faithfully in a 2000 MW(Th) CANDU6 reactor. It can be used easily for other PHWRs with different numbers of fuel channels and bundles per channel. Each of the horizontal PHWR reactor channels, with all their bundles, fuel rings, sheaths, appendages, end fittings and feeders, is modelled in detail that reflects large across-core differences. While other codes model at best a few hundred core fuel entities, thermo-chemical transient behaviour of about 73,000 different fuel channel entities within the core is considered by ROSHNI simultaneously, along with 15,000 or so other flow path segments. At each location all known thermo-chemical and hydraulic phenomena are computed. With such detail, ROSHNI is able to provide information on their progressive and parallel thermo-chemical contribution to accident progression and a more realistic fission product release source term that would belie the minuscule one (100 TBq of Cs-137 or 0.15% of core inventory) used by EMOs now in Canada on recommendation of our national regulator CNSC. ROSHNI has an advanced, more CANDU specific consideration of each bundle transitioning to a solid debris behaviour in the Calandria vessel without reverting to a simplified molten corium formulation that happily ignores interaction of debris with vessel welds, further vessel failures and energetic interactions. The code is able to follow behaviour of each fuel bundle following its disassembly from the fuel channel and thus demonstrate that the gross assumption of a core collapse made in some analyses is wrong and misleading. 
It is thus able to demonstrate that PHWR core disassembly is not only gradual, it will also be incomplete, with a large number of low power, peripheral fuel channels never disassembling under most credible scenarios. The code is designed to grow into and use its voluminous results in a severe accident simulator for operator training. Its phenomenological models are able to examine design inadequacies / issues that affect accident progression and several simple-to-implement design improvements that have a profound effect on results. For example, an early pressure boundary failure due to inadequacy of heat sinks in a station blackout scenario can be examined along with the effect of improved and adequate over pressure protection. A best effort code such as ROSHNI can be instrumental in identifying the risk reduction benefits of undertaking certain design, operational and accident management improvements for PHWRs, with some of the multi-unit ones handicapped by poor pressurizer placement and leaky containments with vulnerable materials, poor overpressure protection, ad-hoc mitigation measures and limited instrumentation common to all CANDUs. Case in point is the PSA supported design and installed number of Hydrogen recombiners that are suited neither to the right gas (designed mysteriously for H2 instead of D2) nor to its potential release quantity (they are sparse and will cause explosions). The paper presents ROSHNI results of simulations of a postulated station blackout scenario and sheds light on the challenges ahead in minimizing risk from operation of these otherwise unique power reactors.",TRUE,research problem
R272,"Operations Research, Systems Engineering and Industrial Engineering",R185255,Supply chain network design under the risk of uncertain disruptions,S709531,R185257,has research problem,R185258,Supply chain,"Facility disruptions in the supply chain often lead to catastrophic consequences, although they occur rarely. The low frequency and non-repeatability of disruptive events also make it impossible to estimate the disruption probability accurately. Therefore, we construct an uncertain programming model to design the three-echelon supply chain network with the disruption risk, in which disruptions are considered as uncertain events. Under the constraint of satisfying customer demands, the model optimises the selection of retailers with uncertain disruptions and the assignment of customers and retailers, in order to minimise the expected total cost of network design. In addition, we simplify the proposed model by analysing its properties and further linearise the simplified model. A Lagrangian relaxation algorithm for the linearised model and a genetic algorithm for the simplified model are developed to solve medium-scale problems and large-scale problems, respectively. Finally, we illustrate the effectiveness of proposed models and algorithms through several numerical examples.",TRUE,research problem
R129,Organic Chemistry,R138577,Solvent-Free Chelation-Assisted Catalytic C-C Bond Cleavage of Unstrained Ketone by Rhodium(I) Complexes under Microwave Irradiation,S550513,R138579,has research problem,R138572,C-C bond cleavage,A highly efficient C-C bond cleavage of unstrained aliphatic ketones bearing β-hydrogens with olefins was achieved using a chelation-assisted catalytic system consisting of (Ph 3 P) 3 RhCl and 2-amino-3-picoline by microwave irradiation under solvent-free conditions. The addition of cyclohexylamine catalyst accelerated the reaction rate dramatically under microwave irradiation compared with the classical heating method.,TRUE,research problem
R11,Science,R31670,Less is more: Active learning with support vector machines,S106109,R31676,has research problem,R25028,Active learning,"We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. We observe a number of benefits, the most surprising of which is that a SVM trained on a wellchosen subset of the available corpus frequently performs better than one trained on all available data. The heuristic for choosing this subset is simple to compute, and makes no use of information about the test set. Given that the training time of SVMs depends heavily on the training set size, our heuristic not only offers better performance with fewer data, it frequently does so in less time than the naive approach of training on all available data.",TRUE,research problem
R11,Science,R31672,Support vector machine active learning with applications to text classification,S106090,R31673,has research problem,R25028,Active learning,"Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.",TRUE,research problem
R11,Science,R31677,Active learning using adaptive resampling,S106127,R31678,has research problem,R25028,Active learning,"Classification modeling (a.k.a. supervised learning) is an extremely useful analytical technique for developing predictive and forecasting applications. The explosive growth in data warehousing and internet usage has made large amounts of data potentially available for developing classification models. For example, natural language text is widely available in many forms (e.g., electronic mail, news articles, reports, and web page contents). Categorization of data is a common activity which can be automated to a large extent using supervised learning methods. Examples of this include routing of electronic mail, satellite image classification, and character recognition. However, these tasks require labeled data sets of sufficiently high quality with adequate instances for training the predictive models. Much of the on-line data, particularly the unstructured variety (e.g., text), is unlabeled. Labeling is usually an expensive manual process done by domain experts. Active learning is an approach to solving this problem and works by identifying a subset of the data that needs to be labeled and uses this subset to generate classification models. We present an active learning method that uses adaptive resampling in a natural way to significantly reduce the size of the required labeled set and generates a classification model that achieves the high accuracies possible with current adaptive resampling methods.",TRUE,research problem
R11,Science,R31679,Toward optimal active learning through sampling estimation of error reduction,S106143,R31680,has research problem,R25028,Active learning,"This paper presents an active learning method that directly optimizes expected future error. This is in contrast to many other popular techniques that instead aim to reduce version space size. These other methods are popular because for many learning models, closed form calculation of the expected future error is intractable. Our approach is made feasible by taking a sampling approach to estimating the expected reduction in error due to the labeling of a query. In experimental results on two real-world data sets we reach high accuracy very quickly, sometimes with four times fewer labeled examples than competing methods.",TRUE,research problem
R11,Science,R31683,A probabilistic active support vector learning algorithm,S106181,R31684,has research problem,R25028,Active learning,"The paper describes a probabilistic active learning strategy for support vector machine (SVM) design in large data applications. The learning strategy is motivated by the statistical query model. While most existing methods of active SVM learning query for points based on their proximity to the current separating hyperplane, the proposed method queries for a set of points according to a distribution as determined by the current separating hyperplane and a newly defined concept of an adaptive confidence factor. This enables the algorithm to have more robust and efficient learning capabilities. The confidence factor is estimated from local information using the k nearest neighbor principle. The effectiveness of the method is demonstrated on real-life data sets both in terms of generalization performance, query complexity, and training time.",TRUE,research problem
R11,Science,R31685,Active learning using pre-clustering,S106198,R31686,has research problem,R25028,Active learning,"The paper is concerned with two-class active learning. While the common approach for collecting data in active learning is to select samples close to the classification boundary, better performance can be achieved by taking into account the prior data distribution. The main contribution of the paper is a formal framework that incorporates clustering into active learning. The algorithm first constructs a classifier on the set of the cluster representatives, and then propagates the classification decision to the other samples via a local noise model. The proposed model makes it possible to select the most representative samples as well as to avoid repeatedly labeling samples in the same cluster. During the active learning process, the clustering is adjusted using the coarse-to-fine strategy in order to balance between the advantage of large clusters and the accuracy of the data representation. The results of experiments in image databases show a better performance of our algorithm compared to the current methods.",TRUE,research problem
R11,Science,R31687,Balancing Exploration and Exploitation: A New Algorithm for Active Machine Learning,S106215,R31688,has research problem,R25028,Active learning,"Active machine learning algorithms are used when large numbers of unlabeled examples are available and getting labels for them is costly (e.g. requiring consulting a human expert). Many conventional active learning algorithms focus on refining the decision boundary, at the expense of exploring new regions that the current hypothesis misclassifies. We propose a new active learning algorithm that balances such exploration with refining of the decision boundary by dynamically adjusting the probability to explore at each step. Our experimental results demonstrate improved performance on data sets that require extensive exploration while remaining competitive on data sets that do not. Our algorithm also shows significant tolerance of noise.",TRUE,research problem
R11,Science,R32992,Cytogenetic abnormalities in adult acute lymphoblastic leukemia: correlations with hematologic findings and outcome. A collaborative study of the Groupe Français de Cytogénétique Hématologique,S154267,R32993,has research problem,R50405,acute lymphoblastic leukemia (ALL),"Cytogenetic analyses performed at diagnosis on 443 adult patients with acute lymphoblastic leukemia (ALL) were reviewed by the Groupe Français de Cytogénétique Hématologique, correlated with hematologic data, and compared with findings for childhood ALL. This study showed that the same recurrent abnormalities as those reported in childhood ALL are found in adults, and it determined their frequencies and distribution according to age. Hyperdiploidy greater than 50 chromosomes with a standard pattern of chromosome gains had a lower frequency (7%) than in children, and was associated with the Philadelphia chromosome (Ph) in 11 of 30 cases. Tetraploidy (2%) and triploidy (3%) were more frequent than in childhood ALL. Hypodiploidy 30-39 chromosomes (2%), characterized by a specific pattern of chromosome losses, might be related to the triploid group that evoked a duplication of the 30-39 hypodiploidy. Both groups shared similar hematologic features. Ph+ ALL (29%) peaked in the 40- to 50-year-old age range (49%) and showed a high frequency of myeloid antigens (24%). ALL with t(1;19) (3%) occurred in young adults (median age, 22 years). In T-cell ALL (T-ALL), frequencies of 14q11 breakpoints (26%) and of t(10;14)(q24;q11) (14%) were higher than those in childhood ALL. New recurrent changes were identified, ie, monosomies 7 present in Ph-ALL (17%) and also in other ALL (8%) and two new recurrent translocations, t(1;11)(p34;p11) in T-ALL and t(1;7)(q11-21;q35-36) in Ph+ ALL. The ploidy groups with a favorable prognostic impact were hyperdiploidy greater than 50 without Ph chromosome (median event-free survival [EFS], 46 months) and tetraploidy (median EFS, 46 months). 
The recurrent abnormalities associated with better response to therapy were also significantly correlated to T-cell lineage. Among them, t(10;14)(q24;q11) (median EFS, 46 months) conferred the best prognostic impact (3-year EFS, 75%). Hypodiploidy 30-39 chromosomes and the related triploidy were associated with poor outcome. All Ph-ALL had short EFS (median EFS, 5 months), and no additional change affected this prognostic impact. Most patients with t(1;19) failed therapy within 1 year. Patients with 11q23 changes not because of t(4;11) had a poor outcome, although they did not present the high-risk factors found in t(4;11).",TRUE,research problem
R11,Science,R34242,How Would Monetary Policy Matter in the Proposed African Monetary Unions? Evidence from Output and Prices,S119254,R34281,has research problem,R34186,African monetary unions,"We analyze the effects of monetary policy on economic activity in the proposed African monetary unions. Findings broadly show that: (1) but for financial efficiency in the EAMZ, monetary policy variables affect output neither in the short-run nor in the long-term and; (2) with the exception of financial size that impacts inflation in the EAMZ in the short-term, monetary policy variables generally have no effect on prices in the short-run. The WAMZ may not use policy instruments to offset adverse shocks to output by pursuing either an expansionary or a contractionary policy, while the EAMZ can do with the ‘financial allocation efficiency’ instrument. Policy implications are discussed.",TRUE,research problem
R11,Science,R25617,Understanding post-adoptive agile usage: An exploratory cross- case analysis,S77336,R25618,has research problem,R25586,Agile Usage,"The widespread adoption of agile methodologies raises the question of their continued and effective usage in organizations. An agile usage model consisting of innovation, sociological, technological, team, and organizational factors is used to inform an analysis of post-adoptive usage of agile practices in two major organizations. Analysis of the two case studies found that a methodology champion and top management support were the most important factors influencing continued usage, while innovation factors such as compatibility seemed less influential. Both horizontal and vertical usage was found to have significant impact on the effectiveness of agile usage.",TRUE,research problem
R11,Science,R33869,An Ant Colony Optimization Approach to Test Sequence Generation for State-based Software Testing,S117462,R33870,has research problem,R33857,Ant Colony Optimization,"Properly generated test suites may not only locate the defects in software systems, but also help in reducing the high cost associated with software testing. It is often desired that test sequences in a test suite can be automatically generated to achieve required test coverage. However, automatic test sequence generation remains a major problem in software testing. This paper proposes an ant colony optimization approach to automatic test sequence generation for state-based software testing. The proposed approach can directly use UML artifacts to automatically generate test sequences to achieve required test coverage.",TRUE,research problem
R11,Science,R33873,Automatic Mutation Test Input Data Generation via Ant Colony,S117475,R33874,has research problem,R33857,Ant Colony Optimization,"Fault-based testing is often advocated to overcome limitations of other testing approaches; however it is also recognized as being expensive. On the other hand, evolutionary algorithms have been proved suitable for reducing the cost of data generation in the context of coverage based testing. In this paper, we propose a new evolutionary approach based on ant colony optimization for automatic test input data generation in the context of mutation testing to reduce the cost of such a test strategy. In our approach the ant colony optimization algorithm is enhanced by a probability density estimation technique. We compare our proposal with other evolutionary algorithms, e.g., Genetic Algorithm. Our preliminary results on JAVA testbeds show that our approach performed significantly better than other alternatives.",TRUE,research problem
R11,Science,R33888,A Non-Pheromone based Intelligent Swarm Optimization Technique in Software Test Suite Optimization,S117513,R33889,has research problem,R33857,Ant Colony Optimization,"In our paper, we applied a non-pheromone based intelligent swarm optimization technique, namely artificial bee colony optimization (ABC), for test suite optimization. Our approach is a population based algorithm, in which each test case represents a possible solution in the optimization problem and a happiness value, a heuristic introduced for each test case, corresponds to the quality or fitness of the associated solution. The functionalities of three groups of bees are extended to three agents, namely Search Agent, Selector Agent and Optimizer Agent, to select efficient test cases among a near-infinite number of test cases. Because of the parallel behavior of these agents, the solution generation becomes faster and makes the approach an efficient one. Since the test adequacy criterion we used is path coverage, the quality of the test cases is improved during each iteration to cover the paths in the software. Finally, we compared our approach with Ant Colony Optimization (ACO), a pheromone based optimization technique, in test suite optimization and concluded that the ABC based approach has several advantages over ACO based optimization.",TRUE,research problem
R11,Science,R33894,Variable Strength Interaction Testing with an Ant Colony System Approach,S117528,R33895,has research problem,R33857,Ant Colony Optimization,"Interaction testing (also called combinatorial testing) is a cost-effective test generation technique in software testing. Most research work focuses on finding effective approaches to build optimal t-way interaction test suites. However, the strength of different factor sets may not be consistent due to the practical test requirements. To solve this problem, a variable strength combinatorial object and several approaches based on it have been proposed. These approaches include simulated annealing (SA) and greedy algorithms. SA starts with a large randomly generated test suite and then uses a binary search process to find the optimal solution. Although this approach often generates the minimal test suites, it is time consuming. Greedy algorithms avoid this shortcoming but the size of generated test suites is usually not as small as SA. In this paper, we propose a novel approach to generate variable strength interaction test suites (VSITs). In our approach, we adopt a one-test-at-a-time strategy to build final test suites. To generate a single test, we adopt the ant colony system (ACS) strategy, an effective variant of ant colony optimization (ACO). In order to successfully adopt ACS, we formulate the solution space, the cost function and several heuristic settings in this framework. We also apply our approach to some typical inputs. Experimental results show the effectiveness of our approach, especially compared to greedy algorithms and several existing tools.",TRUE,research problem
R11,Science,R33899,Building Prioritized Pairwise Interaction Test Suites with Ant Colony Optimization,S117541,R33900,has research problem,R33857,Ant Colony Optimization,"Interaction testing offers a stable cost-benefit ratio in identifying faults. But in many testing scenarios, the entire test suite cannot be fully executed due to limited time or cost. In these situations, it is essential to take the importance of interactions into account and prioritize these tests. To tackle this issue, the biased covering array is proposed and the Weighted Density Algorithm (WDA) is developed. To find a better solution, in this paper we adopt ant colony optimization (ACO) to build this prioritized pairwise interaction test suite (PITS). In our research, we propose four concrete test generation algorithms based on Ant System, Ant System with Elitist, Ant Colony System and Max-Min Ant System respectively. We also implement these algorithms and apply them to two typical inputs and report experimental results. The results show the effectiveness of these algorithms.",TRUE,research problem
R11,Science,R33907,An approach of optimal path generation using ant colony optimization,S117567,R33908,has research problem,R33857,Ant Colony Optimization,"Software Testing is one of the indispensable parts of the software development lifecycle and structural testing is one of the most widely used testing paradigms to test various software. Structural testing relies on code path identification, which in turn leads to identification of effective paths. Aim of the current paper is to present a simple and novel algorithm with the help of an ant colony optimization, for the optimal path identification by using the basic property and behavior of the ants. This novel approach uses certain set of rules to find out all the effective/optimal paths via ant colony optimization (ACO) principle. The method concentrates on generation of paths, equal to the cyclomatic complexity. This algorithm guarantees full path coverage.",TRUE,research problem
R11,Science,R33912,Optimized Test Sequence Generation from Usage Models using Ant Colony Optimization,S117581,R33913,has research problem,R33857,Ant Colony Optimization,Software Testing is the process of testing the software in order to ensure that it is free of errors and produces the desired outputs in any given situation. Model based testing is an approach in which software is viewed as a set of states. A usage model describes software on the basis of its statistical usage data. One of the major problems faced in such an approach is the generation of optimal sets of test sequences. The model discussed in this paper is a Markov chain based usage model. The analytical operations and results associated with Markov chains make them an appropriate choice for checking the feasibility of test sequences while they are being generated. The statistical data about the estimated usage has been used to build a stochastic model of the software under test. This paper proposes a technique to generate optimized test sequences from a Markov chain based usage model. The proposed technique uses ant colony optimization as its basis and also incorporates factors like cost and criticality of various states in the model. It further takes into consideration the average number of visits to any state and the trade-off between cost considerations and optimality of the test coverage.,TRUE,research problem
R11,Science,R33918,Automated Software Testing Using Metaheuristic Technique Based on an Ant Colony Optimization,S117594,R33919,has research problem,R33857,Ant Colony Optimization,"Software testing is an important and valuable part of the software development life cycle. Due to time, cost and other circumstances, exhaustive testing is not feasible, which is why there is a need to automate the testing process. Testing effectiveness can be achieved by State Transition Testing (STT), which is commonly used in real time, embedded and web-based kinds of software systems. The tester’s main job is to test all the possible transitions in the system. This paper proposes an Ant Colony Optimization (ACO) technique for the automated and full coverage of all state-transitions in the system. The present paper’s approach generates test sequences in order to obtain complete software coverage. This paper also discusses the comparison between two metaheuristic techniques (Genetic Algorithm and Ant Colony Optimization) for transition based testing.",TRUE,research problem
R11,Science,R33931,Generation of test data using Metaheuristic approach,S117622,R33932,has research problem,R33857,Ant Colony Optimization,"Software testing is of huge importance to the development of any software. The prime focus is to minimize the expenses on the testing. In software testing the major problem is generation of test data. Several metaheuristic approaches in this field have become very popular. The aim is to generate the optimum set of test data, which would still not compromise on exhaustive testing of software. Our objective is to generate such efficient test data using a genetic algorithm and ant colony optimization for given software. We have also compared the two approaches of software testing to determine which of these is effective towards generation of test data, and what constraints apply, if any.",TRUE,research problem
R11,Science,R33933,Automatic Test Data Generation Based on Ant Colony Optimization,S117636,R33934,has research problem,R33857,Ant Colony Optimization,Software testing is a crucial measure used to assure the quality of software. Path testing can detect bugs earlier because it achieves higher error coverage. This paper presents a model of generating test data based on an improved ant colony optimization and path coverage criteria. Experiments show that the algorithm has a better performance than the other two algorithms and improves the efficiency of test data generation notably.,TRUE,research problem
R11,Science,R33943,A New Software Data-Flow Testing Approach via Ant Colony Algorithms,S117661,R33944,has research problem,R33857,Ant Colony Optimization,"Search-based optimization techniques (e.g., hill climbing, simulated annealing, and genetic algorithms) have been applied to a wide variety of software engineering activities including cost estimation, the next release problem, and test generation. Several search based test generation techniques have been developed. These techniques have focused on finding suites of test data to satisfy a number of control-flow or data-flow testing criteria. Genetic algorithms have been the most widely employed search-based optimization technique in software testing issues. Recently, many novel search-based optimization techniques have been developed, such as Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Artificial Immune System (AIS), and Bees Colony Optimization. ACO and AIS have been employed only in the area of control-flow testing of the programs. This paper aims at employing the ACO algorithms in the issue of software data-flow testing. The paper presents an ant colony optimization based approach for generating a set of optimal paths to cover all definition-use associations (du-pairs) in the program under test. Then, this approach uses the ant colony optimization to generate a suite of test data for satisfying the generated set of paths. In addition, the paper introduces a case study to illustrate our approach. Keywords: data-flow testing; path-cover generation; test-data generation; ant colony optimization algorithms",TRUE,research problem
R11,Science,R33951,Test Case Prioritization Using Ant Colony Optimization,S117680,R33952,has research problem,R33857,Ant Colony Optimization,"Regression testing is primarily a maintenance activity that is performed frequently to ensure the validity of the modified software. In such cases, due to time and cost constraints, the entire test suite cannot be run. Thus, it becomes essential to prioritize the tests in order to cover maximum faults in minimum time. In this paper, ant colony optimization is used, which is a new way to solve the time constraint prioritization problem. This paper presents the regression test prioritization technique to reorder test suites in a time constraint environment along with an algorithm that implements the technique.",TRUE,research problem
R11,Science,R34374,Clostridium difficile small bowel enteritis occurring after total colectomy,S119681,R34375,has research problem,R34321,Clostridium difficile infection,Clostridium difficile infection is usually associated with antibiotic therapy and is almost always limited to the colonic mucosa. Small bowel enteritis is rare: only 9 cases have been previously cited in the literature. This report describes a case of C. difficile small bowel enteritis that occurred in a patient after total colectomy and reviews the 9 previously reported cases of C. difficile enteritis.,TRUE,research problem
R11,Science,R34392,Treatment of metronidazole-refractory Clostridium difficile enteritis with vancomycin,S119803,R34393,has research problem,R34321,Clostridium difficile infection,"BACKGROUND Clostridium difficile infection of the colon is a common and well-described clinical entity. Clostridium difficile enteritis of the small bowel is believed to be less common and has been described sparsely in the literature. METHODS Case report and literature review. RESULTS We describe a patient who had undergone total proctocolectomy with ileal pouch-anal anastomosis who was treated with broad-spectrum antibiotics and contracted C. difficile refractory to metronidazole. The enteritis resolved quickly after initiation of combined oral vancomycin and metronidazole. A literature review found that eight of the fifteen previously reported cases of C. difficile-associated small-bowel enteritis resulted in death. CONCLUSIONS It is important for physicians who treat acolonic patients to be aware of C. difficile enteritis of the small bowel so that it can be suspected, diagnosed, and treated.",TRUE,research problem
R11,Science,R29384,Are environmental Kuznets curves misleading us? The case of CO2 emissions,S97695,R29385,has research problem,R29367,CO2 emissions,"Environmental Kuznets curve (EKC) analysis links changes in environmental quality to national economic growth. The reduced form models, however, do not provide insight into the underlying processes that generate these changes. We compare EKC models to structural transition models of per capita CO2 emissions and per capita GDP, and find that, for the 16 countries which have undergone such a transition, the initiation of the transition correlates not with income levels but with historic events related to the oil price shocks of the 1970s and the policies that followed them. In contrast to previous EKC studies of CO2 the transition away from positive emissions elasticities for these 16 countries is found to occur as a sudden, discontinuous transition rather than as a gradual change. We also demonstrate that the third order polynomial 'N' dependence of emissions on income is the result of data aggregation. We conclude that neither the 'U'- nor the 'N'-shaped relationship between CO2 emissions and income provide a reliable indication of future behaviour.",TRUE,research problem
R11,Science,R29711,A panel data heterogeneous Bayesian estimation of environmental Kuznets curves for CO2emissions,S98598,R29712,has research problem,R29367,CO2 emissions,"This article investigates the Environmental Kuznets Curves (EKC) for CO2 emissions in a panel of 109 countries during the period 1959 to 2001. The length of the series makes the application of a heterogeneous estimator suitable from an econometric point of view. The results, based on the hierarchical Bayes estimator, show that different EKC dynamics are associated with the different sub-samples of countries considered. On average, more industrialized countries show evidence of EKC in quadratic specifications, which nevertheless are probably evolving into an N-shape based on their cubic specification. Nevertheless, it is worth noting that the EU, and not the Umbrella Group led by US, has been driving currently observed EKC-like shapes. The latter is associated to monotonic income–CO2 dynamics. The EU shows a clear EKC shape. Evidence for less-developed countries consistently shows that CO2 emissions rise positively with income, though there are some signs of an EKC. Analyses of future performance, nevertheless, favour quadratic specifications, thus supporting EKC evidence for wealthier countries and non-EKC shapes for industrializing regions.",TRUE,research problem
R11,Science,R29854,An Empirical Analysis of the Environmental Kuznets Curve for CO2 Emissions in Indonesia: The Role of Energy Consumption and Foreign Trade,S99080,R29855,has research problem,R29367,CO2 emissions,"This study examines the dynamic relationship among carbon dioxide (CO2) emissions, economic growth, energy consumption and foreign trade based on the environmental Kuznets curve (EKC) hypothesis in Indonesia for the period 1971–2007, using the Auto Regressive Distributed Lag (ARDL) methodology. The results do not support the EKC hypothesis, which assumes an inverted U-shaped relationship between income and environmental degradation. The long-run results indicate that foreign trade is the most significant variable in explaining CO2 emissions in Indonesia followed by Energy consumption and economic growth. The stability of the variables in estimated model is also examined. The result suggests that the estimated model is stable over the study period.",TRUE,research problem
R11,Science,R29907,"A panel estimation of the relationship between trade liberalization, economic growth and CO2 emissions in BRICS countries",S99232,R29908,has research problem,R29367,CO2 emissions,"In the last few years, several studies have found an inverted-U relationship between per capita income and environmental degradation. This relationship, known as the environmental Kuznets curve (EKC), suggests that environmental degradation increases in the early stages of growth, but it eventually decreases as income exceeds a threshold level. However, this paper investigates the relationship between per capita CO2 emissions, economic growth and trade liberalization based on the econometric techniques of unit root tests, co-integration and a panel data set during the period 1960-1996 for BRICS countries. Data properties were analyzed to determine their stationarity using the LLC, IPS, ADF and PP unit root tests, which indicated that the series are I(1). We find a cointegration relationship between per capita CO2 emissions, economic growth and trade liberalization by applying the Kao panel cointegration test. The evidence indicates that in the long run trade liberalization has a positive significant impact on CO2 emissions, and the impact of trade liberalization on emissions growth depends on the level of income. Our findings suggest that there is a quadratic relationship between real GDP and CO2 emissions for the region as a whole. The estimated long-run coefficients of real GDP and its square satisfy the EKC hypothesis in all of the studied countries. Our estimation shows that the inflection point, or optimal real GDP per capita, is about 5269.4 dollars. The results show that, on average, sample countries are on the positive side of the inverted-U curve. The turning points are very low in some cases and very high in other cases, hence providing poor evidence in support of the EKC hypothesis. Thus, our findings suggest that all BRICS countries need to sacrifice economic growth to decrease their emission levels.",TRUE,research problem
R11,Science,R29970,The potential of renewable energy: using the environmental Kuznets curve model,S99369,R29971,has research problem,R29367,CO2 emissions,"This study examines the potential of Renewable Energy Sources (RES) in reducing the impact of carbon emission in Malaysia and the Greenhouse Gas (GHG) emissions, which lead to global warming. Using the Environmental Kuznets Curve (EKC) hypothesis, this study analyses the impact of electricity generated using RES on the environment and trade openness for the period 1980-2009. Using the Autoregressive Distributed Lag (ARDL) approach, the results show that the elasticities of electricity production from renewable sources with respect to CO2 emissions are negative and significant in both the short and long run. This implies the potential of renewable energy in controlling CO2 emissions in both the short and long run in Malaysia. Renewable energy can ensure sustainability of electricity supply and at the same time can reduce CO2 emissions. Trade openness has a significant negative effect on CO2 emissions in the long run. The Granger causality test based on the Vector Error Correction Model (VECM) indicates that there is evidence of a positive bi-directional Granger causality relationship between economic growth and CO2 emissions in the short and long run, suggesting that carbon emissions and economic growth are interrelated. Furthermore, there is a negative long-run bi-directional Granger causality relationship between electricity production from renewable sources and CO2 emissions. The short-run Granger causality shows a negative uni-directional causality from electricity production from renewable sources to CO2 emissions. This result suggests that there is an inverted U-shaped relationship between CO2 emissions and economic growth.",TRUE,research problem
R11,Science,R30088,Environmental Kuznets curve in an open economy: a bounds testing and causality analysis for Tunisia,S99738,R30089,has research problem,R29367,CO2 emissions,"The aim of this paper is to investigate the existence of environmental Kuznets curve (EKC) in an open economy like Tunisia using annual time series data for the period of 1971-2010. The ARDL bounds testing approach to cointegration is applied to test long run relationship in the presence of structural breaks and vector error correction model (VECM) to detect the causality among the variables. The robustness of causality analysis has been tested by applying the innovative accounting approach (IAA). The findings of this paper confirmed the long run relationship between economic growth, energy consumption, trade openness and CO2 emissions in Tunisian Economy. The results also indicated the existence of EKC confirmed by the VECM and IAA approaches. The study has significant contribution for policy implications to curtail energy pollutants by implementing environment friendly regulations to sustain the economic development in Tunisia.",TRUE,research problem
R11,Science,R30260,The environmental Kuznets curve at different levels of economic development: a counterfactual quantile regression analysis for CO2 emissions,S100276,R30261,has research problem,R29367,CO2 emissions,"This paper applies the quantile fixed effects technique in exploring the CO2 environmental Kuznets curve within two groups of economic development (OECD and Non-OECD countries) and six geographical regions West, East Europe, Latin America, East Asia, West Asia and Africa. A comparison of the findings resulting from the use of this technique with those of conventional fixed effects method reveals that the latter may depict a flawed summary of the prevailing income-emissions nexus depending on the conditional quantile examined. We also extend the Machado and Mata decomposition method to the Kuznets curve framework to explore the most important explanations for the CO2 emissions gap between OECD and Non-OECD countries. We find a statistically significant OECD-Non-OECD emissions gap and this contracts as we ascend the emissions distribution. The decomposition further reveals that there are non-income related factors working against the Non-OECD group's greening. We tentatively conclude that deliberate and systematic mitigation of current CO2 emissions in the Non-OECD group is required. JEL Classification: Q56, Q58.",TRUE,research problem
R11,Science,R30284,"The Relationship between CO2 Emission, Energy Consumption, Urbanization and Trade Openness for Selected CEECs",S100370,R30285,has research problem,R29367,CO2 emissions,"This paper investigates the relationship between CO2 emission, real GDP, energy consumption, urbanization and trade openness for 10 selected Central and Eastern European Countries (CEECs), including Albania, Bulgaria, Croatia, the Czech Republic, Macedonia, Hungary, Poland, Romania, the Slovak Republic and Slovenia for the period of 1991–2011. The results show that the environmental Kuznets curve (EKC) hypothesis holds for these countries. The fully modified ordinary least squares (FMOLS) results reveal that a 1% increase in energy consumption leads to a 1.0863% increase in CO2 emissions. Results for the existence and direction of the panel Vector Error Correction Model (VECM) Granger causality method show that there is a bidirectional causal relationship between CO2 emissions and real GDP, as well as between energy consumption and real GDP.",TRUE,research problem
R11,Science,R30373,CO2 emissions in Australia: economic and non-economic drivers in the long-run,S100745,R30374,has research problem,R29367,CO2 emissions,"ABSTRACT Australia has sustained a relatively high economic growth rate since the 1980s compared to other developed countries. Per capita CO2 emissions tend to be highest amongst OECD countries, creating new challenges to cut back emissions towards international standards. This research explores the long-run dynamics of CO2 emissions, economic and population growth along with the effects of globalization tested as contributing factors. We find economic growth is not emission-intensive in Australia, while energy consumption is emissions intensive. Second, in an environment of increasing population, our findings suggest Australia needs to be energy efficient at the household level, creating appropriate infrastructure for sustainable population growth. High population growth and an open migration policy can be detrimental in reducing CO2 emissions. Finally, we establish that the globalized environment has been conducive to combating emissions. In this respect, we establish the beneficial effect of economic globalization compared to the social and political dimensions of globalization in curbing emissions.",TRUE,research problem
R11,Science,R30390,Relationship between economic growth and environmental degradation: is there evidence of an environmental Kuznets curve for Brazil?,S100827,R30391,has research problem,R29367,CO2 emissions,"This study investigates the relationship between CO2 emissions, economic growth, energy use and electricity production by hydroelectric sources in Brazil. To verify the environmental Kuznets curve (EKC) hypothesis we use time-series data for the period 1971-2011. The autoregressive distributed lag methodology was used to test for cointegration in the long run. Additionally, the vector error correction model Granger causality test was applied to verify the predictive value of independent variables. Empirical results find that there is a quadratic long run relationship between CO2 emissions and economic growth, confirming the existence of an EKC for Brazil. Furthermore, energy use shows increasing effects on emissions, while electricity production by hydropower sources has an inverse relationship with environmental degradation. The short run model does not provide evidence for the EKC theory. The differences between the results in the long and short run models can be considered for establishing environmental policies. This suggests that special attention to both variables (energy use and electricity production by hydroelectric sources) could be an effective way to mitigate CO2 emissions in Brazil.",TRUE,research problem
R11,Science,R30439,Environmental Kuznets Curve with Adjusted Net Savings as a Trade-Off Between Environment and Development,S100995,R30440,has research problem,R29367,CO2 emissions,"The Environmental Kuznets Curve (EKC) hypothesises that emissions first increase at low stages of development then decrease once a certain threshold has been reached. The EKC concept is usually used with per capita Gross Domestic Product as the explanatory variable. As others, we find mixed evidence, at best, of such a pattern for CO2 emissions with respect to per capita GDP. We also show that the share of manufacture in GDP and governance/institutions play a significant role in the CO2 emissions–income relationship. As GDP presents shortcomings in representing income, development in a broad perspective or human well-being, it is then replaced by the World Bank's Adjusted Net Savings (ANS, also known as Genuine Savings). Using the ANS as an explanatory variable, we show that the EKC is generally empirically supported for CO2 emissions. We also show that human capital and natural capital are the main drivers of the downward sloping part of the EKC.",TRUE,research problem
R11,Science,R26214,Decomposition of a Combined Inventory and Time Constrained Ship Routing Problem,S82711,R26362,has research problem,R26155,Combined inventory management,"In contrast to vehicle routing problems, little work has been done in ship routing and scheduling, although large benefits may be expected from improving this scheduling process. We will present a real ship planning problem, which is a combined inventory management problem and a routing problem with time windows. A fleet of ships transports a single product (ammonia) between production and consumption harbors. The quantities loaded and discharged are determined by the production rates of the harbors, possible stock levels, and the actual ship visiting the harbor. We describe the real problem and the underlying mathematical model. To decompose this model, we discuss some model adjustments. Then, the problem can be solved by a Dantzig-Wolfe decomposition approach including both ship routing subproblems and inventory management subproblems. The overall problem is solved by branch-and-bound. Our computational results indicate that the proposed method works for the real planning problem.",TRUE,research problem
R11,Science,R27777,Computer games for the math achievement of diverse students,S90466,R27778,has research problem,R27729,Educational Games,"Introduction As a way to improve student academic performance, educators have begun paying special attention to computer games (Gee, 2005; Oblinger, 2006). Reflecting the interests of the educators, studies have been conducted to explore the effects of computer games on student achievement. However, there has been no consensus on the effects of computer games: Some studies support computer games as educational resources to promote students' learning (Annetta, Mangrum, Holmes, Collazo, & Cheng, 2009; Vogel et al., 2006). Other studies have found no significant effects on the students' performance in school, especially in math achievement of elementary school students (Ke, 2008). Researchers have also been interested in the differential effects of computer games between gender groups. While several studies have reported various gender differences in the preferences of computer games (Agosto, 2004; Kinzie & Joseph, 2008), a few studies have indicated no significant differential effect of computer games between genders and asserted generic benefits for both genders (Vogel et al., 2006). To date, the studies examining computer games and gender interaction are far from conclusive. Moreover, there is a lack of empirical studies examining the differential effects of computer games on the academic performance of diverse learners. These learners include linguistic minority students who speak languages other than English. Recent trends in the K-12 population feature the increasing enrollment of linguistic minority students, whose population reached almost four million (NCES, 2004). These students have been a grave concern for American educators because of their reported low performance. In response, this study empirically examined the effects of math computer games on the math performance of 4th-graders with focused attention on differential effects for gender and linguistic groups. To achieve greater generalizability of the study findings, the study utilized a US nationally representative database--the 2005 National Assessment of Educational Progress (NAEP). The following research questions guided the current study: 1. Are computer games in math classes associated with the 4th-grade students' math performance? 2. How does the relationship differ by linguistic group? 3. How does the association vary by gender? 4. Is there an interaction effect of computer games on linguistic and gender groups? In other words, how does the effect of computer games on linguistic groups vary by gender group? Literature review Academic performance and computer games According to DeBell and Chapman (2004), of 58,273,000 students of nursery and K-12 school age in the USA, 56% of students played computer games. Along with the popularity among students, computer games have received a lot of attention from educators as a potential way to provide learners with effective and fun learning environments (Oblinger, 2006). Gee (2005) agreed that a game would turn out to be good for learning when the game is built to incorporate learning principles. Some researchers have also supported the potential of games for affective domains of learning and fostering a positive attitude towards learning (Ke, 2008; Ke & Grabowski, 2007; Vogel et al., 2006). For example, based on the study conducted on 1,274 1st- and 2nd-graders, Rosas et al. (2003) found a positive effect of educational games on the motivation of students. Although there is overall support for the idea that games have a positive effect on affective aspects of learning, there have been mixed research results regarding the role of games in promoting cognitive gains and academic achievement. In the meta-analysis, Vogel et al. (2006) examined 32 empirical studies and concluded that the inclusion of games for students' learning resulted in significantly higher cognitive gains compared with traditional teaching methods without games. …",TRUE,research problem
R11,Science,R28567,"Embryonal sarcoma of the liver in an adult treated with preoperative chemotherapy, radiation therapy, and hepatic lobectomy",S93945,R28568,has research problem,R28521,Embryonal sarcoma of the liver,"A rare case of embryonal sarcoma of the liver in a 28‐year‐old man is reported. The patient was treated preoperatively with a combination of chemotherapy and radiation therapy. Complete surgical resection, 4.5 months after diagnosis, consisted of a left hepatic lobectomy. No viable tumor was found in the operative specimen. The patient was disease‐free 20 months postoperatively.",TRUE,research problem
R11,Science,R28609,Undifferentiated embryonal sarcoma of the liver mimicking acute appendicitis. Case report and review of the literature,S94471,R28610,has research problem,R28521,Embryonal sarcoma of the liver,"Abstract Background Undifferentiated embryonal sarcoma (UES) of the liver is a rare malignant neoplasm, which affects mostly the pediatric population, accounting for 13% of pediatric hepatic malignancies; only a few cases have been reported in adults. Case presentation We report a case of undifferentiated embryonal sarcoma of the liver in a 20-year-old Caucasian male. The patient was referred to us for further investigation after a laparotomy in a district hospital for spontaneous abdominal hemorrhage, which was due to a liver mass. After a thorough evaluation with computed tomography and magnetic resonance imaging of the liver, and taking into consideration the patient's previous history, it was decided to surgically explore the patient. Resection of hepatic segments I–IV and VIII was performed. The patient developed disseminated intravascular coagulation one day after the surgery and died the next day. Conclusion It is a rare, highly malignant hepatic neoplasm, affecting almost exclusively the pediatric population. The prognosis is poor, but recent evidence has shown that long-term survival is possible after complete surgical resection with or without postoperative chemotherapy.",TRUE,research problem
R11,Science,R29133,Enterprise resource planning research: where are we now and where should we go from here?,S96522,R29134,has research problem,R29113,Enterprise resource planning,"ABSTRACT The research related to Enterprise Resource Planning (ERP) has grown over the past several years. This growing body of ERP research results in an increased need to review this extant literature with the intent of identifying gaps and thus motivating researchers to close this breach. Therefore, this research was intended to critique, synthesize and analyze both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature, and then to enumerate and discuss an agenda for future research efforts. To accomplish this, we analyzed 49 ERP articles published (1999-2004) in top Information Systems (IS) and Operations Management (OM) journals. We found an increasing level of activity during the 5-year period and a slightly biased distribution of ERP articles targeted at IS journals compared to OM. We also found several research methods either underrepresented or absent from the pool of ERP research. We identified several areas of need within the ERP literature, none more prevalent than the need to analyze ERP within the context of the supply chain. INTRODUCTION Davenport (1998) described the strengths and weaknesses of using Enterprise Resource Planning (ERP). He called attention to the growth of vendors like SAP, Baan, Oracle, and PeopleSoft, and defined this software as, ""...the seamless integration of all the information flowing through a company - financial and accounting information, human resource information, supply chain information, and customer information."" (Davenport, 1998). Since the time of that article, there has been a growing interest among researchers and practitioners in how organizations implement and use ERP systems (Amoako-Gyampah and Salam, 2004; Bendoly and Jacobs, 2004; Gattiker and Goodhue, 2004; Lander, Purvis, McCray and Leigh, 2004; Luo and Strong, 2004; Somers and Nelson, 2004; Zoryk-Schalla, Fransoo and de Kok, 2004). This interest is a natural continuation of trends in Information Technology (IT), such as MRP II (Olson, 2004; Teltumbde, 2000; Toh and Harding, 1999) and in business practice improvement research, such as continuous process improvement and business process reengineering (Markus and Tanis, 2000; Ng, Ip and Lee, 1999; Reijers, Limam and van der Aalst, 2003; Toh and Harding, 1999). This growing body of ERP research results in an increased need to review this extant literature with the intent of ""identifying critical knowledge gaps and thus motivate researchers to close this breach"" (Webster and Watson, 2002). Also, as noted by Scandura & Williams (2000), in order for research to advance, the methods used by researchers must periodically be evaluated to provide insights into the methods utilized and thus the areas of need. These two interrelated needs provide the motivation for this paper. In essence, this research critiques, synthesizes and analyzes both the content (e.g., topics, focus) and processes (i.e., methods) of the ERP literature and then enumerates and discusses an agenda for future research efforts. The remainder of the paper is organized as follows: Section 2 describes the approach to the analysis of the ERP research. Section 3 contains the results and a review of the literature. Section 4 discusses our findings and the needs relative to future ERP research efforts. Finally, Section 5 summarizes the research. RESEARCH STUDY We captured the trends pertaining to (1) the number and distribution of ERP articles published in the leading journals, (2) methodologies employed in ERP research, and (3) emphasis relative to the topic of ERP research. During the analysis of the ERP literature, we identified gaps and needs in the research and therefore enumerate and discuss a research agenda which allows the progression of research (Webster and Watson, 2002). In short, we sought to paint a representative landscape of the current ERP literature base in order to influence the direction of future research efforts relative to ERP. …",TRUE,research problem
R11,Science,R29137,"Work, organisation and Enterprise Resource Planning systems: an alternative research agenda",S96542,R29138,has research problem,R29113,Enterprise resource planning,"This paper reviews literature that examines the design, implementation and use of Enterprise Resource Planning systems (ERPs). It finds that most of this literature is managerialist in orientation, and concerned with the impact of ERPs in terms of efficiency, effectiveness and business performance. The paper seeks to provide an alternative research agenda, one that emphasises work- and organisation-based approaches to the study of the implementation and use of ERPs.",TRUE,research problem
R11,Science,R29143,Enterprise Resource Planning (ERP): a review of the literature,S96574,R29144,has research problem,R29113,Enterprise resource planning,"This article is a review of work published in various journals on the topics of Enterprise Resource Planning (ERP) between January 2000 and May 2006. A total of 313 articles from 79 journals are reviewed. The article intends to serve three goals. First, it will be useful to researchers who are interested in understanding what kinds of questions have been addressed in the area of ERP. Second, the article will be a useful resource for searching for research topics. Third, it will serve as a comprehensive bibliography of the articles published during the period. The literature is analysed under six major themes and nine sub-themes.",TRUE,research problem
R11,Science,R29146,A review of literature on Enterprise Resource Planning systems,S96590,R29147,has research problem,R29113,Enterprise resource planning,"Enterprise resource planning (ERP) systems are currently involved in every aspect of an organization, as they provide a highly integrated solution to meet information system needs. ERP systems have attracted considerable attention from researchers and practitioners and have received a variety of investigation and study. In this paper, we have selected a certain number of papers concerning ERP systems published between 1998 and 2006; this is by no means a comprehensive review. The literature is further classified by topic, and the major outcomes and research methods of each study are addressed. Finally, implications for future research are provided.",TRUE,research problem
R11,Science,R29149,A comprehensive literature review of the ERP research field over a Decade,S96606,R29150,has research problem,R29113,Enterprise resource planning,"Purpose – The purpose of this paper is first, to develop a methodological framework for conducting a comprehensive literature review on an empirical phenomenon based on a vast amount of papers published. Second, to use this framework to gain an understanding of the current state of the enterprise resource planning (ERP) research field, and third, based on the literature review, to develop a conceptual framework identifying areas of concern with regard to ERP systems.Design/methodology/approach – Abstracts from 885 peer‐reviewed journal publications from 2000 to 2009 have been analysed according to journal, authors and year of publication, and further categorised into research discipline, research topic and methods used, using the structured methodological framework.Findings – The body of academic knowledge about ERP systems has reached a certain maturity and several different research disciplines have contributed to the field from different points of view using different methods, showing that the ERP rese...",TRUE,research problem
R11,Science,R29156,Process orientation through enterprise resource planning (ERP): a review of critical issues,S96637,R29157,has research problem,R29113,Enterprise resource planning,"The significant development in global information technologies and the ever-intensifying competitive market climate have both pushed many companies to transform their businesses. Enterprise resource planning (ERP) is seen as one of the most recently emerging process-orientation tools that can enable such a transformation. Its development has presented both researchers and practitioners with new challenges and opportunities. This paper provides a comprehensive review of the state of research in the ERP field relating to process management, organizational change and knowledge management. It surveys current practices, research and development, and suggests several directions for future investigation. Copyright © 2001 John Wiley & Sons, Ltd.",TRUE,research problem
R11,Science,R29159,Planning for ERP systems: analysis and future trend,S96650,R29160,has research problem,R29113,Enterprise resource planning,"The successful implementation of various enterprise resource planning (ERP) systems has provoked considerable interest over the last few years. Management has recently been enticed to look toward these new information technologies and philosophies of manufacturing for the key to survival or competitive edges. Although there is no shortage of glowing reports on the success of ERP installations, many companies have tossed millions of dollars in this direction with little to show for it. Since many of the ERP failures today can be attributed to inadequate planning prior to installation, we choose to analyze several critical planning issues including needs assessment and choosing a right ERP system, matching business process with the ERP system, understanding the organizational requirements, and economic and strategic justification. In addition, this study also identifies new windows of opportunity as well as challenges facing companies today as enterprise systems continue to evolve and expand.",TRUE,research problem
R11,Science,R29161,Enterprise resource planning (ERP) systems: a research agenda,S96663,R29162,has research problem,R29113,Enterprise resource planning,"The continuing development of enterprise resource planning (ERP) systems has been considered by many researchers and practitioners as one of the major IT innovations in this decade. ERP solutions seek to integrate and streamline business processes and their associated information and work flows. What makes this technology more appealing to organizations is increasing capability to integrate with the most advanced electronic and mobile commerce technologies. However, as is the case with any new IT field, research in the ERP area is still lacking and the gap in the ERP literature is huge. Attempts to fill this gap by proposing a novel taxonomy for ERP research. Also presents the current status with some major themes of ERP research relating to ERP adoption, technical aspects of ERP and ERP in IS curricula. The discussion presented on these issues should be of value to researchers and practitioners. Future research work will continue to survey other major areas presented in the taxonomy framework.",TRUE,research problem
R11,Science,R29174,Research on ERP Application from an Integrative Review,S96718,R29175,has research problem,R29113,Enterprise resource planning,"Enterprise resource planning (ERP) system is an enterprise management system, currently in high demand by both manufacturing and service organizations. Recently, ERP systems have been drawn an important amount of attention by researchers and top managers. This paper will summarize the previous research literature about ERP application from an integrative review, and further research issues have been introduced to guide the future direction of research.",TRUE,research problem
R11,Science,R29191,Critical factors for successful implementation of enterprise systems,S96795,R29192,has research problem,R29113,Enterprise resource planning,"Enterprise resource planning (ERP) systems have emerged as the core of successful information management and the enterprise backbone of organizations. The difficulties of ERP implementations have been widely cited in the literature but research on the critical factors for initial and ongoing ERP implementation success is rare and fragmented. Through a comprehensive review of the literature, 11 factors were found to be critical to ERP implementation success – ERP teamwork and composition; change management program and culture; top management support; business plan and vision; business process reengineering with minimum customization; project management; monitoring and evaluation of performance; effective communication; software development, testing and troubleshooting; project champion; appropriate business and IT legacy systems. The classification of these factors into the respective phases (chartering, project, shakedown, onward and upward) in Markus and Tanis’ ERP life cycle model is presented and the importance of each factor is discussed.",TRUE,research problem
R11,Science,R29194,Critical successful factors of ERP implementation: a review,S96810,R29195,has research problem,R29113,Enterprise resource planning,"Recently e-business has become the focus of management interest both in academics and in business. Among the major components of e-business, ERP (Enterprise Resource Planning) is the backbone of other applications. Therefore more and more enterprises attempt to adopt this new application in order to improve their business competitiveness. Owing to the specific characteristics of ERP, its implementation is more difficult than that of traditional information systems. For this reason, how to implement ERP successfully becomes an important issue for both academics and practitioners. In this paper, a review on critical successful factors of ERP in important MIS publications will be presented. Additionally traditional IS implementation and ERP implementation will be compared and the findings will be served as the basis for further research.",TRUE,research problem
R11,Science,R29196,Critical success factors for ERP projects,S96826,R29197,has research problem,R29113,Enterprise resource planning,"Over the past decade, Enterprise Resource Planning systems (ERP) have become one of the most important developments in the corporate use of information technology. ERP implementations are usually large, complex projects, involving large groups of people and other resources, working together under considerable time pressure and facing many unforeseen developments. In order for an organization to compete in this rapidly expanding and integrated marketplace, ERP systems must be employed to ensure access to an efficient, effective, and highly reliable information infrastructure. Despite the benefits that can be achieved from a successful ERP system implementation, there is evidence of high failure in ERP implementation projects. Too frequently key development practices are ignored and early warning signs that lead to project failure are not understood. Identifying project success and failure factors and their consequences as early as possible can provide valuable clues to help project managers improve their chances of success. It is the long-range goal of our research to shed light on these factors and to provide a tool that project managers can use to help better manage their software development projects. This paper will present a review of the general background to our work; the results from the current research and conclude with a discussion of the findings thus far. The findings will include a list of 23 unique Critical Success Factors identified throughout the literature, which we believe to be essential for Project Managers. The implications of these results will be discussed along with the lessons learnt.",TRUE,research problem
R11,Science,R29210,Identification and assessment of risks associated with ERP post-implementation in China,S96931,R29211,has research problem,R29113,Enterprise resource planning,"Purpose – The purpose of this paper is to identify, assess and explore potential risks that Chinese companies may encounter when using, maintaining and enhancing their enterprise resource planning (ERP) systems in the post‐implementation phase.Design/methodology/approach – The study adopts a deductive research design based on a cross‐sectional questionnaire survey. This survey is preceded by a political, economic, social and technological analysis and a set of strength, weakness, opportunity and threat analyses, from which the researchers refine the research context and select state‐owned enterprises (SOEs) in the electronic and telecommunications industry in Guangdong province as target companies to carry out the research. The questionnaire design is based on a theoretical risk ontology drawn from a critical literature review process. The questionnaire is sent to 118 selected Chinese SOEs, from which 42 (84 questionnaires) valid and usable responses are received and analysed.Findings – The findings ident...",TRUE,research problem
R11,Science,R29212,Challenges and influential factors in ERP adoption and implementation,S96948,R29213,has research problem,R29113,Enterprise resource planning,"The adoption and implementation of Enterprise Resource Planning (ERP) systems is a challenging and expensive task that not only requires rigorous efforts but also demands to have a detailed analysis of such factors that are critical to the adoption or implementation of ERP systems. Many efforts have been made to identify such influential factors for ERP; however, they are not filtered comprehensively in terms of the different perspectives. This paper focuses on the ERP critical success factors from five different perspectives such as: stakeholders; process; technology; organisation; and project. Results from the literature review are presented and 19 such factors are identified that are imperative for a successful ERP implementation, which are listed in order of their importance. Considering these factors can realize several benefits such as reducing costs and saving time or extra effort.",TRUE,research problem
R11,Science,R29217,The Core Critical Success Factors in Implementation of Enterprise Resource Planning Systems,S96984,R29218,has research problem,R29113,Enterprise resource planning,"The Implementation of Enterprise Resource Planning ERP systems require huge investments while ineffective implementations of such projects are commonly observed. A considerable number of these projects have been reported to fail or take longer than it was initially planned, while previous studies show that the aim of rapid implementation of such projects has not been successful and the failure of the fundamental goals in these projects have imposed huge amounts of costs on investors. Some of the major consequences are the reduction in demand for such products and the introduction of further skepticism to the managers and investors of ERP systems. In this regard, it is important to understand the factors determining success or failure of ERP implementation. The aim of this paper is to study the critical success factors CSFs in implementing ERP systems and to develop a conceptual model which can serve as a basis for ERP project managers. These critical success factors that are called ""core critical success factors"" are extracted from 62 published papers using the content analysis and the entropy method. The proposed conceptual model has been verified in the context of five multinational companies.",TRUE,research problem
R11,Science,R29224,Comparing risk and success factors in ERP projects: a literature review,S97035,R29225,has research problem,R29113,Enterprise resource planning,"Although research and practice has attributed considerable attention to Enterprise Resource Planning (ERP) projects their failure rate is still high. There are two main fields of research, which aim at increasing the success rate of ERP projects: Research on risk factors and research on success factors. Despite their topical relatedness, efforts to integrate these two fields have been rare. Against this background, this paper analyzes 68 articles dealing with risk and success factors and categorizes all identified factors into twelve categories. Though some topics are equally important in risk and success factor research, the literature on risk factors emphasizes topics which ensure achieving budget, schedule and functionality targets. In contrast, the literature on success factors concentrates more on strategic and organizational topics. We argue that both fields of research cover important aspects of project success. The paper concludes with the presentation of a possible holistic consideration to integrate both, the understanding of risk and success factors.",TRUE,research problem
R11,Science,R29226,A FRAMEWORK FOR CLASSIFYING RISKS IN ERP MAINTENANCE PROJECTS,S97050,R29227,has research problem,R29113,Enterprise resource planning,"Enterprise resource planning (ERP) is one the most common applications implemented by firms around the world. These systems cannot remain static after their implementation, they need maintenance. This process is required by the rapidly-changing business environment and the usual software maintenance needs. However, these projects are highly complex and risky. So, the risks management associated with ERP maintenance projects is crucial to attain a satisfactory performance. Unfortunately, ERP maintenance risks have not been studied in depth. For this reason, this paper presents a framework, which gathers together the risks affecting the performance of ERP maintenance.",TRUE,research problem
R11,Science,R29231,Evaluation of Key Success Factors Influencing ERP Implementation Success,S97082,R29232,has research problem,R29113,Enterprise resource planning,"Enterprise Resource Planning (ERP) application is often viewed as a strategic investment that can provide significant competitive advantage with positive return thus contributing to the firms' revenue and growth. Despite such strategic importance given to ERP the implementation success to achieve the desired goal has been viewed disappointing. There have been numerous industry stories about failures of ERP initiatives. There have also been stories reporting on the significant benefits achieved from successful ERP initiatives. This study review the industry and academic literature on ERP results and identify possible trends or factors which may help future ERP initiatives achieve greater success and less failure. The purpose of this study is to review the industry and academic literature on ERP results, identify and discuss critical success factors which may help future ERP initiatives achieve greater success and less failure.",TRUE,research problem
R11,Science,R29238,Critical success factors in enterprise resource planning systems,S97134,R29239,has research problem,R29113,Enterprise resource planning,"Organizations perceive ERP as a vital tool for organizational competition as it integrates dispersed organizational systems and enables flawless transactions and production. This review examines studies investigating Critical Success Factors (CSFs) in implementing Enterprise Resource Planning (ERP) systems. Keywords relating to the theme of this study were defined and used to search known Web engines and journal databases for studies on both implementing ERP systems per se and integrating ERP systems with other well-known systems (e.g., SCM, CRM) whose importance to business organizations and academia is acknowledged to work in a complementary fashion. A total of 341 articles were reviewed to address three main goals. This study structures previous research by presenting a comprehensive taxonomy of CSFs in the area of ERP. Second, it maps studies, identified through an exhaustive and comprehensive literature review, to different dimensions and facets of ERP system implementation. Third, it presents studies investigating CSFs in terms of a specific ERP lifecycle phase and across the entire ERP life cycle. This study not only reviews articles in which an ERP system is the sole or primary field of research, but also articles that refer to an integration of ERP systems and other popular systems (e.g., SCM, CRM). Finally it provides a comprehensive bibliography of the articles published during this period that can serve as a guide for future research.",TRUE,research problem
R11,Science,R29243,Critical elements for a successful enterprise resource planning implementation in small-and medium-sized enterprises,S97150,R29244,has research problem,R29113,Enterprise resource planning,"The body of research relating to the implementation of enterprise resource planning (ERP) systems in small- and medium-sized enterprises (SMEs) has been increasing rapidly over the last few years. It is important, particularly for SMEs, to recognize the elements for a successful ERP implementation in their environments. This research aims to examine the critical elements that constitute a successful ERP implementation in SMEs. The objective is to identify the constituents within the critical elements. A comprehensive literature review and interviews with eight SMEs in the UK were carried out. The results serve as the basic input into the formation of the critical elements and their constituents. Three main critical elements are formed: critical success factors, critical people and critical uncertainties. Within each critical element, the related constituents are identified. Using the process theory approach, the constituents within each critical element are linked to their specific phase(s) of ERP implementation. Ten constituents for critical success factors were found, nine constituents for critical people and 21 constituents for critical uncertainties. The research suggests that a successful ERP implementation often requires the identification and management of the critical elements and their constituents at each phase of implementation. The results are constructed as a reference framework that aims to provide researchers and practitioners with indicators and guidelines to improve the success rate of ERP implementation in SMEs.",TRUE,research problem
R11,Science,R29248,ERP systems and open source: an initial review and some implications for SMEs,S97164,R29249,has research problem,R29113,Enterprise resource planning,"Purpose – The purpose of this paper is to further build up the knowledge about reasons for small and mid‐sized enterprises (SMEs) to adopt open source enterprise resource planning (ERP) systems.Design/methodology/approach – The paper presents and analyses findings in articles about proprietary ERPs and open source ERPs. In addition, a limited investigation of the distribution channel SourceForge for open source is made.Findings – The cost perspective seems to receive a high attention regarding adoption of open source ERPs. This can be questioned and the main conclusion is that costs seem to have a secondary role in adoption or non adoption of open source ERPs.Research limitations/implications – The paper is mainly a conceptual paper written from a literature review. The ambition is to search support for the findings by doing more research in the area.Practical implications – The findings presented are of interest both for developers of proprietary ERPs as well as SMEs since it is shown that there are defi...",TRUE,research problem
R11,Science,R29252,The ERP implementation of SME in China,S97178,R29253,has research problem,R29113,Enterprise resource planning,"Enterprise Resource Planning-ERP implementing of small and middle size enterprise — SME is different from the large one. Based on the analysis on the character of ERP marketing and SMEs of China, 6 critical success factors are recommended. The research suggests that the top management support is most important to ERP implement in SME of China, in which paternalism prevails. Database of management and capital are main obstacles. ERP 1 or ERP 2 fits to demand of SME; high power project team has tremendous significance in the situation of absence of IT engineer for SME; education and training is helpful to successfully ERP implementing. The results service as better understanding the ERP implementation of SME in China and gaining the good performance of ERP implementation.",TRUE,research problem
R11,Science,R29255,A Comparative Study of Issues Affecting ERP Implementation in Large Scale and Small Medium Scale Enterprises in India: A Pareto Approach,S97194,R29256,has research problem,R29113,Enterprise resource planning,"This paper attempts to explore and identify issues affecting Enterprise Resource Planning (ERP) implementation in context to Indian Small and Medium Enterprises (SMEs) and large enterprises. Issues which are considered more important for large scale enterprises may not be of equal importance for a small and medium scale enterprise and hence replicating the implementation experience which holds for large organizations will not be a wise approach on the part of the implementation vendors targeting small scale enterprises. This paper attempts to highlight those specific issues where a different approach needs to be adopted. Pareto analysis has been applied to identify the issues for Indian SMEs and Large scale enterprises as available from the published literature. Also by doing comparative analysis between the identified issues for Indian large enterprises and SMEs four issues are proved to be crucial for SMEs in India but not for large enterprises such as proper system implementation strategy, clearly defined scope of implementation procedure, proper project planning and minimal customization of the system selected for implementation, because of some limitations faced by the Indian SMEs compared to large enterprises.",TRUE,research problem
R11,Science,R29268,ERP Systems in SMEs: A Literature Review,S97239,R29269,has research problem,R29113,Enterprise resource planning,"This review summarizes research on enterprise resource planning (ERP) systems in small and medium-size enterprises (SMEs). Due to the close-to-saturation of ERP adoptions in large enterprises (LEs), ERP vendors now focus more on SMEs. Moreover, because of globalization, partnerships, value networks, and the huge information flow across and within SMEs nowadays, more and more SMEs are adopting ERP systems. Risks of adoption rely on the fact that SMEs have limited resources and specific characteristics that make their case different from LEs. The main focus of this article is to shed the light on the areas that lack sufficient research within the ERP in SMEs domain, suggest future research avenues, as well as, present the current research findings that could aid practitioners, suppliers, and SMEs when embarking on ERP projects.",TRUE,research problem
R11,Science,R29273,Enterprise resource planning post-adoption value: a literature review amongst small and medium enterprises,S97252,R29274,has research problem,R29113,Enterprise resource planning,"It is consensual that Enterprise Resource Planning (ERP) after a successful implementation has significant effects on the productivity of firm as well small and medium-sized enterprises (SMEs) recognized as fundamentally different environments compared to large enterprises. There are few reviews in the literature about the post-adoption phase and even fewer at SME level. Furthermore, to the best of our knowledge there is none with focus in ERP value stage. This review will fill this gap. It provides an updated bibliography of ERP publications published in the IS journal and conferences during the period of 2000 and 2012. A total of 33 articles from 21 journals and 12 conferences are reviewed. The main focus of this paper is to shed the light on the areas that lack sufficient research within the ERP in SME domain, in particular in ERP business value stage, suggest future research avenues, as well as, present the current research findings that could support researchers and practitioners when embarking on ERP projects.",TRUE,research problem
R11,Science,R29295,Limits to using ERP systems,S97349,R29296,has research problem,R29113,Enterprise resource planning,"The paper examines limitations that restrict the potential benefits from the use of Enterprise Resource Planning (ERP) systems in business firms. In the first part we discuss a limitation that arises from the strategic decision of top managers for mergers, acquisitions and divestitures as well as outsourcing. Managers tend to treat their companies like component-based business units, which are to be arranged and re-arranged to yet higher market values. Outsourcing of in-house activities to suppliers means disintegrating processes and information. Such consequences of strategic business decisions impose severe restrictions on what business organizations can benefit from ERP systems. The second part of the paper reflects upon the possibility of imbedding best practice business processes in ERP systems. We critically review the process of capturing and transferring best practices with a particular focus on context-dependence and nature of IT innovations.",TRUE,research problem
R11,Science,R29298,Potential impact of cultural differences on enterprise resource planning (ERP) projects,S97359,R29299,has research problem,R29113,Enterprise resource planning,"Over the last ten years, there has been a dramatic growth in the acquisition of Enterprise Resource Planning (ERP) systems, where the market leader is the German company, SAP AG. However, more recently, there has been an increase in reported ERP failures, suggesting that the implementation issues are not just technical, but encompass wider behavioural factors.",TRUE,research problem
R11,Science,R29304,Deconstructing information packages: organizational and behavioural implications of ERP systems,S97379,R29305,has research problem,R29113,Enterprise resource planning,"Argues that the organizational involvement of large scale information technology packages, such as those known as enterprise resource planning (ERP), has important implications that go far beyond the acknowledged effects of keeping the organizational operations accountable and integrated across functions and production sites. Claims that ERP packages are predicated on an understanding of human agency as a procedural affair and of organizations as an extended series of functional or cross‐functional transactions. Accordingly, the massive introduction of ERP packages to organizations is bound to have serious implications that precisely recount the procedural forms by which such packages instrument organizational operations and fashion organizational roles. The conception of human agency and organizational operations in procedural terms may seem reasonable yet it recounts a very specific and, in a sense, limited understanding of humans and organizations. The distinctive status of framing human agency and organizations in procedural terms becomes evident in its juxtaposition with other forms of human action like improvisation, exploration or playing. These latter forms of human involvement stand out against the serial fragmentation underlying procedural action. They imply acting on the world on loose premises that trade off a variety of forms of knowledge and courses of action in attempts to explore and discover alternative ways of coping with reality.",TRUE,research problem
R11,Science,R29306,Developing a cultural perspective on ERP,S97389,R29307,has research problem,R29113,Enterprise resource planning,"Purpose – To develop an analytical framework through which the organizational cultural dimension of enterprise resource planning (ERP) implementations can be analyzed.Design/methodology/approach – This paper is primarily based on a review of the literature.Findings – ERP is an enterprise system that offers, to a certain extent, standard business solutions. This standardization is reinforced by two processes: ERP systems are generally implemented by intermediary IT organizations, mediating between the development of ERP‐standard software packages and specific business domains of application; and ERP systems integrate complex networks of production divisions, suppliers and customers.Originality/value – In this paper, ERP itself is presented as problematic, laying heavy burdens on organizations – ERP is a demanding technology. While in some cases recognizing the mutual shaping of technology and organization, research into ERP mainly addresses the economic‐technological rationality of ERP (i.e. matters of eff...",TRUE,research problem
R11,Science,R29308,Training for ERP: does the is training literature have value?,S97399,R29309,has research problem,R29113,Enterprise resource planning,"This paper examines end-user training (EUT) in enterprise resource planning (ERP) systems, with the aim of identifying whether current EUT research is applicable to ERP systems. An extensive review and analysis of EUT research in mainstream IS journals was undertaken. The findings of this analysis were compared to views expressed by a leading ERP trainer in a large Australian company. The principles outlined in the EUT literature were used to construct the Training, Education and Learning Strategy model for an ERP environment. Our analysis found very few high-quality empirical studies involving EUT training in such an environment. Moreover, we argue that while the extensive EUT literature provides a rich source of ideas about ERP training, the findings of many studies cannot be transferred to ERP systems, as these systems are inherently more complex than the office-based, non-mandatory applications upon which most IS EUT research is based.",TRUE,research problem
R11,Science,R29316,Organisations and vanilla software: what do we know about ERP systems and competitive advantage?,S97432,R29317,has research problem,R29113,Enterprise resource planning,"Enterprise Resource Planning (ERP) systems have become a de facto standard for integrating business functions. But an obvious question arises: if every business is using the same so-called “Vanilla” software (e.g. an SAP ERP system) what happens to the competitive advantage from implementing IT systems? If we discard our custom-built legacy systems in favour of enterprise systems do we also jettison our valued competitive advantage from IT? While for some organisations ERPs have become just a necessity for conducting business, others want to exploit them to outperform their competitors. In the last few years, researchers have begun to study the link between ERP systems and competitive advantage. This link will be the focus of this paper. We outline a framework summarizing prior research and suggest two researchable questions. A future article will develop the framework with two empirical case studies from within part of the European food industry.",TRUE,research problem
R11,Science,R29320,ERP systems business value: a critical review of empirical literature,S97444,R29321,has research problem,R29113,Enterprise resource planning,"The business value generated by information and communication technologies (ICT) has been for long time a major research topic. Recently there is a growing research interest in the business value generated by particular types of information systems (IS). One of them is the enterprise resource planning (ERP) systems, which are increasingly adopted by organizations for supporting and integrating key business and management processes. The current paper initially presents a critical review of the existing empirical literature concerning the business value of the ERP systems, which investigates the impact of ERP systems adoption on various measures of organizational performance. Then is critically reviewed the literature concerning the related topic of critical success factors (CSFs) in ERP systems implementation, which aims at identifying and investigating factors that result in more successful ERP systems implementation that generate higher levels of value for organizations. Finally, future directions of research concerning ERP systems business value are proposed.",TRUE,research problem
R11,Science,R29332,Taking knowledge management on the ERP road: a two-dimensional analysis”,S97487,R29333,has research problem,R29113,Enterprise resource planning,"In today's fierce business competition, companies face the tremendous challenge of expanding markets, improving their products, services and processes and exploiting their intellectual capital in a dynamic network of knowledge-intensive relations inside and outside their borders. In order to accomplish these objectives, more and more companies are turning to the Enterprise Resource Planning systems (ERP). On the other hand, Knowledge Management (KM) has received considerable attention in the last decade and is continuously gaining interest by industry, enterprises and academia. As we are moving into an era of “knowledge capitalism”, knowledge management will play a fundamental role in the success of today's businesses. This paper aims at throwing light on the role of KM in the ERP success first and on their possible integration second. A wide range of academic and practitioner literature related to KM and ERP is reviewed. On the basis of this review, the paper gives answers to specific research questions and analyses future research directions.",TRUE,research problem
R11,Science,R29340,Taxonomy of cost of quality (COQ) across the enterprise resource planning (ERP) implementation phases”,S97536,R29341,has research problem,R29113,Enterprise resource planning,"Companies declare that quality or customer satisfaction is their top priority in order to keep and attract more business in an increasingly competitive marketplace. The cost of quality (COQ) is a tool which can help determine the optimal level of quality investment. COQ analysis enables organizations to identify measure and control the consequences of poor quality. This study attempts to identify the COQ elements across the enterprise resource planning (ERP) implementation phases for the ERP implementation services of consultancy companies. The findings provide guidance to project managers on how best to utilize their limited resources. In summary, we suggest that project teams should focus on “value-added” activities and minimize the cost of “non-value-added” activities at each phase of the ERP implementation project. Key words: Services, ERP implementation services, quality standard, service quality standard, cost of quality, project management, project quality management, project financial management.",TRUE,research problem
R11,Science,R27129,Effects of Exchange Rate Volatility on Trade: Some Further Evidence (Effets de l'instabilite des taux de change sur le commerce mondial: nouvelles constatations) (Efectos de la inestabilidad de los tipos de cambio en el comercio internacional: Alguna evidencia adicional),S87248,R27130,has research problem,R27126,Exchange rate volatility,"A recent survey of the empirical studies examining the effects of exchange rate volatility on international trade concluded that ""the large majority of empirical studies... are unable to establish a systematically significant link between measured exchange rate variability and the volume of international trade, whether on an aggregated or on a bilateral basis"" (International Monetary Fund, Exchange Rate Volatility and World Trade, Washington, July 1984, p. 36). A recent paper by M.A. Akhtar and R.S. Hilton (""Exchange Rate Uncertainty and International Trade,"" Federal Reserve Bank of New York, May 1984), in contrast, suggests that exchange rate volatility, as measured by the standard deviation of indices of nominal effective exchange rates, has had significant adverse effects on the trade in manufactures of the United States and the Federal Republic of Germany. The purpose of the present study is to test the robustness of Akhtar and Hilton's empirical results, with their basic theoretical framework taken as given. The study extends their analysis to include France, Japan, and the United Kingdom; it then examines the robustness of the results with respect to changes in the choice of sample period, volatility measure, and estimation techniques. The main conclusion of the analysis is that the methodology of Akhtar and Hilton fails to establish a systematically significant link between exchange rate volatility and the volume of international trade. 
This is not to say that significant adverse effects cannot be detected in individual cases, but rather that, viewed in the large, the results tend to be insignificant or unstable. Specifically, the results suggest that straightforward application of Akhtar and Hilton's methodology to three additional countries (France, Japan, and the United Kingdom) yields mixed results; that their methodology seems to be flawed in several respects, and that correction for such flaws has the effect of weakening their conclusions; that the estimates are quite sensitive to fairly minor variations in methodology; and that ""revised"" estimates for the five countries do not, for the most part, support the hypothesis that exchange rate volatility has had a systematically adverse effect on trade.",TRUE,research problem
R11,Science,R27149,Real Exchange Rate Volatility and U.S. Bilateral Trade: A VAR Approach,S87323,R27150,has research problem,R27126,Exchange rate volatility,"This paper uses VAR models to investigate the impact of real exchange rate volatility on U.S. bilateral imports from the United Kingdom, France, Germany, Japan and Canada. The VAR systems include U.S. and foreign macro variables, and are estimated separately for each country. The major results suggest that the effect of volatility on imports is weak, although permanent shocks to volatility do have a negative impact on this measure of trade, and those effects are relatively more important over the flexible rate period. Copyright 1989 by MIT Press.",TRUE,research problem
R11,Science,R27168,Exchange Rate Volatility and International Prices,S87415,R27169,has research problem,R27126,Exchange rate volatility,"We examine how exchange rate volatility affects exporter's pricing decisions in the presence of optimal forward covering. By taking account of forward covering, we are able to derive an expression for the risk premium in the foreign exchange market, which is then estimated as a generalized ARCH model to obtain the time-dependent variance of the exchange rate. Our theory implies a connection between the estimated risk premium equation, and the influence of exchange rate volatility on export prices. In particular, we argue that if there is no risk premium, then exchange rate variance can only have a negative impact on export prices. In the presence of a risk premium, however, the effect of exchange rate variance on export prices is ambiguous, and may be statistically insignificant with aggregate data. These results are supported using data on aggregate U.S. imports and exchange rates of the dollar against the pound. yen and mark.",TRUE,research problem
R11,Science,R27190,Does Exchange Rate Volatility Depress Trade Flows? Evidence from Error- Correction Models,S87513,R27191,has research problem,R27126,Exchange rate volatility,"This paper examines the impact of exchange rate volatility on the trade flows of the G-7 countries in the context of a multivariate error-correction model. The error-correction models do not show any sign of parameter instability. The results indicate that the exchange rate volatility has a significant negative impact on the volume of exports in each of the G-7 countries. Assuming market participants are risk averse, these results imply that exchange rate uncertainty causes them to reduce their activities, change prices, or shift sources of demand and supply in order to minimize their exposure to the effects of exchange rate volatility. This, in turn, can change the distribution of output across many sectors in these countries. It is quite possible that the surprisingly weak relationship between trade flows and exchange rate volatility reported in several previous studies are due to insufficient attention to the stochastic properties of the relevant time series. Copyright 1993 by MIT Press.",TRUE,research problem
R11,Science,R27209,Estimating the impact of exchange rate volatility on exports: evidence from Asian countries,S87588,R27210,has research problem,R27126,Exchange rate volatility,"The paper examines the impact of exchange rate volatility on the exports of five Asian countries. The countries are Turkey, South Korea, Malaysia, Indonesia and Pakistan. The impact of a volatility term on exports is examined by using an Engle-Granger residual-based cointegrating technique. The results indicate that the exchange rate volatility reduced real exports for these countries. This might mean that producers in these countries are risk-averse. The producers will prefer to sell in domestic markets rather than foreign markets if the exchange rate volatility increases.",TRUE,research problem
R11,Science,R27217,Exchange Rate Volatility and Trade among the Asia Pacific,S87616,R27218,has research problem,R27126,Exchange rate volatility,"The purpose of this paper is to investigate the impact of exchange rate volatility on exports among 14 Asia Pacific countries, where various measures to raise the intra-region trade are being implemented. Specifically, this paper estimates a gravity model, in which the dependent variable is the product of the exports of two trading countries. In addition, it also estimates a unilateral exports model, in which the dependent variable is not the product of the exports of two trading countries but the exports from one country to another. By doing this, the depreciation rate of the exporting country's currency value can be included as one of the explanatory variables affecting the volume of exports. As the explanatory variables of the export volume, the gravity model adopts the product of the GDPs of two trading countries, their bilateral exchange rate volatility, their distance, a time trend and dummies for the share of the border line, the use of the same language, and the APEC membership. In the case of the unilateral exports model, the product of the GDPs is replaced by the GDP of the importing country, and the depreciation rate of the exporting country's currency value is added. In addition, considering that the export volume will also depend on various conditions of the exporting country, dummies for exporting countries are also included as an explanatory variable. The empirical tests, using annual data for the period from 1980 to 2002, detect a significant negative impact of exchange rate volatility on the volume of exports. In addition, various tests using the data for sub-sample periods indicate that the negative impact had been weakened since 1989, when APEC was launched, and surged again from 1997, when the Asian financial crisis broke out. 
This finding implies that the impact of exchange rate volatility is time-dependent and that it is significantly negative at least in the present time. This phenomenon is noticed regardless of which estimation model is adopted. In addition, the test results show that the GDP of the importing country, the depreciation of the exporting country's currency value, the use of the same language and the membership of APEC have positive impacts on exports, while the distance between trading countries has a negative impact. Finally, it turns out that the negative impact of exchange rate volatility is much weaker among OECD countries than among non-OECD countries.",TRUE,research problem
R11,Science,R27225,Exchange Rate Uncertainty in Turkey and its Impact on Export Volume,S87650,R27226,has research problem,R27126,Exchange rate volatility,"This paper investigates the impact of real exchange rate volatility on Turkey’s exports to its most important trading partners using quarterly data for the period 1982 to 2001. Cointegration and error correction modeling approaches are applied, and estimates of the cointegrating relations are obtained using Johansen’s multivariate procedure. Estimates of the short-run dynamics are obtained through the error correction technique. Our results indicate that exchange rate volatility has a significant positive effect on export volume in the long run. This result may indicate that firms operating in a small economy, like Turkey, have little option for dealing with increased exchange rate risk.",TRUE,research problem
R11,Science,R27230,Exchange Rate Volatility and Trade Flows of the U.K. in 1990s,S87674,R27231,has research problem,R27126,Exchange rate volatility,"This paper examines the impact of exchange rate volatility on trade flows in the U.K. over the period 1990–2000. According to the conventional approach, exchange rate volatility clamps down trade volumes. This paper, however, identifies the existence of a positive relationship between exchange rate volatility and imports in the U.K. in the 1990s by using a bivariate GARCH-in-mean model. It highlights a possible emergence of a polarized version with conventional proposition that ERV works as an impediment factor on trade flows.",TRUE,research problem
R11,Science,R30584,Precise eye localization through a general-to-specific model definition,S102230,R30642,has research problem,R30583,Eye localization,"We present a method for precise eye localization that uses two Support Vector Machines trained on properly selected Haar wavelet coefficients. The evaluation of our technique on many standard databases exhibits very good performance. Furthermore, we study the strong correlation between the eye localization error and the face recognition rate.",TRUE,research problem
R11,Science,R30590,Average of Synthetic Exact Filters,S101846,R30591,has research problem,R30583,Eye localization,"This paper introduces a class of correlation filters called average of synthetic exact filters (ASEF). For ASEF, the correlation output is completely specified for each training image. This is in marked contrast to prior methods such as synthetic discriminant functions (SDFs) which only specify a single output value per training image. Advantages of ASEF training include: insensitivity to over-fitting, greater flexibility with regard to training images, and more robust behavior in the presence of structured backgrounds. The theory and design of ASEF filters is presented using eye localization on the FERET database as an example task. ASEF is compared to other popular correlation filters including SDF, MACE, OTF, and UMACE, and with other eye localization methods including Gabor Jets and the OpenCV cascade classifier. ASEF is shown to outperform all these methods, locating the eye to within the radius of the iris approximately 98.5% of the time.",TRUE,research problem
R11,Science,R30594,Eye Localization based on Multi-Scale Gabor Feature Vector Model,S102096,R30624,has research problem,R30583,Eye localization,"Eye localization is necessary for face recognition and related application areas. Most of eye localization algorithms reported thus far still need to be improved about precision and computational time for successful applications. In this paper, we propose an improved eye localization method based on multi-scale Gabor feature vector models. The proposed method first tries to locate eyes in the downscaled face image by utilizing Gabor Jet similarity between Gabor feature vector at an initial eye coordinates and the eye model bunch of the corresponding scale. The proposed method finally locates eyes in the original input face image after it processes in the same way recursively in each scaled face image by using the eye coordinates localized in the downscaled image as initial eye coordinates. Experiments verify that our proposed method improves the precision rate without causing much computational overhead compared with other eye localization methods reported in the previous researches.",TRUE,research problem
R11,Science,R30606,2D cascaded AdaBoost for eye localization,S102127,R30627,has research problem,R30583,Eye localization,"In this paper, 2D cascaded AdaBoost, a novel classifier designing framework, is presented and applied to eye localization. By the term ""2D"", we mean that in our method there are two cascade classifiers in two directions: The first one is a cascade designed by bootstrapping the positive samples, and the second one, as the component classifiers of the first one, is cascaded by bootstrapping the negative samples. The advantages of the 2D structure include: (1) it greatly facilitates the classifier designing on huge-scale training set; (2) it can easily deal with the significant variations within the positive (or negative) samples; (3) both the training and testing procedures are more efficient. The proposed structure is applied to eye localization and evaluated on four public face databases, extensive experimental results verified the effectiveness, efficiency, and robustness of the proposed method",TRUE,research problem
R11,Science,R30608,A robust eye localization method for low quality face images,S102152,R30631,has research problem,R30583,Eye localization,"Eye localization is an important part in face recognition system, because its precision closely affects the performance of face recognition. Although various methods have already achieved high precision on the face images with high quality, their precision will drop on low quality images. In this paper, we propose a robust eye localization method for low quality face images to improve the eye detection rate and localization precision. First, we propose a probabilistic cascade (P-Cascade) framework, in which we reformulate the traditional cascade classifier in a probabilistic way. The P-Cascade can give chance to each image patch contributing to the final result, regardless the patch is accepted or rejected by the cascade. Second, we propose two extensions to further improve the robustness and precision in the P-Cascade framework. There are: (1) extending feature set, and (2) stacking two classifiers in multiple scales. Extensive experiments on JAFFE, BioID, LFW and a self-collected video surveillance database show that our method is comparable to state-of-the-art methods on high quality images and can work well on low quality images. This work supplies a solid base for face recognition applications under unconstrained or surveillance environments.",TRUE,research problem
R11,Science,R30620,Eye localization through multiscale sparse dictionaries,S102065,R30621,has research problem,R30583,Eye localization,"This paper presents a new eye localization method via Multiscale Sparse Dictionaries (MSD). We built a pyramid of dictionaries that models context information at multiple scales. Eye locations are estimated at each scale by fitting the image through sparse coefficients of the dictionary. By using context information, our method is robust to various eye appearances. The method also works efficiently since it avoids sliding a search window in the image during localization. The experiments in BioID database prove the effectiveness of our method.",TRUE,research problem
R11,Science,R30629,Enhanced Pictorial Structures for precise eye localization under uncontrolled conditions,S102143,R30630,has research problem,R30583,Eye localization,"In this paper, we present an enhanced pictorial structure (PS) model for precise eye localization, a fundamental problem involved in many face processing tasks. PS is a computationally efficient framework for part-based object modelling. For face images taken under uncontrolled conditions, however, the traditional PS model is not flexible enough for handling the complicated appearance and structural variations. To extend PS, we 1) propose a discriminative PS model for a more accurate part localization when appearance changes seriously, 2) introduce a series of global constraints to improve the robustness against scale, rotation and translation, and 3) adopt a heuristic prediction method to address the difficulty of eye localization with partial occlusion. Experimental results on the challenging LFW (Labeled Face in the Wild) database show that our model can locate eyes accurately and efficiently under a broad range of uncontrolled variations involving poses, expressions, lightings, camera qualities, occlusions, etc.",TRUE,research problem
R11,Science,R30634,For your eyes only,S102184,R30635,has research problem,R30583,Eye localization,"In this paper, we take a look at an enhanced approach for eye detection under difficult acquisition circumstances such as low-light, distance, pose variation, and blur. We present a novel correlation filter based eye detection pipeline that is specifically designed to reduce face alignment errors, thereby increasing eye localization accuracy and ultimately face recognition accuracy. The accuracy of our eye detector is validated using data derived from the Labeled Faces in the Wild (LFW) and the Face Detection on Hard Datasets Competition 2011 (FDHD) sets. The results on the LFW dataset also show that the proposed algorithm exhibits enhanced performance, compared to another correlation filter based detector, and that a considerable increase in face recognition accuracy may be achieved by focusing more effort on the eye localization stage of the face recognition process. Our results on the FDHD dataset show that our eye detector exhibits superior performance, compared to 11 different state-of-the-art algorithms, on the entire set of difficult data without any per set modifications to our detection or preprocessing algorithms. The immediate application of eye detection is automatic face recognition, though many good applications exist in other areas, including medical research, training simulators, communication systems for the disabled, and automotive engineering.",TRUE,research problem
R11,Science,R30644,Automatic eye detection and its validation,S102255,R30645,has research problem,R30583,Eye localization,"The accuracy of face alignment affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an accurate eye localization algorithm is therefore essential for accurate face recognition. In this paper, we first study the impact of eye locations on face recognition accuracy, and then introduce an automatic technique for eye detection. The performance of our automatic eye detection technique is subsequently validated using FRGC 1.0 database. The validation shows that our eye detector has an overall 94.5% eye detection rate, with the detected eyes very close to the manually provided eye positions. In addition, the face recognition performance based on the automatic eye detection is shown to be comparable to that of using manually given eye positions.",TRUE,research problem
R11,Science,R34436,Facial Expression Recognition using PCA and Gabor with JAFFE Database,S120040,R34437,has research problem,R34431,Facial Expression Recognition,"Abstract — In this paper I discussed Facial Expression Recognition System in two different ways and with two different databases. Principal Component Analysis is used here for feature extraction. I used JAFFE (Japanese Female Facial Expression). I implemented system with JAFFE database, I got accuracy of the algorithm is about 70-71% which gives quite poor Efficiency of the system. Then I implemented facial expression recognition system with Gabor filter and PCA. Here Gabor filter selected because of its good feature extraction property. The output of the Gabor filter was used as an input for the PCA. PCA has a good feature of dimension reduction so it was choose for that purpose.",TRUE,research problem
R11,Science,R34438,A Region Based Methodology for Facial Expression Recognition,S120050,R34439,has research problem,R34431,Facial Expression Recognition,"Facial expression recognition is an active research field which accommodates the need of interaction between humans and machines in a broad field of subjects. This work investigates the performance of a multi-scale and multi-orientation Gabor Filter Bank constructed in such a way to avoid redundant information. A region based approach is employed using different neighbourhood size at the locations of 34 fiducial points. Furthermore, a reduced set of 19 fiducial points is used to model the face geometry. The use of Principal Component Analysis (PCA) is evaluated. The proposed methodology is evaluated for the classification of the 6 basic emotions proposed by Ekman considering neutral expression as the seventh emotion.",TRUE,research problem
R11,Science,R34440,Jian-Cheng Huang “A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA”,S120061,R34441,has research problem,R34431,Facial Expression Recognition,"This paper proposes a facial expression recognition system based on Gabor features using a novel local Gabor filter bank. Traditionally, a global Gabor filter bank with 5 frequencies and 8 orientations is often used to extract the Gabor feature. A lot of time is involved in extraction, and the dimensions of such a Gabor feature vector are prohibitively high. A novel local Gabor filter bank with a reduced set of frequency and orientation parameters is proposed. In order to evaluate the performance of the local Gabor filter bank, we first employed a two-stage feature compression method, PCA plus LDA, to select and compress the Gabor features, then adopted a minimum distance classifier to recognize facial expression. Experimental results show that the method is effective for dimension reduction and good recognition performance in comparison with the traditional entire Gabor filter bank. The best average recognition rate achieves 97.33% for the JAFFE facial expression database. Keywords: local Gabor filter bank, feature extraction, PCA, LDA, facial expression recognition. Facial expressions deliver rich information about human emotion and play an essential role in human communication. In order to facilitate a more intelligent and natural human machine interface of new products, automatic facial expression recognition [1][18][20] had been studied worldwide over the past ten years, and has become a very active research area in computer vision and pattern recognition. Many approaches have been proposed for facial expression analysis from static images and image sequences [12][18] in the literature. In this paper, we focus on the recognition of facial expression from single digital images via feature extraction. Supported by: Motorola Labs Research Foundation (No.303D804372), NSFC (No.60275005), GDNSF 105938.",TRUE,research problem
R11,Science,R34444,Facial Expression Recognition from Line-Based Caricatures,S120087,R34445,has research problem,R34431,Facial Expression Recognition,"The automatic recognition of facial expression presents a significant challenge to the pattern analysis and man-machine interaction research community. Recognition from a single static image is particularly a difficult task. In this paper, we present a methodology for facial expression recognition from a single static image using line-based caricatures. The recognition process is completely automatic. It also addresses the computational expensive problem and is thus suitable for real-time applications. The proposed approach uses structural and geometrical features of a user sketched expression model to match the line edge map (LEM) descriptor of an input face image. A disparity measure that is robust to expression variations is defined. The effectiveness of the proposed technique has been evaluated and promising results are obtained. This work has proven the proposed idea that facial expressions can be characterized and recognized by caricatures.",TRUE,research problem
R11,Science,R34446,Robust Facial Expression Recognition Using Local Binary Patterns,S120098,R34447,has research problem,R34431,Facial Expression Recognition,"A novel low-computation discriminative feature space is introduced for facial expression recognition capable of robust performance over a range of image resolutions. Our approach is based on the simple local binary patterns (LBP) for representing salient micro-patterns of face images. Compared to Gabor wavelets, the LBP features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space, whilst still retaining facial information efficiently. Template matching with weighted Chi square statistic and support vector machine are adopted to classify facial expressions. Extensive experiments on the Cohn-Kanade Database illustrate that the LBP features are effective and efficient for facial expression discrimination. Additionally, experiments on face images with different resolutions show that the LBP features are robust to low-resolution images, which is critical in real-world applications where only low-resolution video input is available.",TRUE,research problem
R11,Science,R34448,Facial expression recognition and synthesis based on an appearance model,S120110,R34449,has research problem,R34431,Facial Expression Recognition,"Facial expression interpretation, recognition and analysis is a key issue in visual communication and man to machine interaction. We address the issues of facial expression recognition and synthesis and compare the proposed bilinear factorization based representations with previously investigated methods such as linear discriminant analysis and linear regression. We conclude that bilinear factorization outperforms these techniques in terms of correct recognition rates and synthesis photorealism especially when the number of training samples is restrained.",TRUE,research problem
R11,Science,R26144,"The size, origins, and character of Mongolia’s informal sector during the transition",S81619,R26145,has research problem,R26108,Informal sector,"The explosion of informal entrepreneurial activity during Mongolia's transition to a market economy represents one of the most visible signs of change in this expansive but sparsely populated Asian country. To deepen our understanding of Mongolia's informal sector during the transition, the author merges anecdotal experience from qualitative interviews with hard data from a survey of 770 informals in Ulaanbaatar, from a national household survey, and from official employment statistics. Using varied sources, the author generates rudimentary estimates of the magnitude of, and trends in, informal activity in Mongolia, estimates that are surprisingly consistent with each other. He evaluates four types of reasons for the burst of informal activity in Mongolia since 1990: 1) The crisis of the early and mid-1990s, during which large pools of labor were released from formal employment. 2) Rural to urban migration. 3) The ""market's"" reallocation of resources toward areas neglected under the old system: services such as distribution and transportation. 4) The institutional environments faced by the formal and informal sectors: hindering growth of the formal sector, facilitating entry for the informal sector. Formal labor markets haven't absorbed the labor made available by the crisis and by migration and haven't fully responded to the demand for new services. The relative ease of entering the informal market explains that market's great expansion. The relative difficulty of entering formal markets is not random but is driven by policy. Improving policies in the formal sector could afford the same ease of entry there as is currently being experienced in the informal sector.",TRUE,research problem
R11,Science,R31224,"Tracking the Middle-Income Trap: What is it, Who is in it, and Why?",S104869,R31267,has research problem,R31216,Middle-Income Trap,"This paper provides a working definition of what the middle-income trap is. We start by defining four income groups of GDP per capita in 1990 PPP dollars: low-income below $2,000; lower-middle-income between $2,000 and $7,250; upper-middle-income between $7,250 and $11,750; and high-income above $11,750. We then classify 124 countries for which we have consistent data for 1950–2010. In 2010, there were 40 low-income countries in the world, 38 lower-middle-income, 14 upper-middle-income, and 32 high-income countries. Then we calculate the threshold number of years for a country to be in the middle-income trap: a country that becomes lower-middle-income (i.e., that reaches $2,000 per capita income) has to attain an average growth rate of per capita income of at least 4.7 percent per annum to avoid falling into the lower-middle-income trap (i.e., to reach $7,250, the upper-middle-income threshold); and a country that becomes upper-middle-income (i.e., that reaches $7,250 per capita income) has to attain an average growth rate of per capita income of at least 3.5 percent per annum to avoid falling into the upper-middle-income trap (i.e., to reach $11,750, the high-income level threshold). Avoiding the middle-income trap is, therefore, a question of how to grow fast enough so as to cross the lower-middle-income segment in at most 28 years, and the upper-middle-income segment in at most 14 years. Finally, the paper proposes and analyzes one possible reason why some countries get stuck in the middle-income trap: the role played by the changing structure of the economy (from low-productivity activities into high-productivity activities), the types of products exported (not all products have the same consequences for growth and development), and the diversification of the economy. 
We compare the exports of countries in the middle-income trap with those of countries that graduated from it, across eight dimensions that capture different aspects of a country’s capabilities to undergo structural transformation, and test whether they are different. Results indicate that, in general, they are different. We also compare Korea, Malaysia, and the Philippines according to the number of products that each exports with revealed comparative advantage. We find that while Korea was able to gain comparative advantage in a significant number of sophisticated products and was well connected, Malaysia and the Philippines were able to gain comparative advantage in electronics only.",TRUE,research problem
R11,Science,R31228,Middle-Income Transitions: Trap or Myth?,S104876,R31268,has research problem,R31216,Middle-Income Trap,"During the last few years, the newly coined term middle-income trap has been widely used by policymakers to refer to the middle-income economies that seem to be stuck in the middle-income range. However, there is no accepted definition of the term in the literature. In this paper, we study historical transitions across income groups to see whether there is any evidence that supports the claim that economies do not advance. Overall, the data rejects this proposition. Instead, we argue that what distinguishes economies in their transition from middle to high income is fast versus slow transitions. We find that, historically, it has taken a “typical” economy 55 years to graduate from lower-middle income ($2,000 in 1990 purchasing power parity [PPP] $) to upper-middle income ($7,250 in 1990 PPP $). Likewise, we find that, historically, it has taken 15 years for an economy to graduate from upper-middle income to high income (above $11,750 in 1990 PPP $). Our analysis implies that as of 2013, there were 10 (out of 39) lower-middle-income economies and 4 (out of 15) upper-middle-income economies that were experiencing slow transitions (i.e., above 55 and 15 years, respectively). The historical evidence presented in this paper indicates that economies move up across income groups. Analyzing a large sample of economies over many decades indicates that experiences are wide, including many economies that today are high income that spent many decades traversing the middle-income segment.",TRUE,research problem
R11,Science,R31231,Growth Slowdowns and the Middle-Income Trap,S104835,R31257,has research problem,R31216,Middle-Income Trap,"The “middle-income trap” is the phenomenon of hitherto rapidly growing economies stagnating at middle-income levels and failing to graduate into the ranks of high-income countries. In this study we examine the middle-income trap as a special case of growth slowdowns, which are identified as large sudden and sustained deviations from the growth path predicted by a basic conditional convergence framework. We then examine their determinants by means of probit regressions, looking into the role of institutions, demography, infrastructure, the macroeconomic environment, output structure and trade structure. Two variants of Bayesian Model Averaging are used as robustness checks. The results—including some that indeed speak to the special status of middle-income countries—are then used to derive policy implications, with a particular focus on Asian economies.",TRUE,research problem
R11,Science,R31241,Middle-Income Traps: A Conceptual and Empirical Survey,S104797,R31242,has research problem,R31216,Middle-Income Trap,"The term ""middle-income trap"" has entered common parlance in the development policy community, despite the lack of a precise definition. This paper discusses in more detail the definitional issues associated with the term. It also provides evidence on whether the growth performance of middle-income countries (MICs) has been different from other income categories, including historical transition phases in the inter-country distribution of income. A transition matrix analysis and an exploration of cross-country growth patterns provide little support for the existence of a middle-income trap.",TRUE,research problem
R11,Science,R31244,Transitioning from Low-Income Growth to High-Income Growth: Is There a Middle Income Trap?,S104844,R31261,has research problem,R31216,Middle-Income Trap,"Is there a “middle-income trap”? Theory suggests that the determinants of growth at low and high income levels may be different. If countries struggle to transition from growth strategies that are effective at low income levels to growth strategies that are effective at high income levels, they may stagnate at some middle income level; this phenomenon can be thought of as a “middle-income trap.” Defining income levels based on per capita gross domestic product relative to the United States, we do not find evidence for (unusual) stagnation at any particular middle income level. However, we do find evidence that the determinants of growth at low and high income levels differ. These findings suggest a mixed conclusion: middle-income countries may need to change growth strategies in order to transition smoothly to high income growth strategies, but this can be done smoothly and does not imply the existence of a middle-income trap.",TRUE,research problem
R11,Science,R25533,A Flexible Model-Driven Game Development Approach,S76904,R25534,has research problem,R25530,Model-driven Game Development,"Game developers are facing an increasing demand for new games every year. Game development tools can be of great help, but require highly specialized professionals. Also, just as any software development effort, game development has some challenges. Model-Driven Game Development (MDGD) is suggested as a means to solve some of these challenges, but with a loss in flexibility. We propose a MDGD approach that combines multiple domain-specific languages (DSLs) with design patterns to provide flexibility and allow generated code to be integrated with manual code. After experimentation, we observed that, with the approach, less experienced developers can create games faster and more easily, and the product of code generation can be customized with manually written code, providing flexibility. However, with MDGD, developers become less familiar with the code, making manual codification more difficult.",TRUE,research problem
R11,Science,R25535,Automatic prototyping in model-driven game development,S76917,R25536,has research problem,R25530,Model-driven Game Development,"Model-driven game development (MDGD) is an emerging paradigm where models become first-order elements in game development, maintenance, and evolution. In this article, we present a first approach to 2D platform game prototyping automatization through the use of model-driven engineering (MDE). Platform-independent models (PIM) define the structure and the behavior of the games and a platform-specific model (PSM) describes the game control mapping. Automatic MOFscript transformations from these models generate the software prototype code in C++. As an example, Bubble Bobble has been prototyped in a few hours following the MDGD approach. The resulting code generation represents 93% of the game prototype.",TRUE,research problem
R11,Science,R25569,Engine- Cooperative Game Modeling (ECGM),S77118,R25570,has research problem,R25530,Model-driven Game Development,"Today game engines are popular in commercial game development, as they lower the threshold of game production by providing common technologies and convenient content-creation tools. Game engine based development is therefore the mainstream methodology in the game industry. Model-Driven Game Development (MDGD) is an emerging game development methodology, which applies the Model-Driven Software Development (MDSD) method in the game development domain. This simplifies game development by reducing the gap between game design and implementation. MDGD has to take advantage of the existing game engines in order to be useful in commercial game development practice. However, none of the existing MDGD approaches in literature has convincingly demonstrated good integration of its tools with the game engine tool-chain. In this paper, we propose a hybrid approach named ECGM to address the integration challenges of two methodologies with a focus on the technical aspects. The approach makes a run-time engine the base of the domain framework, and uses the game engine tool-chain together with the MDGD tool-chain. ECGM minimizes the change to the existing workflow and technology, thus reducing the cost and risk of adopting MDGD in commercial game development. Our contribution is one important step towards MDGD industrialization.",TRUE,research problem
R11,Science,R34102,Optic neuritis: oligoclo- nal bands increase the risk of multiple sclerosis,S118276,R34103,has research problem,R34101,Multiple sclerosis,"ABSTRACT‐ In 1974 we examined 30 patients 0.5–14 (mean 5) years after acute unilateral optic neuritis (ON), when no clinical signs of multiple sclerosis (MS) were discernable. 11 of the patients had oligoclonal bands in the cerebrospinal fluid (CSF). Re‐examination after an additional 6 years revealed that 9 of the 11 ON patients with oligoclonal bands (but only 1 of the 19 without this CSF abnormality) had developed MS. The occurrence of oligoclonal bands in CSF in a patient with ON is ‐ within the limits of the present observation time ‐ accompanied by a significantly increased risk of the future development of MS. Recurrent ON also occurred significantly more often in those ON patients who later developed MS.",TRUE,research problem
R11,Science,R34108,"Optic neuritis: Prognosis for multiple sclerosis from MRI, CSF, and HLA findings",S118315,R34109,has research problem,R34101,Multiple sclerosis,"We investigated the paraclinical profile of monosymptomatic optic neuritis (ON) and its prognosis for multiple sclerosis (MS). The correct identification of patients with very early MS carrying a high risk for conversion to clinically definite MS is important when new treatments are emerging that hopefully will prevent or at least delay future MS. We conducted a prospective single observer and population-based study of 147 consecutive patients (118 women, 80%) with acute monosymptomatic ON referred from a catchment area of 1.6 million inhabitants between January 1, 1990 and December 31, 1995. Of 116 patients examined with brain MRI, 64 (55%) had three or more high signal lesions, 11 (9%) had one to two high signal lesions, and 41 (35%) had a normal brain MRI. Among 143 patients examined, oligoclonal IgG (OB) bands in CSF only were demonstrated in 103 patients (72%). Of 146 patients analyzed, 68 (47%) carried the DR15,DQ6,Dw2 haplotype. During the study period, 53 patients (36%) developed clinically definite MS. The presence of three or more MS-like MRI lesions as well as the presence of OB were strongly associated with the development of MS (p < 0.001). Also, Dw2 phenotype was related to the development of MS (p = 0.046). MRI and CSF studies in patients with ON give clinically important information regarding the risk for future MS.",TRUE,research problem
R11,Science,R34111,A long-term prospective study of optic neuritis: evaluation of risk factors,S118335,R34112,has research problem,R34101,Multiple sclerosis,"Eighty‐six patients with monosymptomatic optic neuritis of unknown cause were followed prospectively for a median period of 12.9 years. At onset, cerebrospinal fluid (CSF) pleocytosis was present in 46 patients (53%) but oligoclonal immunoglobulin in only 40 (47%) of the patients. The human leukocyte antigen (HLA)‐DR2 was present in 45 (52%). Clinically definite multiple sclerosis (MS) was established in 33 patients. Actuarial analysis showed that the cumulative probability of developing MS within 15 years was 45%. Three risk factors were identified: low age and abnormal CSF at onset, and early recurrence of optic neuritis. Female gender, onset in the winter season, and the presence of HLA‐DR2 antigen increased the risk for MS, but not significantly. Magnetic resonance imaging detected bilateral discrete white matter lesions, similar to those in MS, in 11 of 25 patients, 7 to 18 years after the isolated attack of optic neuritis. Nine were among the 13 with abnormal CSF and only 2 belonged to the group of 12 with normal CSF (p = 0.01). Normal CSF at the onset of optic neuritis conferred better prognosis but did not preclude the development of MS.",TRUE,research problem
R11,Science,R34113,Clinically isolated syndromes: a new oligoclonal band test accurately predicts conversion to MS,S118355,R34114,has research problem,R34101,Multiple sclerosis,"Background: Patients with a clinically isolated demyelinating syndrome (CIS) are at risk of developing a second attack, thus converting into clinically definite multiple sclerosis (CDMS). Therefore, an accurate prognostic marker for that conversion might allow early treatment. Brain MRI and oligoclonal IgG band (OCGB) detection are the most frequent paraclinical tests used in MS diagnosis. A new OCGB test has shown high sensitivity and specificity in differential diagnosis of MS. Objective: To evaluate the accuracy of the new OCGB method and of current MRI criteria (MRI-C) to predict conversion of CIS to CDMS. Methods: Fifty-two patients with CIS were studied with OCGB detection and brain MRI, and followed up for 6 years. The sensitivity and specificity of both methods to predict conversion to CDMS were analyzed. Results: OCGB detection showed a sensitivity of 91.4% and specificity of 94.1%. MRI-C had a sensitivity of 74.23% and specificity of 88.2%. The presence of either OCGB or MRI-C studied simultaneously showed a sensitivity of 97.1% and specificity of 88.2%. Conclusions: The presence of oligoclonal IgG bands is highly specific and sensitive for early prediction of conversion to multiple sclerosis. MRI criteria have a high specificity but less sensitivity. The simultaneous use of both tests shows high sensitivity and specificity in predicting clinically isolated demyelinating syndrome conversion to clinically definite multiple sclerosis.",TRUE,research problem
R11,Science,R34115,Predicting multiple sclerosis at optic neuritis onset,S118375,R34116,has research problem,R34101,Multiple sclerosis,"Using multivariate analyses, individual risk of clinically definite multiple sclerosis (CDMS) after monosymptomatic optic neuritis (MON) was quantified in a prospective study with clinical MON onset during 1990-95 in Stockholm, Sweden. During a mean follow-up time of 3.8 years, the presence of MS-like brain magnetic resonance imaging (MRI) lesions and oligoclonal immunoglobulin (IgG) bands in cerebrospinal fluid (CSF) were strong prognostic markers of CDMS, with relative hazard ratios of 4.68 {95% confidence interval (CI) 2.21-9.91} and 5.39 (95% CI 1.56-18.61), respectively. Age and season of clinical onset were also significant predictors, with relative hazard ratios of 1.76 (95% CI 1.02-3.04) and 2.21 (95% CI 1.13-3.98), respectively. Based on the above two strong predictors, individual probability of CDMS development after MON was calculated in a three-quarter sample drawn from a cohort, with completion of follow-up at three years. The highest probability, 0.66 (95% CI 0.48-0.80), was obtained for individuals presenting with three or more brain MRI lesions and oligoclonal bands in the CSF, and the lowest, 0.09 (95% CI 0.02-0.32), for those not presenting with these traits. Medium values, 0.29 (95% CI 0.13-0.53) and 0.32 (95% CI 0.07-0.73), were obtained for individuals discordant for the presence of brain MRI lesions and oligoclonal bands in the CSF. These predictions were validated in an external one-quarter sample.",TRUE,research problem
R11,Science,R34117,"Correlation of clinical, magnetic resonance imaging, and cerebrospinal fluid findings in optic neuritis",S118396,R34118,has research problem,R34101,Multiple sclerosis,"We found 42 of 74 patients (57%) with isolated monosymptomatic optic neuritis to have 1 to 20 brain lesions, by magnetic resonance imaging (MRI). All of the brain lesions were clinically silent and had characteristics consistent with multiple sclerosis (MS). None of the patients had ever experienced neurologic symptoms prior to the episode of optic neuritis. During 5.6 years of follow‐up, 21 patients (28%) developed definite MS on clinical grounds. Sixteen of the 21 converting patients (76%) had abnormal MRIs; the other 5 (24%) had MRIs that were normal initially (when they had optic neuritis only) and when repeated after they had developed clinical MS in 4 of the 5. Of the 53 patients who have not developed clinically definite MS, 26 (49%) have abnormal MRIs and 27 (51%) have normal MRIs. The finding of an abnormal MRI at the time of optic neuritis was significantly related to the subsequent development of MS on clinical grounds, but interpretation of the strength of that relationship must be tempered by the fact that some of the converting patients had normal MRIs and approximately half of the patients who did not develop clinical MS had abnormal MRIs. We found that abnormal IgG levels in the cerebrospinal fluid correlated more strongly than abnormal MRIs with the subsequent development of clinically definite MS.",TRUE,research problem
R11,Science,R34122,Uncomplicated retrobulbar neuritis and the development of multiple sclerosis,S118431,R34123,has research problem,R34101,Multiple sclerosis,"Abstract A retrospective study of 30 patients hospitalized with a diagnosis of uncomplicated retrobulbar neuritis was carried out. The follow‐up period was 2–11 years; 57% developed multiple sclerosis. When the initial examination revealed oligoclonal bands in the cerebrospinal fluid, the risk of developing multiple sclerosis increased to 79%. With normal cerebrospinal fluid the risk decreased to only 10%. In the majority of cases, the diagnosis of MS was made during the first 3 years after retrobulbar neuritis.",TRUE,research problem
R11,Science,R34124,Can CSF predict the course of optic neuritis?,S118448,R34125,has research problem,R34101,Multiple sclerosis,"To discuss the implications of CSF abnormalities for the course of acute monosymptomatic optic neuritis (AMON), various CSF markers were analysed in patients being randomly selected from a population-based cohort. Paired serum and CSF were obtained within a few weeks from onset of AMON. CSF-restricted oligoclonal IgG bands, free kappa and free lambda chain bands were observed in 17, 15, and nine of 27 examined patients, respectively. Sixteen patients showed a polyspecific intrathecal synthesis of oligoclonal IgG antibodies against one or more viruses. At 1 year follow-up five patients had developed clinically definite multiple sclerosis (CDMS); all had CSF oligoclonal IgG bands and virus-specific oligoclonal IgG antibodies at onset. Due to the relatively small number studied at the short-term follow-up, no firm conclusion on the prognostic value of these analyses could be reached. CSF Myelin Basic Protein-like material was increased in only two of 29 patients with AMON, but may have potential value in reflecting disease activity, as the highest values were obtained among patients with CSF sampled soon after the worst visual acuity was reached, and among patients with severe visual impairment. In most previous studies of patients with AMON qualitative and quantitative analyses of CSF IgG had a predictive value for development of CDMS, but the results are conflicting.",TRUE,research problem
R11,Science,R32197,Chemical Composition of the Essential Oil of Artemisia herba-alba Asso Grown in Algeria,S109507,R32198,has research problem,R26370,Oil,"Abstract The essential oil obtained by hydrodistillation from the aerial parts of Artemisia herba-alba Asso growing wild in M'sila-Algeria, was investigated using both capillary GC and GC/MS techniques. The oil yield was 1.02% based on dry weight. Sixty-eight components amounting to 94.7% of the oil were identified, 33 of them being reported for the first time in Algerian A. herba-alba oil and 21 of these components have not been previously reported in A. herba-alba oils. The oil contained camphor (19.4%), trans-pinocarveol (16.9%), chrysanthenone (15.8%) and β-thujone (15%) as major components. Monoterpenoids are the main components (86.1%), and the irregular monoterpenes fraction represented a 3.1% yield.",TRUE,research problem
R11,Science,R32210,Extraction by Steam Distillation of Artemisia herba-alba Essential Oil from Algeria: Kinetic Study and Optimization of the Operating Conditions,S109545,R32211,has research problem,R26370,Oil,"Abstract In order to study the extraction process of essential oil from Artemisia herba-alba, kinetic studies as well as an optimization of the operating conditions were achieved. The optimization was carried out by a parametric study and experiments planning method. Three operational parameters were chosen: Artemisia mass to be treated, steam flow rate and extraction time. The optimal extraction conditions obtained by the parametric study correspond to: a mass of 30 g, a steam flow rate of 1.65 mL.min−1 and the extraction time of 60 min. The results reveal that the combined effects of two parameters, the steam water flow rate and the extraction time, are the most significant. The yield is also affected by the interaction of the three parameters. The essential oil obtained with optimal conditions was analyzed by GC-MS and a kinetic study was realised.",TRUE,research problem
R11,Science,R32215,Chemical composition of Algerian Artemisia herba-alba essential oils isolated by microwave and hydrodistillation,S109570,R32219,has research problem,R26370,Oil,"Abstract Isolation of the essential oil from Artemisia herba-alba collected in the North Sahara desert has been conducted by hydrodistillation (HD) and a microwave distillation process (MD). The chemical composition of the two oils was investigated by GC and GC/MS. In total, 94 constituents were identified. The main components were camphor (49.3 and 48.1% in HD and MD oils, respectively), 1,8-cineole (13.4–12.4%), borneol (7.3–7.1%), pinocarvone (5.6–5.5%), camphene (4.9–4.5%) and chrysanthenone (3.2–3.3%). In comparison with HD, MD allows one to obtain an oil in a very short time, with similar yields, comparable qualities and a substantial savings of energy.",TRUE,research problem
R11,Science,R32258,Chemovariation of Artemisia herba alba Asso. Aromatic Plants of the Holy Land and the Sinai. Part XVI.,S109656,R32259,has research problem,R26370,Oil,"Abstract In continuation of our investigation of aromatic flora of the Holy Land, the systematic study of Artemisia herba alba essential oils has been conducted. The detailed composition of five relatively rare chemotypes of A. herba alba obtained through GC and GC/MS analysis are presented. To ensure the integrity of each chemotype the volatiles were extracted from individual plant specimens and bulked only if the GC profiles were substantially similar. The major constituents were: Type 1: 1,8 cineole (10.8%), α-thujone (40.9%) and β-thujone (34.9%); Type 2: 1,8 cineole (26.0%) and camphor (42.1%); Type 3; 1,8 cineole (26.6%) and β-thujone (44.0%); Type 4: cis-chrysanthenyl acetate (8.9%) and cis-chrysanthenol (30.0%); Type 5: cis-chrysanthenol (6.8%) and cis-chrysanthenyl acetate (69.0%). This study showed that the population of A. herba alba in Israel consists of a much greater number of chemovarieties than was previously believed. Though chemovarieties are unevenly distributed in different geographic areas, no clear relation between the plant type and environmental conditions could be established.",TRUE,research problem
R11,Science,R32268,Composition of the Essential Oil from Artemisia herba-alba Grown in Jordan,S109678,R32269,has research problem,R26370,Oil,"Abstract The composition of the essential oil hydrodistilled from the aerial parts of Artemisia herba-alba Asso. growing in Jordan was determined by GC and GC/MS. The oil yield was 1.3% (v/w) from dried tops (leaves, stems and flowers). Forty components corresponding to 95.3% of the oil were identified, of which oxygenated monoterpenes were the main oil fraction (39.3% of the oil), with α- and β-thujones as the principal components (24.7%). The other major identified components were: santolina alcohol (13.0%), artemisia ketone (12.4%), trans-sabinyl acetate (5.4%), germacrene D (4.6%), α-eudesmol (4.2%) and caryophyllene acetate (5.7%). The high oil yield and the substantial levels of potentially active components, in particular thujones and santolina alcohol, in the oil of this Jordanian species make the plant and the oil thereof promising candidates as natural herbal constituents of antimicrobial drug combinations.",TRUE,research problem
R11,Science,R32277,APPLICATION OF ESSENTIAL OIL OF ARTEMISIA HERBA ALBA AS GREEN CORROSION INHIBITOR FOR STEEL IN 0.5 M H2SO4,S109717,R32278,has research problem,R26370,Oil,Essential oil from Artemisia herba alba (Art) was hydrodistilled and tested as a corrosion inhibitor of steel in 0.5 M H2SO4 using weight loss measurements and electrochemical polarization methods. Results gathered show that this natural oil reduced the corrosion rate by the cathodic action. Its inhibition efficiency attains the maximum (74%) at 1 g/L. The inhibition efficiency of Art oil increases with the rise of temperature. The adsorption isotherm of the natural product on the steel has been determined. A. herba alba essential oil was obtained by hydrodistillation and its chemical composition was investigated by capillary GC and GC/MS. The major components were chrysanthenone (30.6%) and camphor (24.4%).,TRUE,research problem
R11,Science,R32286,Chemical variability of Artemisia herba-alba Asso essential oils from East Morocco,S109738,R32287,has research problem,R26370,Oil,"Chemical compositions of 16 Artemisia herba-alba oil samples harvested in eight East Moroccan locations were investigated by GC and GC/MS. Chemical variability of the A. herba-alba oils is also discussed using statistical analysis. Detailed analysis of the essential oils led to the identification of 52 components amounting to 80.5–98.6 % of the total oil. The investigated chemical compositions showed significant qualitative and quantitative differences. According to their major components (camphor, chrysanthenone, and α- and β-thujone), three main groups of essential oils were found. This study also found regional specificity of the major components.",TRUE,research problem
R11,Science,R32293,Chemical composition and antiproliferative activity of essential oil from aerial parts of a medicinal herb Artemisia herba-alba,S109763,R32294,has research problem,R26370,Oil,"Artemisia herba-alba Asso., Asteraceae, is widely used in Morrocan folk medicine for the treatment of different health disorders. However, no scientific or medical studies were carried out to assess the cytotoxicity of A. herba-alba essential oil against cancer cell lines. In this study, eighteen volatile compounds were identified by GC-MS analysis of the essential oil obtained from the plant's aerial parts. The main volatile constituent in A. herba-alba was found to be a monoterpene, Verbenol, contributing to about 22% of the total volatile components. The essential oil showed significant antiproliferative activity against the acute lymphoblastic leukaemia (CEM) cell line, with 3 µg/mL as IC50 value. The anticancer bioactivity of Moroccan A. herba-alba essential oil is described here for the first time.",TRUE,research problem
R11,Science,R32308,Chemical composition of the essential oil of Artemisia herba-alba Asso ssp. valentine (Lam.) Marcl,S109802,R32309,has research problem,R26370,Oil,"Abstract The composition of the oil, steam-distilled from aerial parts of Artemisia herba-alba Asso ssp. valentina (Lam.) Marcl. (Asteraceae) collected from the south of Spain, has been analyzed by GC/MS. Among the 65 constituents investigated (representing 93.6 % of the oil composition), 61 were identified (90.3% of the oil composition). The major constituents detected were the sesquiterpene davanone (18.1%) and monoterpenes p-cymene (13.5%), 1,8-cineole (10.2%), chrysanthenone (6.7%), cis-chrysanthenyl acetate (5.6%), γ-terpinene (5.5%), myrcene (5.1%) and camphor (4.0%). The oil was dominated by monoterpenes (ca. 66% of the oil), p-menthane and pinane being the most representative skeleta of the group. The oil sample studied did not contain thujones, unlike most A. herba-alba oils described in the literature.",TRUE,research problem
R11,Science,R32330,"Chemical composition, mutagenic and antimutagenic activities of essential oils from (Tunisian) Artemisia campestris and Artemisia herba-alba",S109861,R32331,has research problem,R26370,Oil,"Abstract The essential oil composition from the aerial parts of Artemisia campestris var. glutinosa Gay ex Bess and Artemisia herba-alba Asso (Asteraceae) of Tunisian origin has been studied by GC and GC/MS. The main constituents of the oil from A. campestris collected in Benguerdane (South of Tunisia) were found to be β-pinene (41.0%), p-cymene (9.9%), α-terpinene (7.9%), limonene (6.5%), myrcene (4.1%), β-phellandrene (3.4%) and α-pinene (3.2%). Whereas the oil from A. herba-alba collected in Tataouine (South of Tunisia) showed pinocarvone (38.3%), α-copaene (12.18%), limonene (11.0%), isoamyl 2-methylbutyrate (19.5%) as major compounds. The mutagenic and antimutagenic activities of the two oils were investigated by the Salmonella typhimurium/microsome assay, with and without addition of an extrinsic metabolic activation system. The oils showed no mutagenicity when tested with Salmonella typhimurium strains TA98 and TA97. On the other hand, we showed that each oil had antimutagenic activity against the carcinogen benzo[a]pyrene (B[a]P) when tested with TA97 and TA98 assay systems.",TRUE,research problem
R11,Science,R32350,Essential Oil Composition of Artemisia herba-alba from Southern Tunisia,S109878,R32351,has research problem,R26370,Oil,"The composition of the essential oil hydrodistilled from the aerial parts of 18 individual Artemisia herba-alba Asso. plants collected in southern Tunisia was determined by GC and GC-MS analysis. The oil yield varied between 0.68% v/w and 1.93% v/w. One hundred components were identified, 21 of which are reported for the first time in Artemisia herba-alba oil. The oil contained 10 components with percentages higher than 10%. The main components were cineole, thujones, chrysanthenone, camphor, borneol, chrysanthenyl acetate, sabinyl acetate, davana ethers and davanone. Twelve samples had monoterpenes as major components, three had sesquiterpenes as major components and the last three samples had approximately the same percentage of monoterpenes and sesquiterpenes. The chemical compositions revealed that ten samples had compositions similar to those of other Artemisia herba-alba essential oils analyzed in other countries. The remaining eight samples had an original chemical composition.",TRUE,research problem
R11,Science,R32361,Influence of drying time and process on Artemisia herba-alba Asso essential oil yield and composition,S109911,R32362,has research problem,R26370,Oil,"Abstract The essential oil content of Artemisia herba-alba Asso decreased along the drying period from 2.5 % to 1.8 %. Conversely, the composition of the essential oil was not qualitatively affected by the drying process. The same principle components were found in all essential analyzed such as α-thujone (13.0 – 22.7 %), β-thujone (18.0 – 25.0 %), camphor (8.6 - 13 %), 1,8-cineole (7.1 – 9.4 %), chrysanthenone (6.7 – 10.9 %), terpinen-4-ol (3.4 – 4.7 %). Quantitatively, during the air-drying process, the content of some components decreased slightly such as α-thujone (from 22.7 to 15.9 %) and 1,8-cineole (from 9.4 to 7.1 %), while the amount of other compounds increased such as chrysanthenone (from 6.7 to 10.9 %), borneol (from 0.8 to 1.5 %), germacrene-D (from 1.0 to 2.4 %) and spathulenol (from 0.8 to 1.5 %). The chemical composition of the oil was more affected by oven-drying the plant material at 35°C. α-Thujone and β-thujone decreased to 13.0 %and 18.0 %respectively, while the percentage of camphor, germacrene-D and spathulenol increased to 13.0 %, 5.5 %and 3.7 %, respectively.",TRUE,research problem
R11,Science,R32369,The essential oil from Artemisia herba-alba Asso cultivated in Arid Land (South Tunisia),S109933,R32370,has research problem,R26370,Oil,"Abstract Seedlings of Artemisia herba-alba Asso collected from Kirchaou area were transplanted in an experimental garden near the Institut des Régions Arides of Médenine (Tunisia). During three years, the aerial parts were harvested (three levels of cutting, 25%, 50% and 75% of the plant), at full blossom and during the vegetative stage. The essential oil was isolated by hydrodistillation and its chemical composition was determined by GC(RI) and 13C-NMR. With respect to the quantity of vegetable material and the yield of hydrodistillation, it appears that the best results were obtained for plants cut at 50% of their height and during the full blossom. The chemical composition of the essential oil was dominated by β-thujone, α-thujone, 1,8-cineole, camphor and trans-sabinyl acetate, irrespective of the level of cutting and the period of harvest. It remains similar to that of plants growing wild in the same area.",TRUE,research problem
R11,Science,R32377,IMPACT OF SEASON AND HARVEST FREQUENCY ON BIOMASS AND ESSENTIAL OIL YIELDS OF ARTEMISIA HERBA-ALBA CULTIVATED IN SOUTHERN TUNISIA,S109956,R32378,has research problem,R26370,Oil,"SUMMARY Artemisia herba-alba Asso has been successfully cultivated in the Tunisian arid zone. However, information regarding the effects of the harvest frequency on its biomass and essential oil yields is very limited. In this study, the effects of three different frequencies of harvesting the upper half of the A. herba-alba plant tuft were compared. The harvest treatments were: harvesting the same individual plants at the flowering stage annually; harvesting the same individual plants at the full vegetative growth stage annually and harvesting the same individual plants every six months. Statistical analyses indicated that all properties studied were affected by the harvest frequency. Essential oil yield depended on both the dry biomass and its essential oil content, and was significantly higher from plants harvested annually at the flowering stage than the other two treatments. The composition of the β- and α-thujone-rich oils did not vary throughout the experimental period.",TRUE,research problem
R11,Science,R32385,Composition and intraspecific chemical vari- ability of the essential oil from Artemisia herba alba growing wild in a Tunisian arid zone,S109980,R32386,has research problem,R26370,Oil,"The intraspecific chemical variability of essential oils (50 samples) isolated from the aerial parts of Artemisia herba‐alba Asso growing wild in the arid zone of Southeastern Tunisia was investigated. Analysis by GC (RI) and GC/MS allowed the identification of 54 essential oil components. The main compounds were β‐thujone and α‐thujone, followed by 1,8‐cineole, camphor, chrysanthenone, trans‐sabinyl acetate, trans‐pinocarveol, and borneol. Chemometric analysis (k‐means clustering and PCA) led to the partitioning into three groups. The composition of two thirds of the samples was dominated by α‐thujone or β‐thujone. Therefore, it could be expected that wild plants of A. herba‐alba randomly harvested in the area of Kirchaou and transplanted by local farmers for the cultivation in arid zones of Southern Tunisia produce an essential oil belonging to the α‐thujone/β‐thujone chemotype and containing also 1,8‐cineole, camphor, and trans‐sabinyl acetate at appreciable amounts.",TRUE,research problem
R11,Science,R32407,Chemical Variability ofArtemisia herba-albaAsso Growing Wild in Semi-arid and Arid Land (Tunisia),S110025,R32408,has research problem,R26370,Oil,"Abstract Twenty-six oil samples were isolated by hydrodistillation from aerial parts of Artemisia herba-alba Asso growing wild in Tunisia (semi-arid land) and their chemical composition was determined by GC(RI), GC/MS and 13C-NMR. Various compositions were observed, dominated either by a single component (α-thujone, camphor, chrysanthenone or trans-sabinyl acetate) or characterized by the occurrence, at appreciable contents, of two or more of these compounds. These results confirmed the tremendous chemical variability of A. herba-alba.",TRUE,research problem
R11,Science,R32413,Chemical composition and biological activities of a new essential oil chemotype of Tunisian Artemisia herba alba Asso,S110056,R32414,has research problem,R26370,Oil,"The aim of the present study was to investigate the chemical composition, antioxidant, angiotensin I-converting enzyme (ACE) inhibitory, antibacterial and antifungal activities of the essential oil of Artemisia herba alba Asso (Aha), a traditional medicinal plant widely growing in Tunisia. The essential oil from the air dried leaves and flowers of Aha were extracted by hydrodistillation and analyzed by GC and GC/MS. More than fifty compounds, out of which 48 were identified. The main chemical class of the oil was represented by oxygenated monoterpenes (50.53%). These were represented by 21 derivatives, among which the cis-chrysantenyl acetate (10.60%), the sabinyl acetate (9.13%) and the α-thujone (8.73%) were the principal compounds. Oxygenated sesquiterpenes, particularly arbusculones, were identified in the essential oil at relatively high rates. The Aha essential oil was found to have an interesting antioxidant activity as evaluated by the 2,2-diphenyl-1-picrylhydrazyl and the β-carotene bleaching methods. The Aha essential oil also exhibited an inhibitory activity towards the ACE. The antimicrobial activities of Aha essential oil were evaluated against six bacterial strains and three fungal strains by the agar diffusion method and by determining the inhibition zone. The inhibition zones were in the range of 8-51 mm. The essential oil exhibited a strong growth inhibitory activity on all the studied fungi. Our findings demonstrated that Aha growing wild in South-Western Tunisia seems to be a new chemotype and its essential oil might be a natural potential source for food preservation and for further investigation by developing new bioactive substances.",TRUE,research problem
R11,Science,R32422,Chemical constituents and antioxidant activity of the essential oil from aerial parts of Artemisia herba-alba grown in Tunisian semi-arid region,S110076,R32423,has research problem,R26370,Oil,"Essential oils and their components are becoming increasingly popular as naturally occurring antioxidant agents. In this work, the composition of essential oil in Artemisia herba-alba from southwest Tunisia, obtained by hydrodistillation was determined by GC/MS. Eighteen compounds were identified with the main constituents namely, α-thujone (24.88%), germacrene D (14.48%), camphor (10.81%), 1,8-cineole (8.91%) and β-thujone (8.32%). The oil was screened for its antioxidant activity with 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, β-carotene bleaching and reducing power assays. The essential oil of A. herba-alba exhibited a good antioxidant activity with all assays with dose dependent manner and can be attributed to its presence in the oil. Key words: Artemisia herba alba, essential oil, chemical composition, antioxidant activity.",TRUE,research problem
R11,Science,R32427,"Unemployment Persistency, Over-education and the Employment Chances of the Less Educated",S110102,R32428,has research problem,R32426,Over-education,"The research question addressed in this article concerns whether unemployment persistency can be regarded as a phenomenon that increases employment difficulties for the less educated and, if so, whether their employment chances are reduced by an overly rapid reduction in the number of jobs with low educational requirements. The empirical case is Sweden and the data covers the period 1976-2000. The empirical analyses point towards a negative response to both questions. First, it is shown that jobs with low educational requirements have declined but still constitute a substantial share of all jobs. Secondly, educational attainment has changed at a faster rate than the job structure with increasing over-education in jobs with low educational requirements as a result. This, together with changed selection patterns into the low education group, are the main reasons for the poor employment chances of the less educated in periods with low general demand for labour.",TRUE,research problem
R11,Science,R32444,«Educational Mismatch and Labour Mobility of People with Disabilities: The Spanish Case,S110183,R32445,has research problem,R32426,Over-education,"In this paper we analyze the job-matching quality of people with disabilities. We do not find evidence of a greater importance of over-education in this group in comparison to the rest of the population. We find that people with disabilities have a lower probability of being over-educated for a period of 3 or more years, a higher probability of leaving mismatch towards inactivity or marginal employment, a lower probability of leaving mismatch towards a better match, and a higher probability of employment mobility towards inactivity or marginal employment. The empirical analysis is based on Spanish data from the European Community Household Panel from 1995 to 2000.",TRUE,research problem
R11,Science,R32455,Measuring Over-education,S110236,R32456,has research problem,R32426,Over-education,"Previous work on over-education has assumed homogeneity of workers and jobs. Relaxing these assumptions, we find that over-educated workers have lower education credentials than matched graduates. Among the over-educated graduates we distinguish between the apparently over-educated workers, who have similar unobserved skills as matched graduates, and the genuinely over-educated workers, who have a much lower skill endowment. Over-education is associated with a pay penalty of 5%-11% for apparently over-educated workers compared with matched graduates and of 22%-26% for the genuinely over-educated. Over-education originates from the lack of skills of graduates. This should be taken into consideration in the current debate on the future of higher education in the UK. Copyright The London School of Economics and Political Science 2003.",TRUE,research problem
R11,Science,R27394,Application of Linear Matrix Inequalities for Load Frequency Control With Communication Delays,S88566,R27395,has research problem,R27381,Power systems,"Load frequency control has been used for decades in power systems. Traditionally, this has been a centralized control by area with communication over a dedicated and closed network. New regulatory guidelines allow for competitive markets to supply this load frequency control. In order to allow an effective market operation, an open communication infrastructure is needed to support an increasing complex system of controls. While such a system has great advantage in terms of cost and reliability, the possibility of communication signal delays and other problems must be carefully analyzed. This paper presents a load frequency control method based on linear matrix inequalities. The primary aim is to find a robust controller that can ensure good performance despite indeterminate delays and other problems in the communication network.",TRUE,research problem
R11,Science,R32160,Application of genetic algorithms with dominant genes in a distributed scheduling problem in flexible manufacturing systems,S109338,R32161,has research problem,R32066,Scheduling problems,"Multi-factory production networks have increased in recent years. With the factories located in different geographic areas, companies can benefit from various advantages, such as closeness to their customers, and can respond faster to market changes. Products (jobs) in the network can usually be produced in more than one factory. However, each factory has its operations efficiency, capacity, and utilization level. Allocation of jobs inappropriately in a factory will produce high cost, long lead time, overloading or idling resources, etc. This makes distributed scheduling more complicated than classical production scheduling problems because it has to determine how to allocate the jobs into suitable factories, and simultaneously determine the production scheduling in each factory as well. The problem is even more complicated when alternative production routing is allowed in the factories. This paper proposed a genetic algorithm with dominant genes to deal with distributed scheduling problems, especially in a flexible manufacturing system (FMS) environment. The idea of dominant genes is to identify and record the critical genes in the chromosome and to enhance the performance of genetic search. To testify and benchmark the optimization reliability, the proposed algorithm has been compared with other approaches on several distributed scheduling problems. These comparisons demonstrate the importance of distributed scheduling and indicate the optimization reliability of the proposed algorithm.",TRUE,research problem
R11,Science,R32176,An introduction of dominant genes in genetic algorithm for FMS,S109431,R32177,has research problem,R32066,Scheduling problems,"This paper proposes a new idea, namely genetic algorithms with dominant genes (GADG) in order to deal with FMS scheduling problems with alternative production routing. In the traditional genetic algorithm (GA) approach, crossover and mutation rates should be pre-defined. However, different rates applied in different problems will directly influence the performance of genetic search. Determination of optimal rates in every run is time-consuming and not practical in reality due to the infinite number of possible combinations. In addition, this crossover rate governs the number of genes to be selected to undergo crossover, and this selection process is totally arbitrary. The selected genes may not represent the potential critical structure of the chromosome. To tackle this problem, GADG is proposed. This approach does not require a defined crossover rate, and the proposed similarity operator eliminates the determination of the mutation rate. This idea helps reduce the computational time remarkably and improve the performance of genetic search. The proposed GADG will identify and record the best genes and structure of each chromosome. A new crossover mechanism is designed to ensure the best genes and structures to undergo crossover. The performance of the proposed GADG is testified by comparing it with other existing methodologies, and the results show that it outperforms other approaches.",TRUE,research problem
R11,Science,R28652,On the value of user preferences in search-based software engineering: A case study in software product lines,S94929,R28783,has research problem,R28618,Search-Based Software Engineering,"Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g. NSGA-II, SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best since it makes most use of user preference knowledge. Hence it does better on the standard measures (hypervolume and spread) but it also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.",TRUE,research problem
R11,Science,R28817,Search-based Genetic Optimization for Deployment and Reconfiguration of Software in the Cloud,S95085,R28818,has research problem,R28618,Search-Based Software Engineering,"Migrating existing enterprise software to cloud platforms involves the comparison of competing cloud deployment options (CDOs). A CDO comprises a combination of a specific cloud environment, deployment architecture, and runtime reconfiguration rules for dynamic resource scaling. Our simulator CDOSim can evaluate CDOs, e.g., regarding response times and costs. However, the design space to be searched for well-suited solutions is extremely huge. In this paper, we approach this optimization problem with the novel genetic algorithm CDOXplorer. It uses techniques of the search-based software engineering field and CDOSim to assess the fitness of CDOs. An experimental evaluation that employs, among others, the cloud environments Amazon EC2 and Microsoft Windows Azure, shows that CDOXplorer can find solutions that surpass those of other state-of-the-art techniques by up to 60%. Our experiment code and data and an implementation of CDOXplorer are available as open source software.",TRUE,research problem
R11,Science,R50025,Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network,S153324,R50027,has research problem,R50028,"simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms)","Diabetes Mellitus (DM) is a chronic, progressive and life-threatening disease. The ocular manifestations of DM, Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME), are the leading causes of blindness in the adult population throughout the world. Early diagnosis of DR and DM through screening tests and successive treatments can reduce the threat to visual acuity. In this context, we propose an encoder decoder based semantic segmentation network SOP-Net (Segmentation of Ocular Pathologies Using Deep Convolutional Neural Network) for simultaneous delineation of retinal pathologies (hard exudates, soft exudates, hemorrhages, microaneurysms). The proposed semantic segmentation framework is capable of providing segmentation results at pixel-level with good localization of object boundaries. SOP-Net has been trained and tested on IDRiD dataset which is publicly available with pixel level annotations of retinal pathologies. The network achieved average accuracies of 98.98%, 90.46%, 96.79%, and 96.70% for segmentation of hard exudates, soft exudates, hemorrhages, and microaneurysms. The proposed methodology has the capability to be used in developing a diagnostic system for organizing large scale ophthalmic screening programs.",TRUE,research problem
R11,Science,R33144,"Distinguishing the critical success factors between e-commerce, enterprise resource planning, and supply chain management",S115097,R33145,has research problem,R33101,Supply chain management,"The rapid deployment of e-business systems has surprised even the most futuristic management thinkers. Unfortunately very little empirical research has documented the many variations of e-business solutions as major software vendors release complex IT products into the marketplace. The literature holds simultaneous evidence of major success and major failure as implementations evolve. It is not clear from the literature just what the difference is between e-commerce and its predecessor concepts of supply chain management and enterprise resource planning. In this paper we use existing case studies, industrial interviews, and survey data to describe how these systems are similar and how they differ. We develop a conceptual model to show how these systems are related and how they serve significantly different strategic objectives. Finally, we suggest the critical success factors that are the key issues to resolve in order to successfully implement these systems in practice.",TRUE,research problem
R11,Science,R33198,Understanding supply chain management: critical research and a theoretical framework,S115196,R33199,has research problem,R33101,Supply chain management,"Increasing global cooperation, vertical disintegration and a focus on core activities have led to the notion that firms are links in a networked supply chain. This strategic viewpoint has created the challenge of coordinating effectively the entire supply chain, from upstream to downstream activities. While supply chains have existed ever since businesses have been organized to bring products and services to customers, the notion of their competitive advantage, and consequently supply chain management (SCM), is a relatively recent thinking in management literature. Although research interests in and the importance of SCM are growing, scholarly materials remain scattered and disjointed, and no research has been directed towards a systematic identification of the core initiatives and constructs involved in SCM. Thus, the purpose of this study is to develop a research framework that improves understanding of SCM and stimulates and facilitates researchers to undertake both theoretical and empirical investigation on the critical constructs of SCM, and the exploration of their impacts on supply chain performance. To this end, we analyse over 400 articles and synthesize the large, fragmented body of work dispersed across many disciplines such as purchasing and supply, logistics and transportation, marketing, organizational dynamics, information management, strategic management, and operations management literature.",TRUE,research problem
R11,Science,R33305,Supply chain management in SMEs: development of constructs and propositions,S115384,R33306,has research problem,R33101,Supply chain management,"Purpose – The purpose of this paper is to review the literature on supply chain management (SCM) practices in small and medium scale enterprises (SMEs) and outlines the key insights.Design/methodology/approach – The paper describes a literature‐based research that has sought understand the issues of SCM for SMEs. The methodology is based on critical review of 77 research papers from high‐quality, international refereed journals. Mainly, issues are explored under three categories – supply chain integration, strategy and planning and implementation. This has supported the development of key constructs and propositions.Findings – The research outcomes are three fold. Firstly, paper summarizes the reported literature and classifies it based on their nature of work and contributions. Second, paper demonstrates the overall approach towards the development of constructs, research questions, and investigative questions leading to key proposition for the further research. Lastly, paper outlines the key findings an...",TRUE,research problem
R11,Science,R33328,Critical success factors in the context of humanitarian aid supply chains,S115434,R33329,has research problem,R33101,Supply chain management,"– Critical success factors (CSFs) have been widely used in the context of commercial supply chains. However, in the context of humanitarian aid (HA) this is a poorly addressed area and this paper therefore aims to set out the key areas for research., – This paper is based on a conceptual discussion of CSFs as applied to the HA sector. A detailed literature review is undertaken to identify CSFs in a commercial context and to consider their applicability to the HA sector., – CSFs have not previously been identified for the HA sector, an issue addressed in this paper., – The main constraint on this paper is that CSFs have not been previously considered in the literature as applied to HA. The relevance of CSFs will therefore need to be tested in the HA environment and qualitative research is needed to inform further work., – This paper informs the HA community of key areas of activity which have not been fully addressed and offers., – This paper contributes to the understanding of supply chain management in an HA context.",TRUE,research problem
R11,Science,R33348,Critical success factors for B2B e‐commerce use within the UK NHS pharmaceutical supply chain,S115467,R33349,has research problem,R33101,Supply chain management,"Purpose – The purpose of this paper is to determine those factors perceived by users to influence the successful on‐going use of e‐commerce systems in business‐to‐business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach – Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e‐commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings – The paper yields five composite factors that are perceived by users to influence successful e‐commerce use. “System quality,” “information quality,” “management and use,” “world wide web – assurance and empathy,” and “trust” are proposed as potentia...",TRUE,research problem
R11,Science,R33368,Managing Supply Chain at High Technology Companies,S115500,R33369,has research problem,R33101,Supply chain management,"There is an expectation that high technology companies use unique and leading edge technology to gain competitive advantage by investing heavily in supply chain management. This research uses multiple case study methodology to determine factors affecting the supply chain management at high technology companies. The research compares the supply chain performance of these high technology companies against the supply chain of benchmark (or commodity-type) companies at both strategic and tactical levels. In addition, the research also looks at supply chain practices within the high technology companies. The results indicate that at the strategic level the high technology companies and benchmark companies have a similar approach to supply chain management. However at the tactical, or critical, supply chain factor level, the analysis suggests that the high technology companies do have a different approach to supply chain management. The analysis also found differences in supply chain practices within the high technology companies; in this case the analysis shows that high technology companies with more advanced supply chain practices are more successful.",TRUE,research problem
R11,Science,R33375,Critical factors for implementing green supply chain management practice,S115513,R33376,has research problem,R33101,Supply chain management,"Purpose – The purpose of this paper is to explore critical factors for implementing green supply chain management (GSCM) practice in the Taiwanese electrical and electronics industries relative to European Union directives.Design/methodology/approach – A tentative list of critical factors of GSCM was developed based on a thorough and detailed analysis of the pertinent literature. The survey questionnaire contained 25 items, developed based on the literature and interviews with three industry experts, specifically quality and product assurance representatives. A total of 300 questionnaires were mailed out, and 87 were returned, of which 84 were valid, representing a response rate of 28 percent. Using the data collected, the identified critical factors were performed via factor analysis to establish reliability and validity.Findings – The results show that 20 critical factors were extracted into four dimensions, which denominated supplier management, product recycling, organization involvement and life cycl...",TRUE,research problem
R11,Science,R33436,A Study of Key Success Factors for Supply Chain Management System in Semiconductor Industry,S115622,R33437,has research problem,R33101,Supply chain management,"Developing a supply chain management (SCM) system is costly, but important. However, because of its complicated nature, not many of such projects are considered successful. Few research publications directly relate to key success factors (KSFs) for implementing and operating a SCM system. Motivated by the above, this research proposes two hierarchies of KSFs for SCM system implementation and operation phase respectively in the semiconductor industry by using a two-step approach. First, a literature review indicates the initial hierarchy. The second step includes a focus group approach to finalize the proposed KSF hierarchies by extracting valuable experiences from executives and managers that actively participated in a project, which successfully establish a seamless SCM integration between the world's largest semiconductor foundry manufacturing company and the world's largest assembly and testing company. Finally, this research compared the KSF's between the two phases and made a conclusion. Future project executives may refer the resulting KSF hierarchies as a checklist for SCM system implementation and operation in semiconductor or related industries.",TRUE,research problem
R11,Science,R33461,Supply chain management: success factors from the Malaysian manufacturer's perspective,S115674,R33462,has research problem,R33101,Supply chain management,"The purpose of this paper is to shed the light on the critical success factors that lead to high supply chain performance outcomes in a Malaysian manufacturing company. The critical success factors consist of relationship with customer and supplier, information communication and technology (ICT), material flow management, corporate culture and performance measurement. Questionnaire was the main instrument for the study and it was distributed to 84 staff from departments of purchasing, planning, logistics and operation. Data analysis was conducted by employing descriptive analysis (mean and standard deviation), reliability analysis, Pearson correlation analysis and multiple regression. The findings show that there are relationships exist between relationship with customer and supplier, ICT, material flow management, performance measurement and supply chain management (SCM) performance, but not for corporate culture. Forming a good customer and supplier relationship is the main predictor of SCM performance, followed by performance measurement, material flow management and ICT. It is recommended that future study to determine additional success factors that are pertinent to firms’ current SCM strategies and directions, competitive advantages and missions. Logic suggests that further study to include more geographical data coverage, other nature of businesses and research instruments. Key words: Supply chain management, critical success factor.",TRUE,research problem
R11,Science,R33476,An empirical study on the impact of critical success factors on the balanced scorecard performance in Korean green supply chain management enterprises,S115705,R33477,has research problem,R33101,Supply chain management,"Rapid industrial modernisation and economic reform have been features of the Korean economy since the 1990s, and have brought with it substantial environmental problems. In response to these problems, the Korean government has been developing approaches to promote cleaner production technologies. Green supply chain management (GSCM) is emerging to be an important approach for Korean enterprises to improve performance. The purpose of this study is to examine the impact of GSCM CSFs (critical success factors) on the BSC (balanced scorecard) performance by the structural equation modelling, using empirical results from 249 enterprise respondents involved in national GSCM business in Korea. Planning and implementation was a dominant antecedent factor in this study, followed by collaboration with partners and integration of infrastructure. However, activation of support was a negative impact to the finance performance, raising the costs and burdens. It was found out that there were important implications in the implementation of GSCM.",TRUE,research problem
R11,Science,R33486,Understanding the Success Factors of Sustainable Supply Chain Management: Empirical Evidence from the Electrics and Electronics Industry,S115732,R33487,has research problem,R33101,Supply chain management,"Recent studies have reported that organizations are often unable to identify the key success factors of Sustainable Supply Chain Management (SSCM) and to understand their implications for management practice. For this reason, the implementation of SSCM often does not result in noticeable benefits. So far, research has failed to offer any explanations for this discrepancy. In view of this fact, our study aims at identifying and analyzing the factors that underlie successful SSCM. Success factors are identified by means of a systematic literature review and are then integrated into an explanatory model. Consequently, the proposed success factor model is tested on the basis of an empirical study focusing on recycling networks of the electrics and electronics industry. We found that signaling, information provision and the adoption of standards are crucial preconditions for strategy commitment, mutual learning, the establishment of ecological cycles and hence for the overall success of SSCM. Copyright © 2011 John Wiley & Sons, Ltd and ERP Environment.",TRUE,research problem
R11,Science,R33506,Key success factor analysis for e‐SCM project implementation and a case study in semiconductor manufacturers,S115774,R33507,has research problem,R33101,Supply chain management,"Purpose – The semiconductor market exceeded US$250 billion worldwide in 2010 and has had a double‐digit compound annual growth rate (CAGR) in the last 20 years. As it is located far upstream of the electronic product market, the semiconductor industry has suffered severely from the “bullwhip” effect. Therefore, effective e‐based supply chain management (e‐SCM) has become imperative for the efficient operation of semiconductor manufacturing (SM) companies. The purpose of this research is to define and analyze the key success factors (KSF) for e‐SCM system implementation in the semiconductor industry.Design/methodology/approach – A hierarchy of KSFs is defined first by a combination of a literature review and a focus group discussion with experts who successfully implemented an inter‐organizational e‐SCM project. Fuzzy analytic hierarchy process (FAHP) is then employed to rank the importance of these identified KSFs. To confirm the research result and further explore the managerial implications, a second in...",TRUE,research problem
R11,Science,R33529,Supply chain issues in SMEs: select insights from cases of Indian origin,S115814,R33530,has research problem,R33101,Supply chain management,"This article reports the supply chain issues in small and medium scale enterprises (SMEs) using insights from select cases of Indian origin (manufacturing SMEs). A broad range of qualitative and quantitative data were collected during interviews and plant visits in a multi-case study (of 10 SMEs) research design. Company documentation and business reports were also employed. Analysis is carried out using diagnostic tools like ‘EBM-REP’ (Thakkar, J., Kanda, A., and Deshmukh, S.G., 2008c. An enquiry-analysis framework “EBM-REP” for qualitative research. International Journal of Innovation and Learning (IJIL), 5 (5), 557–580.) and ‘Role Interaction Model’ (Thakkar J., Kanda, A., and Deshmukh, S.G., 2008b. A conceptual role interaction model for supply chain management in SMEs. Journal of Small Business and Enterprise Development (JSBED), 15 (1), 74–95). This article reports a set of critical success factors and evaluates six critical research questions for the successful supply chain planning and management in SMEs. The results of this article will help SME managers to assess their supply chain function more rigorously. This article addresses the issue on supply chain management in SMEs using case study approach and diagnostic tools to add select new insights to the existing body of knowledge on supply chain issues in SMEs.",TRUE,research problem
R11,Science,R33534,Application of critical success factors in supply chain management,S115825,R33535,has research problem,R33101,Supply chain management,"This study is the first attempt that assembled published academic work on critical success factors (CSFs) in supply chain management (SCM) fields. The purpose of this study are to review the CSFs in SCM and to uncover the major CSFs that are apparent in SCM literatures. This study apply literature survey techniques from published CSFs studies in SCM. A collection of 42 CSFs studies in various SCM fields are obtained from major databases. The search uses keywords such as as supply chain management, critical success factors, logistics management and supply chain drivers and barriers. From the literature survey, four major CSFs are proposed. The factors are collaborative partnership, information technology, top management support and human resource. It is hoped that this review will serve as a platform for future research in SCM and CSFs studies. Plus, this study contribute to existing SCM knowledge and further appraise the concept of CSFs.",TRUE,research problem
R11,Science,R33564,Identification of critical success factors to achieve high green supply chain management performances in Indian automobile industry,S115871,R33565,has research problem,R33101,Supply chain management,"Green supply chain management (GSCM) has been receiving the spotlight in last few years. The study aims to identify critical success factors (CSFs) to achieve high GSCM performances from three perspectives i.e., environmental, social and economic performance. CSFs to achieve high GSCM performances relevant to Indian automobile industry have been identified and categorised according to three perspectives from the literature review and experts' opinions. Conceptual models also have been put forward. This paper may play vital role to understand CSFs to achieve GSCM performances in Indian automobile industry and help the supply chain managers to understand how they may improve environmental, social and economic performance.",TRUE,research problem
R11,Science,R33571,Critical success factors of green supply chain management for achieving sustainability in Indian automobile industry,S115885,R33572,has research problem,R33101,Supply chain management,"The aim of this study was to identify and analyse the key success factors behind successful achievement of environment sustainability in Indian automobile industry supply chains. Here, critical success factors (CSFs) and performance measures of green supply chain management (GSCM) have been identified through extensive literature review and discussions with experts from Indian automobile industry. Based on the literature review, a questionnaire was designed and 123 final responses were considered. Six CSFs to implement GSCM for achieving sustainability and four expected performance measures of GSCM practices implementation were extracted using factor analysis. interpretive ranking process (IRP) modelling approach is employed to examine the contextual relationships among CSFs and to rank them with respect to performance measures. The developed IRP model shows that the CSF ‘Competitiveness’ is the most important CSF for achieving sustainability in Indian automobile industry through GSCM practices. This study is one of the few that have considered the environmental sustainability practices in the automobile industry in India and their implications on sectoral economy. The results of this study may help the mangers/SC practitioners/Governments/Customers in making strategic and tactical decisions regarding successful implementation of GSCM practices in Indian automobile industry with a sustainability focus. The developed framework provides a comprehensive perspective for assessing the synergistic impact of CSFs on GSCM performances and can act as ready reckoner for the practitioners. As there is very limited work presented in literature using IRP, this piece of work would provide a better understanding of this relatively new ranking methodology.",TRUE,research problem
R11,Science,R33579,Critical success factors of customer involvement in greening the supply chain: an empirical study,S115900,R33580,has research problem,R33101,Supply chain management,The role of customers and their involvement in green supply chain management (GSCM) has been recognised as an important research area. This paper is an attempt to explore factors influencing involvement of customers towards greening the supply chain (SC). Twenty-five critical success factors (CSFs) of customer involvement in GSCM have been identified from literature review and through extensive discussions with senior and middle level SC professionals. Interviews and questionnaire-based survey have been used to indicate the significance of these CSFs. A total of 478 valid responses were received to rate these CSFs on a five-point Likert scale (ranging from unimportant to most important). Statistical analysis has been carried out to establish the reliability and to test the validity of the questionnaires. Subsequent factor analysis has identified seven major components covering 79.24% of total variance. This paper may help to establish the importance of customer role in promoting green concept in SCs and to develop an understanding of factors influencing customer involvement – key input towards creating ‘greening pull system’ (GPSYS). This understanding may further help in framing the policies and strategies to green the SC.,TRUE,research problem
R11,Science,R32687,Maritime situation awareness capabilities from satellite and terrestrial sensor systems,S111582,R32688,has research problem,R32545,Vessel detection,"Maritime situation awareness is supported by a combination of satellite, airborne, and terrestrial sensor systems. This paper presents several solutions to process that sensor data into information that supports operator decisions. Examples are vessel detection algorithms based on multispectral image techniques in combination with background subtraction, feature extraction techniques that estimate the vessel length to support vessel classification, and data fusion techniques to combine image based information, detections from coastal radar, and reports from cooperative systems such as (satellite) AIS. Other processing solutions include persistent tracking techniques that go beyond kinematic tracking, and include environmental information from navigation charts, and if available, ELINT reports. And finally rule-based and statistical solutions for the behavioural analysis of anomalous vessels. With that, trends and future work will be presented.",TRUE,research problem
R11,Science,R32694,NEAR REAL-TIME AUTOMATIC MARINE VESSEL DETECTION ON OPTICAL SATELLITE IMAGES,S111630,R32695,has research problem,R32545,Vessel detection,"Abstract. Vessel monitoring and surveillance is important for maritime safety and security, environment protection and border control. Ship monitoring systems based on Synthetic-aperture Radar (SAR) satellite images are operational. On SAR images the ships made of metal with sharp edges appear as bright dots and edges, therefore they can be well distinguished from the water. Since the radar is independent from the sun light and can acquire images also by cloudy weather and rain, it provides a reliable service. Vessel detection from spaceborne optical images (VDSOI) can extend the SAR based systems by providing more frequent revisit times and overcoming some drawbacks of the SAR images (e.g. lower spatial resolution, difficult human interpretation). Optical satellite images (OSI) can have a higher spatial resolution thus enabling the detection of smaller vessels and enhancing the vessel type classification. The human interpretation of an optical image is also easier than as of SAR image. In this paper I present a rapid automatic vessel detection method which uses pattern recognition methods, originally developed in the computer vision field. In the first step I train a binary classifier from image samples of vessels and background. The classifier uses simple features which can be calculated very fast. For the detection the classifier is slided along the image in various directions and scales. The detector has a cascade structure which rejects most of the background in the early stages which leads to faster execution. The detections are grouped together to avoid multiple detections. Finally the position, size (i.e. length and width) and heading of the vessels is extracted from the contours of the vessel. The presented method is parallelized, thus it runs fast (in minutes for 16000 × 16000 pixels image) on a multicore computer, enabling near real-time applications, e.g. one hour from image acquisition to end user.",TRUE,research problem
R11,Science,R32726,Texture-based vessel classifier for electro-optical satellite imagery,S111856,R32727,has research problem,R32545,Vessel detection,"Satellite imagery provides a valuable source of information for maritime surveillance. The vast majority of the research regarding satellite imagery for maritime surveillance focuses on vessel detection and image enhancement, whilst vessel classification remains a largely unexplored research topic. This paper presents a vessel classifier for spaceborne electro-optical imagery based on a feature representative across all satellite imagery, texture. Local Binary Patterns were selected to represent vessels for their high distinctivity and low computational complexity. Considering vessels characteristic super-structure, the extracted vessel signatures are sub-divided in three sections bow, middle and stern. A hierarchical decision-level classification is proposed, analysing first each vessel section individually and then combining the results in the second stage. The proposed approach is evaluated with the electro-optical satellite image dataset presented in [1]. Experimental results reveal an accuracy of 85.64% across four vessel categories.",TRUE,research problem
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5765,R5230,has research problem,R5232,self-citation,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,research problem
R141823,Semantic Web,R172806,Discovery of ontologies from knowledge bases,S689401,R172808,has research problem,R172809,discover implicit ontological relationships,"Current approaches to building knowledge-based systems propose the development of an ontology as a precursor to building the problem-solver. This paper outlines an attempt to do the reverse and discover interesting ontologies from systems built without the ontology being explicit. In particular the paper considers large classification knowledge bases used for the interpretation of medical chemical pathology results and built using Ripple-Down Rules (RDR). The rule conclusions in these knowledge bases provide free-text interpretations of the results rather than explicit classes. The goal is to discover implicit ontological relationships between these interpretations as the system evolves. RDR allows for incremental development and the goal is that the ontology emerges as the system evolves. The results suggest that approach has potential, but further investigation is required before strong claims can be made.",TRUE,research problem
R141823,Semantic Web,R172893,Using an Ontology Learning System for Trend Analysis and Detection,S689805,R172895,has research problem,R172896,an ontology learning system to create domain ontologies from scratch,"The aim of ontology learning is to generate domain models (semi-) automatically. We apply an ontology learning system to create domain ontologies from scratch in a monthly interval and use the resulting data to detect and analyze trends in the domain. In contrast to traditional trend analysis on the level of single terms, the application of semantic technologies allows for a more abstract and integrated view of the domain. A Web frontend displays the resulting ontologies, and a number of analyses are performed on the data collected. This frontend can be used to detect trends and evolution in a domain, and dissect them on an aggregated, as well as a fine-grained-level.",TRUE,research problem
R141823,Semantic Web,R172802,Building accurate semantic taxonomies from monolingual MRDs,S689391,R172804,has research problem,R172805,automatic extraction of taxonomic links from MRD entries,"This paper presents a method that conbines a set of unsupervised algorithms in order to accurately build large taxonomies from any machine-readable dictionary (MRD). Our aim is to profit from conventional MRDs, with no explicit semantic coding. We propose a system that 1) performs fully automatic extraction of taxonomic links from MRD entries and 2) ranks the extracted relations in a way that selective manual refinement is allowed. Tested accuracy can reach around 100% depending on the degree of coverage selected, showing that taxonomy building is not limited to structured dictionaries such as LDOCE.",TRUE,research problem
R141823,Semantic Web,R165795,Automatic Subject Indexing with Knowledge Graphs,S660831,R165797,has research problem,R165800,automatic subject indexing,"Automatic subject indexing has been a longstanding goal of digital curators to facilitate effective retrieval access to large collections of both online and offline information resources. Controlled vocabularies are often used for this purpose, as they standardise annotation practices and help users to navigate online resources through following interlinked topical concepts. However, to this date, the assignment of suitable text annotations from a controlled vocabulary is still largely done manually, or at most (semi-)automatically, even though effective machine learning tools are already in place. This is because existing procedures require a sufficient amount of training data and they have to be adapted to each vocabulary, language and application domain anew. In this paper, we argue that there is a third solution to subject indexing which harnesses cross-domain knowledge graphs. Our KINDEX approach fuses distributed knowledge graph information from different sources. Experimental evaluation shows that the approach achieves good accuracy scores by exploiting correspondence links of publicly available knowledge graphs.",TRUE,research problem
R141823,Semantic Web,R186138,ShEx-Lite: Automatic Generation of Domain Object Models from a Shape Expressions Subset Language,S711639,R186140,has research problem,R120366,Code Generation,"Shape Expressions (ShEx) was defined as a human-readable and concise language to describe and validate RDF. In the last years, the usage of ShEx has grown and more functionalities are being demanded. One such functionality is to ensure interoperability between ShEx schemas and domain models in programming languages. In this paper, we present ShEx-Lite, a tabular based subset of ShEx that allows to generate domain object models in different object-oriented languages. Although the current system generates Java and Python, it offers a public interface so anyone can implement code generation in other programming languages. The system has been employed in a workflow where the shape expressions are used both to define constraints over an ontology and to generate domain objects that will be part of a clean architecture style.",TRUE,research problem
R141823,Semantic Web,R172814,AUTOMATING DATA ACQUISITION INTO ONTOLOGIES FROM PHARMACOGENETICS RELATIONAL DATA SOURCES USING DECLARATIVE OBJECT DEFINITIONS AND XML,S689429,R172816,has research problem,R172817,interfacing ontology models with data acquisition from external relational data sources,"Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order make the diversity of pharmacogenetics data amenable to computational analysis.",TRUE,research problem
R141823,Semantic Web,R172818,Automated Learning of Social Ontologies,S689442,R172820,has research problem,R172821,learning ontologies from social content,"Learned social ontologies can be viewed as products of a social fermentation process, i.e. a process between users who belong in communities of common interests (CoI), in open, collaborative, and communicative environments. In such a setting, social fermentation ensures the automatic encapsulation of agreement and trust of shared knowledge that participating stakeholders provide during an ontology learning task. This chapter discusses the requirements for the automated learning of social ontologies and presents a working method and results of preliminary work. Furthermore, due to its importance for the exploitation of the learned ontologies, it introduces a model for representing the interlinking of agreement, trust and the learned domain conceptualizations that are extracted from social content. The motivation behind this work is an effort towards supporting the design of methods for learning ontologies from social content i.e. methods that aim to learn not only domain conceptualizations but also the degree that agents (software and human) may trust these conceptualizations or not.",TRUE,research problem
R141823,Semantic Web,R162600,DBpedia Archivo - A Web-Scale Interface for Ontology Archiving under Consumer-oriented Aspects,S648812,R162602,has research problem,R162603,Unified System for handling online Ontologies,"Abstract While thousands of ontologies exist on the web, a unified system for handling online ontologies – in particular with respect to discovery, versioning, access, quality-control, mappings – has not yet surfaced and users of ontologies struggle with many challenges. In this paper, we present an online ontology interface and augmented archive called DBpedia Archivo, that discovers, crawls, versions and archives ontologies on the DBpedia Databus. Based on this versioned crawl, different features, quality measures and, if possible, fixes are deployed to handle and stabilize the changes in the found ontologies at web-scale. A comparison to existing approaches and ontology repositories is given .",TRUE,research problem
R281,Social and Behavioral Sciences,R76141,Bullying Victimization among In-School Adolescents in Ghana: Analysis of Prevalence and Correlates from the Global School-Based Health Survey,S348410,R76150,has research problem,R76151,bullying,"(1) Background: Although bullying victimization is a phenomenon that is increasingly being recognized as a public health and mental health concern in many countries, research attention on this aspect of youth violence in low- and middle-income countries, especially sub-Saharan Africa, is minimal. The current study examined the national prevalence of bullying victimization and its correlates among in-school adolescents in Ghana. (2) Methods: A sample of 1342 in-school adolescents in Ghana (55.2% males; 44.8% females) aged 12–18 was drawn from the 2012 Global School-based Health Survey (GSHS) for the analysis. Self-reported bullying victimization “during the last 30 days, on how many days were you bullied?” was used as the central criterion variable. Three-level analyses using descriptive, Pearson chi-square, and binary logistic regression were performed. Results of the regression analysis were presented as adjusted odds ratios (aOR) at 95% confidence intervals (CIs), with a statistical significance pegged at p < 0.05. (3) Results: Bullying victimization was prevalent among 41.3% of the in-school adolescents. Pattern of results indicates that adolescents in SHS 3 [aOR = 0.34, 95% CI = 0.25, 0.47] and SHS 4 [aOR = 0.30, 95% CI = 0.21, 0.44] were less likely to be victims of bullying. Adolescents who had sustained injury [aOR = 2.11, 95% CI = 1.63, 2.73] were more likely to be bullied compared to those who had not sustained any injury. The odds of bullying victimization were higher among adolescents who had engaged in physical fight [aOR = 1.90, 95% CI = 1.42, 2.25] and those who had been physically attacked [aOR = 1.73, 95% CI = 1.32, 2.27]. Similarly, adolescents who felt lonely were more likely to report being bullied [aOR = 1.50, 95% CI = 1.08, 2.08] as against those who did not feel lonely. Additionally, adolescents with a history of suicide attempts were more likely to be bullied [aOR = 1.63, 95% CI = 1.11, 2.38] and those who used marijuana had higher odds of bullying victimization [aOR = 3.36, 95% CI = 1.10, 10.24]. (4) Conclusions: Current findings require the need for policy makers and school authorities in Ghana to design and implement policies and anti-bullying interventions (e.g., Social Emotional Learning (SEL), Emotive Behavioral Education (REBE), Marijuana Cessation Therapy (MCT)) focused on addressing behavioral issues, mental health and substance abuse among in-school adolescents.",TRUE,research problem
R281,Social and Behavioral Sciences,R76164,Patterns and Correlates for Bullying among Young Adolescents in Ghana,S348480,R76166,has research problem,R76151,bullying,"Bullying is relatively common and is considered to be a public health problem among adolescents worldwide. The present study examined the risk factors associated with bullying behavior among adolescents in a lower-middle-income country setting. Data on 6235 adolescents aged 11–16 years, derived from the Republic of Ghana’s contribution to the Global School-based Health Survey, were analyzed using bivariate and multinomial logistic regression analysis. A high prevalence of bullying was found among Ghanaian adolescents. Alcohol-related health compromising behaviors (alcohol use, alcohol misuse and getting into trouble as a result of alcohol) increased the risk of being bullied. In addition, substance use, being physically attacked, being seriously injured, hunger and truancy were also found to increase the risk of being bullied. However, having understanding parents and having classmates who were kind and helpful reduced the likelihood of being bullied. These findings suggest that school-based intervention programs aimed at reducing rates of peer victimization should simultaneously target multiple risk behaviors. Teachers can also reduce peer victimization by introducing programs that enhance adolescents’ acceptance of each other in the classroom.",TRUE,research problem
R281,Social and Behavioral Sciences,R38552,""" Does 4-4-2 exist?""– An Analytics Approach to Understand and Classify Football Team Formations in Single Match Situations",S126432,R38554,has research problem,R38558,Formation Classification,"The chance to win a football match can be significantly increased if the right tactic is chosen and the behavior of the opposite team is well anticipated. For this reason, every professional football club employs a team of game analysts. However, at present game performance analysis is done manually and therefore highly time-consuming. Consequently, automated tools to support the analysis process are required. In this context, one of the main tasks is to summarize team formations by patterns such as 4-4-2 that can give insights into tactical instructions and patterns. In this paper, we introduce an analytics approach that automatically classifies and visualizes the team formation based on the players' position data. We focus on single match situations instead of complete halftimes or matches to provide a more detailed analysis. The novel classification approach calculates the similarity based on pre-defined templates for different tactical formations. A detailed analysis of individual match situations depending on ball possession and match segment length is provided. For this purpose, a visual summary is utilized that summarizes the team formation in a match segment. An expert annotation study is conducted that demonstrates 1) the complexity of the task and 2) the usefulness of the visualization of single situations to understand team formations. The suggested classification approach outperforms existing methods for formation classification. In particular, our approach gives insights into the shortcomings of using patterns like 4-4-2 to describe team formations.",TRUE,research problem
R354,Sociology,R44713,Telephone psychotherapy and telephone care management for primary care patients starting antidepressant treatment: a randomized controlled trial,S136763,R44714,has research problem,R44679,Psychotherapy for Depression,"CONTEXT Both antidepressant medication and structured psychotherapy have been proven efficacious, but less than one third of people with depressive disorders receive effective levels of either treatment. OBJECTIVE To compare usual primary care for depression with 2 intervention programs: telephone care management and telephone care management plus telephone psychotherapy. DESIGN Three-group randomized controlled trial with allocation concealment and blinded outcome assessment conducted between November 2000 and May 2002. SETTING AND PARTICIPANTS A total of 600 patients beginning antidepressant treatment for depression were systematically sampled from 7 group-model primary care clinics; patients already receiving psychotherapy were excluded. INTERVENTIONS Usual primary care; usual care plus a telephone care management program including at least 3 outreach calls, feedback to the treating physician, and care coordination; usual care plus care management integrated with a structured 8-session cognitive-behavioral psychotherapy program delivered by telephone. MAIN OUTCOME MEASURES Blinded telephone interviews at 6 weeks, 3 months, and 6 months assessed depression severity (Hopkins Symptom Checklist Depression Scale and the Patient Health Questionnaire), patient-rated improvement, and satisfaction with treatment. Computerized administrative data examined use of antidepressant medication and outpatient visits. RESULTS Treatment participation rates were 97% for telephone care management and 93% for telephone care management plus psychotherapy. 
Compared with usual care, the telephone psychotherapy intervention led to lower mean Hopkins Symptom Checklist Depression Scale depression scores (P =.02), a higher proportion of patients reporting that depression was ""much improved"" (80% vs 55%, P<.001), and a higher proportion of patients ""very satisfied"" with depression treatment (59% vs 29%, P<.001). The telephone care management program had smaller effects on patient-rated improvement (66% vs 55%, P =.04) and satisfaction (47% vs 29%, P =.001); effects on mean depression scores were not statistically significant. CONCLUSIONS For primary care patients beginning antidepressant treatment, a telephone program integrating care management and structured cognitive-behavioral psychotherapy can significantly improve satisfaction and clinical outcomes. These findings suggest a new public health model of psychotherapy for depression including active outreach and vigorous efforts to improve access to and motivation for treatment.",TRUE,research problem
R140,Software Engineering,R53034,MODELING SAFEST AND OPTIMAL EMERGENCY EVACUATION PLAN FOR LARGE-SCALE PEDESTRIANS ENVIRONMENTS,S161134,R53035,has research problem,R53039,crowd safety,"Large-scale events are always vulnerable to natural disasters and man-made chaos which poses great threat to crowd safety. Such events need an appropriate evacuation plan to alleviate the risk of casualties. We propose a modeling framework for large-scale evacuation of pedestrians during emergency situation. Proposed framework presents optimal and safest path evacuation for a hypothetical large-scale crowd scenario. The main aim is to provide the safest and nearest evacuation path because during disastrous situations there is possibility of exit gate blockade and directions of evacuees may have to be changed at run time. For this purpose run time diversions are given to evacuees to ensure their quick and safest exit. In this work, different evacuation algorithms are implemented and compared to determine the optimal solution in terms of evacuation time and crowd safety. The recommended framework incorporates Anylogic simulation environment to design complex spatial environment for large-scale pedestrians as agents.",TRUE,research problem
R140,Software Engineering,R49480,Software Architecture Optimization Methods: A Systematic Literature Review,S147723,R49482,has research problem,R49487,Software architecture optimization,"Due to significant industrial demands toward software systems with increasing complexity and challenging quality requirements, software architecture design has become an important development activity and the research domain is rapidly evolving. In the last decades, software architecture optimization methods, which aim to automate the search for an optimal architecture design with respect to a (set of) quality attribute(s), have proliferated. However, the reported results are fragmented over different research communities, multiple system domains, and multiple quality attributes. To integrate the existing research results, we have performed a systematic literature review and analyzed the results of 188 research papers from the different research communities. Based on this survey, a taxonomy has been created which is used to classify the existing research. Furthermore, the systematic analysis of the research literature provided in this review aims to help the research community in consolidating the existing research efforts and deriving a research agenda for future developments.",TRUE,research problem
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5399,R4928,has research problem,R4929,Establishing a method to improve e-learning materials based on learners' mental states.,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,research problem
R106,Systems Biology,R49453,MetaboMAPS: Pathway sharing and multi-omics data visualization in metabolic context,S147526,R49455,has research problem,R38854,Visualization,"Metabolic pathways are an important part of systems biology research since they illustrate complex interactions between metabolites, enzymes, and regulators. Pathway maps are drawn to elucidate metabolism or to set data in a metabolic context. We present MetaboMAPS, a web-based platform to visualize numerical data on individual metabolic pathway maps. Metabolic maps can be stored, distributed and downloaded in SVG-format. MetaboMAPS was designed for users without computational background and supports pathway sharing without strict conventions. In addition to existing applications that established standards for well-studied pathways, MetaboMAPS offers a niche for individual, customized pathways beyond common knowledge, supporting ongoing research by creating publication-ready visualizations of experimental data.",TRUE,research problem
R369,"Theory, Knowledge and Science",R76770,Knowledge Graphs in Manufacturing and Production: A Systematic Literature Review,S350481,R76772,has research problem,R76784, finding the primary studies in the existing literature,"Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output. This makes knowledge graphs attractive for companies to reach Industry 4.0 goals. However, existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied in the field of manufacturing and production is needed. Therefore, we have conducted a systematic literature review as an attempt to characterize the state-of-the-art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. We have focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. Besides, an evaluation of the primary studies has also been carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we can offer a complete picture of the domain, which includes such interesting aspects as the fact that knowledge fusion is currently the main use case for knowledge graphs, that empirical research and industrial application are still missing to a large extent, that graph embeddings are not fully exploited, and that technical literature is fast-growing but still seems to be far from its peak.",TRUE,research problem
R369,"Theory, Knowledge and Science",R75675,Knowledge Graph Refinement: A Survey of Approaches and Evaluation Methods,S346242,R75677,has research problem,R75678,knowledge graph refinement approaches,"In the recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term ""Knowledge Graph"" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.",TRUE,research problem
R369,"Theory, Knowledge and Science",R76758,Relational Representation Learning for Dynamic (Knowledge) Graphs: A Survey,S350444,R76760,has research problem,R76761,review the recent advances in representation learning for dynamic graphs,"Graphs arise naturally in many real-world applications including social networks, recommender systems, ontologies, biology, and computational finance. Traditionally, machine learning models for graphs have been mostly designed for static graphs. However, many applications involve evolving graphs. This introduces important challenges for learning and inference since nodes, attributes, and edges change over time. In this survey, we review the recent advances in representation learning for dynamic graphs, including dynamic knowledge graphs. We describe existing models from an encoder-decoder perspective, categorize these encoders and decoders based on the techniques they employ, and analyze the approaches in each category. We also review several prominent applications and widely used datasets, and highlight directions for future research.",TRUE,research problem
R369,"Theory, Knowledge and Science",R75084,"A Survey on Knowledge Graph Embedding: Approaches, Applications and Benchmarks",S344471,R75086,has research problem,R75092,systematically introduce the existing state-of-the-art approaches and a variety of applications,"A knowledge graph (KG), also known as a knowledge base, is a particular kind of network structure in which the node indicates entity and the edge represent relation. However, with the explosion of network volume, the problem of data sparsity that causes large-scale KG systems to calculate and manage difficultly has become more significant. For alleviating the issue, knowledge graph embedding is proposed to embed entities and relations in a KG to a low-dimensional, dense and continuous feature space, and endow the yield model with abilities of knowledge inference and fusion. In recent years, many researchers have poured much attention in this approach, and we will systematically introduce the existing state-of-the-art approaches and a variety of applications that benefit from these methods in this paper. In addition, we discuss future prospects for the development of techniques and application trends. Specifically, we first introduce the embedding models that only leverage the information of observed triplets in the KG. We illustrate the overall framework and specific idea and compare the advantages and disadvantages of such approaches. Next, we introduce the advanced models that utilize additional semantic information to improve the performance of the original methods. We divide the additional information into two categories, including textual descriptions and relation paths. The extension approaches in each category are described, following the same classification criteria as those defined for the triplet fact-based models. We then describe two experiments for comparing the performance of listed methods and mention some broader domain tasks such as question answering, recommender systems, and so forth. 
Finally, we collect several hurdles that need to be overcome and provide a few future research directions for knowledge graph embedding.",TRUE,research problem
R369,"Theory, Knowledge and Science",R76762,Virtual Knowledge Graphs: An Overview of Systems and Use Cases.,S350454,R76764,has research problem,R76765,virtual knowledge graph (VKG) paradigm for data integration,"In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions.",TRUE,research problem
R141,Theory/Algorithms,R44108,Time Series Data Cleaning: A Survey,S134404,R44111,has research problem,R44112,Time Series Data Cleaning,"Errors are prevalent in time series data, which is particularly common in the industrial field. Data with errors could not be stored in the database, which results in the loss of data assets. At present, to deal with these time series containing errors, besides keeping original erroneous data, discarding erroneous data and manually checking erroneous data, we can also use the cleaning algorithm widely used in the database to automatically clean the time series data. This survey provides a classification of time series data cleaning techniques and comprehensively reviews the state-of-the-art methods of each type. Besides we summarize data cleaning tools, systems and evaluation criteria from research and industry. Finally, we highlight possible directions for time series data cleaning.",TRUE,research problem
R342,Urban Studies,R74127,ISO-Standardized Smart City Platform Architecture and Dashboard,S341055,R74129,has research problem,R74272,City dashboards,"A concept guided by the ISO 37120 standard for city services and quality of life is suggested as unified framework for smart city dashboards. The slow (annual, quarterly, or monthly) ISO 37120 indicators are enhanced and complemented with more detailed and person-centric indicators that can further accelerate the transition toward smart cities. The architecture supports three tasks: acquire and manage data from heterogeneous sensors; process data originated from heterogeneous sources (sensors, OpenData, social data, blogs, news, and so on); and implement such collection and processing on the cloud. A prototype application based on the proposed architecture concept is developed for the city of Skopje, Macedonia. This article is part of a special issue on smart cities.",TRUE,research problem
R342,Urban Studies,R74221,Cities-Board: A Framework to Automate the Development of Smart Cities Dashboards,S341048,R74225,has research problem,R74275,Model-driven engineering,"Smart cities’ authorities use graphic dashboards to visualize and analyze important information on cities, citizens, institutions, and their interactions. This information supports various decision-making processes that affect citizens’ quality of life. Cities across the world have similar, if not the same, functional and nonfunctional requirements to develop their dashboards. Software developers will face the same challenges and they are likely to provide similar solutions for each developed city dashboard. Moreover, the development of these dashboards implies a significant investment in terms of human and financial resources from cities. The automation of the development of smart cities dashboards is feasible as these visualization systems will have common requirements between cities. This article introduces cities-board, a framework to automate the development of smart cities dashboards based on model-driven engineering. Cities-board proposes a graphic domain-specific language (DSL) that allows the creation of dashboard models with concepts that are closer to city authorities. Cities-board transforms these dashboards models to functional code artifacts by using model-to-model (M2M) and model-to-text (M2T) transformations. We evaluate cities-board by measuring the generation time, and the quality of the generated code under different models configurations. Results show the strengths and weaknesses of cities-board compared against a generic code generation tool.",TRUE,research problem
R342,Urban Studies,R74133,"Research Notes: Smart City Control Room Dashboards: Big Data Infrastructure, from data to decision support",S341053,R74135,has research problem,R74276,Smart city control rooms,"Smart City Control Rooms are mainly focused on Dashboards which are in turn created by using the so-called Dashboard Builders tools or generated custom. For a city the production of Dashboards is not something that is performed once forever, and it is a continuous working task for improving city monitoring, to follow extraordinary events and/or activities, to monitor critical conditions and cases. Thus, relevant complexities are due to the data aggregation architecture and to the identification of modalities to present data and their identification, prediction, etc., to arrive at producing high level representations that can be used by decision makers. In this paper, the architecture of a Dashboard Builder for creating Smart City Control Rooms is presented. As a validation and test, it has been adopted for generating the dashboards in Florence city and other cities in Tuscany area. The solution proposed has been developed in the context of REPLICATE H2020 European Commission Flagship project on Smart City and Communities.",TRUE,research problem
R342,Urban Studies,R74127,ISO-Standardized Smart City Platform Architecture and Dashboard,S341056,R74129,has research problem,R74277,Standard for city services,"A concept guided by the ISO 37120 standard for city services and quality of life is suggested as unified framework for smart city dashboards. The slow (annual, quarterly, or monthly) ISO 37120 indicators are enhanced and complemented with more detailed and person-centric indicators that can further accelerate the transition toward smart cities. The architecture supports three tasks: acquire and manage data from heterogeneous sensors; process data originated from heterogeneous sources (sensors, OpenData, social data, blogs, news, and so on); and implement such collection and processing on the cloud. A prototype application based on the proposed architecture concept is developed for the city of Skopje, Macedonia. This article is part of a special issue on smart cities.",TRUE,research problem
R374,Urban Studies and Planning,R151420,Digital twins technolgy and its data fusion in iron and steel product life cycle,S607316,R151422,has research problem,R5007,digital twin,"The related models in iron and steel product life cycle (IS-PLC), from order, design, purchase, scheduling to specific manufacturing processes (i.e., coking, sintering, blast furnace iron-making, converter, steel-making, continuous steel casting, rolling) is characterized by large-scale, multi-objective, multi-physics, dynamic uncertainty and complicated constraint. To achieve complex task in IS-PLC, involved models need be interrelated and interact, but how to build digital twin models in each IS-PLC stage, and carry out fusion between models and data to achieve virtual space (VS) and physical space (PS) intercorrelation, is a key technology in IS-PLC. In this paper, digital twins modeling and its fusion data problem in each IS-PLC stage are preliminary discussed.",TRUE,research problem
R374,Urban Studies and Planning,R151426,Equipment energy consumption management in digital twin shop-floor: A framework and potential applications,S607336,R151428,has research problem,R5007,digital twin,"With increasing attentions focused on the energy consumption (EC) in manufacturing, it is imperative to realize the equipment energy consumption management (EECM) to reduce the EC and improve the energy efficiency. Recently, with the developments of digital twin (DT) and digital twin shop-floor (DTS), the data and models are enriched greatly and a physical-virtual convergence environment is provided. Accordingly, the new chances emerge for improving the EECM in EC monitoring, analysis and optimization. In this situation, the paper proposes the framework of EECM in DTS and discusses the potential applications, aiming at studying the improvements and providing a guideline for the future works.",TRUE,research problem
R374,Urban Studies and Planning,R154611,Digital Twins In Farm Management: Illustrations From The Fiware Accelerators Smartagrifood And Fractals,S618809,R154613,has research problem,R5007,digital twin,"The Internet of Things (IoT) provides a vision of a world in which the Internet extends into the real world embracing everyday objects. In the IoT, physical objects are accompanied by Digital Twins: virtual, digital equivalents to physical objects. The interaction between real/physical and digital/virtual objects (digital twins) is an essential concept behind this vision. Digital twins can act as a central means to manage farms and has the potential to revolutionize agriculture. It removes fundamental constraints concerning place, time, and human observation. Farming operations would no longer require physical proximity, which allows for remote monitoring, control and coordination of farm operations. Moreover, Digital Twins can be enriched with information that cannot be observed (or not accurately) by the human senses, e.g. sensor and satellite data. A final interesting angle is that Digital Twins do not only represent actual states, but can also reproduce historical states and simulate future states. As a consequence, applications based on Digital Twins, if properly synchronized, enable farmers and other stakeholders to act immediately in case of (expected) deviations. This paper introduces the concept of Digital Twins and illustrate its application in agriculture by six cases of the SmartAgriFood and Fractals accelerator projects (2014-2016).",TRUE,research problem
R374,Urban Studies and Planning,R154617,The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles,S618821,R154619,has research problem,R5007,digital twin,"Future generations of NASA and U.S. Air Force vehicles will require lighter mass while being subjected to higher loads and more extreme service conditions over longer time periods than the present generation. Current approaches for certification, fleet management and sustainment are largely based on statistical distributions of material properties, heuristic design philosophies, physical testing and assumed similitude between testing and operational conditions and will likely be unable to address these extreme requirements. To address the shortcomings of conventional approaches, a fundamental paradigm shift is needed. This paradigm shift, the Digital Twin, integrates ultra-high fidelity simulation with the vehicle s on-board integrated vehicle health management system, maintenance history and all available historical and fleet data to mirror the life of its flying twin and enable unprecedented levels of safety and reliability.",TRUE,research problem
R374,Urban Studies and Planning,R154626,A Digital Twin-Based Approach for Designing and Multi-Objective Optimization of Hollow Glass Production Line,S618852,R154630,has research problem,R5007,digital twin,"Various new national advanced manufacturing strategies, such as Industry 4.0, Industrial Internet, and Made in China 2025, are issued to achieve smart manufacturing, resulting in the increasing number of newly designed production lines in both developed and developing countries. Under the individualized designing demands, more realistic virtual models mirroring the real worlds of production lines are essential to bridge the gap between design and operation. This paper presents a digital twin-based approach for rapid individualized designing of the hollow glass production line. The digital twin merges physics-based system modeling and distributed real-time process data to generate an authoritative digital design of the system at pre-production phase. A digital twin-based analytical decoupling framework is also developed to provide engineering analysis capabilities and support the decision-making over the system designing and solution evaluation. Three key enabling techniques as well as a case study in hollow glass production line are addressed to validate the proposed approach.",TRUE,research problem
R374,Urban Studies and Planning,R154649,Digital Twin of Manufacturing Systems,S618925,R154651,has research problem,R5007,digital twin,"The digitization of manufacturing systems is at the crux of the next industrial revolutions. The digital representation of the “Physical Twin,” also known as the “Digital Twin,” will help in maintaining the process quality effectively by allowing easy visualization and incorporation of cognitive capability in the system. In this technical report, we tackle two issues regarding the Digital Twin: (1) modeling the Digital Twin by extracting information from the side-channel emissions, and (2) making sure that the Digital Twin is up-to-date (or “alive”). We will first analyze various analog emissions to figure out if they behave as side-channels, informing about the various states of both cyber and physical domains. Then, we will present a dynamic data-driven application system enabled Digital Twin, which is able to check if it is the most up-to-date version of the Physical Twin. Index Terms Digital Twin, Cyber-Physical Systems, Digitization, Additive Manufacturing, Machine Learning, Sensor Fusion, Dynamic Data-Driven Application Systems",TRUE,research problem
R374,Urban Studies and Planning,R154672,Digital Twin and Big Data Towards Smart Manufacturing and Industry 4.0: 360 Degree Comparison,S619007,R154674,has research problem,R5007,digital twin,"With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",TRUE,research problem
R374,Urban Studies and Planning,R154687,Towards an extended model-based definition for the digital twin,S619064,R154692,has research problem,R5007,digital twin,"ABSTRACTThe concept of the digital twin calls for virtual replicas of real world products. Achieving this requires a sophisticated network of models that have a level of interconnectivity. The authors attempted to improve model interconnectivity by enhancing the computer-aided design model with spatially related non-geometric data. A tool was created to store, visualize, and search for spatial data within the computer-aided design tool. This enables both model authors, and consumers to utilize information inside the CAD tool which traditionally would have existed in separate software.",TRUE,research problem
R374,Urban Studies and Planning,R154694,Dynamic resource allocation optimization for digital twin-driven smart shopfloor,S619075,R154696,has research problem,R5007,digital twin,"Smart manufacturing is the core in the 4th industrial revolution. It is very important that how to realize the intelligent interaction between hardware and software in smart manufacturing. The paper proposes the architecture of Digital Twin-driven Smart ShopFloor (DTSF), as a contribution to the research of the research discussion about Digital Twin concept. Then the scheme for dynamic resource allocation optimization (DRAO) is designed for DTSF, as an application of the proposed architecture. Furthermore, a case study is given to illustrate the detailed method of DRAO. The experimental result shows that the proposed scheme is effective.",TRUE,research problem
R374,Urban Studies and Planning,R154698,A DIGITAL TWIN FOR ROOT CAUSE ANALYSIS AND PRODUCT QUALITY MONITORING,S619085,R154700,has research problem,R5007,digital twin,Mass customization and increasing product complexity require new methods to ensure a continuously high product quality. In the case of product failures it has to be determined what distinguishes flawed products. The data generated by cybertronic products over their lifecycle offers new possibilities to find such distinctions. To manage this data for individual product instances the concept of a Digital Twin has been proposed. This paper introduces the elements of a Digital Twin for root cause analysis and product quality monitoring and suggests a data structure that enables data analytics.,TRUE,research problem
R374,Urban Studies and Planning,R139878,A Conceptual Enterprise Architecture Framework for Smart Cities - A Survey Based Approach: ,S558368,R139880,has research problem,R139888,Enterprise architecture,"Enterprise architecture for smart cities is the focus of the research project “EADIC - (Developing an Enterprise Architecture for Digital Cities)” which is the context of the reported results in this work. We report in detail the results of a survey we conducted. Using these results we identify important quality and functional requirements for smart cities. Important quality properties include interoperability, usability, security, availability, recoverability and maintainability. We also observe business-related issues such as an apparent uncertainty on who is selling services, the lack of business plan in most cases and uncertainty in commercialization of services. At the software architecture domain we present a conceptual architectural framework based on architectural patterns which address the identified quality requirements. The conceptual framework can be used as a starting point for actual smart cities' projects.",TRUE,research problem
R374,Urban Studies and Planning,R139881,An Information Framework for Creating a Smart City Through Internet of Things,S558370,R139883,has research problem,R139889,Internet of Things,"Increasing population density in urban centers demands adequate provision of services and infrastructure to meet the needs of city inhabitants, encompassing residents, workers, and visitors. The utilization of information and communications technologies to achieve this objective presents an opportunity for the development of smart cities, where city management and citizens are given access to a wealth of real-time information about the urban environment upon which to base decisions, actions, and future planning. This paper presents a framework for the realization of smart cities through the Internet of Things (IoT). The framework encompasses the complete urban information system, from the sensory level and networking support structure through to data management and Cloud-based integration of respective systems and services, and forms a transformational part of the existing cyber-physical system. This IoT vision for a smart city is applied to a noise mapping case study to illustrate a new method for existing operations that can be adapted for the enhancement and delivery of important city services.",TRUE,research problem
R374,Urban Studies and Planning,R139875,Framework for Smart City Applications Based on Participatory Sensing,S558366,R139877,has research problem,R139887,Participatory Sensing,"Smart cities offer services to their inhabitants which make everyday life easier beyond providing a feedback channel to the city administration. For instance, a live timetable service for public transportation or real-time traffic jam notification can increase the efficiency of travel planning substantially. Traditionally, the implementation of these smart city services require the deployment of some costly sensing and tracking infrastructure. As an alternative, the crowd of inhabitants can be involved in data collection via their mobile devices. This emerging paradigm is called mobile crowd-sensing or participatory sensing. In this paper, we present our generic framework built upon XMPP (Extensible Messaging and Presence Protocol) for mobile participatory sensing based smart city applications. After giving a short description of this framework we show three use-case smart city application scenarios, namely a live transit feed service, a soccer intelligence agency service and a smart campus application, which are currently under development on top of our framework.",TRUE,research problem
R374,Urban Studies and Planning,R140879,Is Growth Obsolete?,S563931,R140881,has research problem,R141108,political economy,"A long decade ago economic growth was the reigning fashion of political economy. It was simultaneously the hottest subject of economic theory and research, a slogan eagerly claimed by politicians of all stripes, and a serious objective of the policies of governments. The climate of opinion has changed dramatically. Disillusioned critics indict both economic science and economic policy for blind obeisance to aggregate material ""progress,"" and for neglect of its costly side effects. Growth, it is charged, distorts national priorities, worsens the distribution of income, and irreparably damages the environment. Paul Erlich speaks for a multitude when he says, ""We must acquire a life style which has as its goal maximum freedom and happiness for the individual, not a maximum Gross National Product."" Growth was in an important sense a discovery of economics after the Second World War. Of course economic development has always been the grand theme of historically minded scholars of large mind and bold concept, notably Marx, Schumpeter, Kuznets. But the mainstream of economic analysis was not comfortable with phenomena of change and progress. The stationary state was the long-run equilibrium of classical and neoclassical theory, and comparison of alternative static equilibriums was the most powerful theoretical tool. Technological change and population increase were most readily accommodated as one-time exogenous shocks; comparative static analysis could be used to tell how they altered the equilibrium of the system. The obvious fact that these ""shocks"" were occurring continuously, never allowing the",TRUE,research problem
R374,Urban Studies and Planning,R141201,"Will the real smart city please stand up?: Intelligent, progressive or entrepreneurial?",S570653,R142080,has research problem,R139925,Smart cities,"Debates about the future of urban development in many Western countries have been increasingly influenced by discussions of smart cities. Yet despite numerous examples of this ‘urban labelling’ phenomenon, we know surprisingly little about so‐called smart cities, particularly in terms of what the label ideologically reveals as well as hides. Due to its lack of definitional precision, not to mention an underlying self‐congratulatory tendency, the main thrust of this article is to provide a preliminary critical polemic against some of the more rhetorical aspects of smart cities. The primary focus is on the labelling process adopted by some designated smart cities, with a view to problematizing a range of elements that supposedly characterize this new urban form, as well as question some of the underlying assumptions/contradictions hidden within the concept. To aid this critique, the article explores to what extent labelled smart cities can be understood as a high‐tech variation of the ‘entrepreneurial city’, as well as speculates on some general principles which would make them more progressive and inclusive.",TRUE,research problem
R374,Urban Studies and Planning,R141208,Smart Cities and Sustainability Models,S570623,R142078,has research problem,R139925,Smart cities,"In our age cities are complex systems and we can say systems of systems. Today locality is the result of using information and communication technologies in all departments of our life, but in future all cities must to use smart systems for improve quality of life and on the other hand for sustainable development. The smart systems make daily activities more easily, efficiently and represent a real support for sustainable city development. This paper analysis the sus-tainable development and identified the key elements of future smart cities.",TRUE,research problem
R374,Urban Studies and Planning,R141211,Smart Cities in Europe,S570608,R142077,has research problem,R139925,Smart cities,"Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the “smart city” has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the “smart city.” We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.",TRUE,research problem
R374,Urban Studies and Planning,R141218,Understanding Smart Cities: An Integrative Framework,S570578,R142075,has research problem,R139925,Smart cities,"Making a city ""smart"" is emerging as a strategy to mitigate the problems generated by the urban population growth and rapid urbanization. Yet little academic research has sparingly discussed the phenomenon. To close the gap in the literature about smart cities and in response to the increasing use of the concept, this paper proposes a framework to understand the concept of smart cities. Based on the exploration of a wide and extensive array of literature from various disciplinary areas we identify eight critical factors of smart city initiatives: management and organization, technology, governance, policy context, people and communities, economy, built infrastructure, and natural environment. These factors form the basis of an integrative framework that can be used to examine how local governments are envisioning smart city initiatives. The framework suggests directions and agendas for smart city research and outlines practical implications for government professionals.",TRUE,research problem
R374,Urban Studies and Planning,R141224,Smart cities in perspective – a comparative European study by means of self-organizing maps,S570547,R142073,has research problem,R139925,Smart cities,"Cities form the heart of a dynamic society. In an open space-economy cities have to mobilize all of their resources to remain attractive and competitive. Smart cities depend on creative and knowledge resources to maximize their innovation potential. This study offers a comparative analysis of nine European smart cities on the basis of an extensive database covering two time periods. After conducting a principal component analysis, a new approach, based on a self-organizing map analysis, is adopted to position the various cities under consideration according to their selected “smartness” performance indicators.",TRUE,research problem
R374,Urban Studies and Planning,R141227,Modelling the smart city performance,S570532,R142072,has research problem,R139925,Smart cities,"This paper aims to offer a profound analysis of the interrelations between smart city components connecting the cornerstones of the triple helix. The triple helix model has emerged as a reference framework for the analysis of knowledge-based innovation systems, and relates the multiple and reciprocal relationships between the three main agencies in the process of knowledge creation and capitalization: university, industry and government. This analysis of the triple helix will be augmented using the Analytic Network Process to model, cluster and begin measuring the performance of smart cities. The model obtained allows interactions and feedbacks within and between clusters, providing a process to derive ratio scales priorities from elements. This offers a more truthful and realistic representation for supporting policy-making. The application of this model is still to be developed, but a full list of indicators, available at urban level, has been identified and selected from literature review.",TRUE,research problem
R374,Urban Studies and Planning,R141230,Smart Ideas for Smart Cities: Investigating Crowdsourcing for Generating and Selecting Ideas for ICT Innovation in a City Context,S570503,R142067,has research problem,R139925,Smart cities,"Within this article, the strengths and weaknesses of crowdsourcing for idea generation and idea selection in the context of smart city innovation are investigated. First, smart cities are defined next to similar but different concepts such as digital cities, intelligent cities or ubiquitous cities. It is argued that the smart city-concept is in fact a more user-centered evolution of the other city-concepts which seem to be more technological deterministic in nature. The principles of crowdsourcing are explained and the different manifestations are demonstrated. By means of a case study, the generation of ideas for innovative uses of ICT for city innovation by citizens through an online platform is studied, as well as the selection process. For this selection, a crowdsourcing solution is compared to a selection made by external experts. The comparison of both indicates that using the crowd as gatekeeper and selector of innovative ideas yields a long list with high user benefits. However, the generation of ideas in itself appeared not to deliver extremely innovative ideas. Crowdsourcing thus appears to be a useful and effective tool in the context of smart city innovation, but should be thoughtfully used and combined with other user involvement approaches and within broader frameworks such as Living Labs.",TRUE,research problem
R374,Urban Studies and Planning,R141934,"Smart Cities: Definitions, Dimensions, Performance, and Initiatives",S570765,R141936,has research problem,R139925,Smart cities,"As the term “smart city” gains wider and wider currency, there is still confusion about what a smart city is, especially since several similar terms are often used interchangeably. This paper aims to clarify the meaning of the word “smart” in the context of cities through an approach based on an in-depth literature review of relevant studies as well as official documents of international institutions. It also identifies the main dimensions and elements characterizing a smart city. The different metrics of urban smartness are reviewed to show the need for a shared definition of what constitutes a smart city, what are its features, and how it performs in comparison to traditional cities. Furthermore, performance measures and initiatives in a few smart cities are identified.",TRUE,research problem
R374,Urban Studies and Planning,R141937,Mapping Dimensions of Governance in Smart Cities: Practitioners versus Prior Research,S570764,R141973,has research problem,R139925,Smart cities,"Many of the challenges to be faced by smart cities surpass the capacities, capabilities, and reaches of their traditional institutions and their classical processes of governing, and therefore new and innovative forms of governance are needed to meet these challenges. According to the network governance literature, governance models in public administrations can be categorized through the identification and analysis of some main dimensions that govern in the way of managing the city by governments. Based on prior research and on the perception of city practitioners in European smart cities, this paper seeks to analyze the relevance of main dimensions of governance models in smart cities. Results could shed some light regarding new future research on efficient patterns of governance models within smart cities.",TRUE,research problem
R374,Urban Studies and Planning,R141946,What makes a city smart? Identifying core components and proposing an integrative and comprehensive conceptualization,S570760,R141948,has research problem,R139925,Smart cities,"This study represents two critical steps forward in the area of smart city research and practice. The first is in the form of the development of a comprehensive conceptualization of smart city as a resource for researchers and government practition- ers; the second is in the form of the creation of a bridge between smart cities research and practice expertise. City governments increasingly need innovative arrangements to solve a variety of technical, physical, and social problems. ""Smart city"" could be used to represent efforts that in many ways describe a vision of a city, but there is little clarity about this new concept. This paper proposes a comprehensive conceptualization of smart city, including its main components and several specific elements. Academic literature is used to create a robust framework, while a review of practical tools is used to identify specific elements or aspects not treated in the academic studies, but essential to create an integrative and comprehensive conceptualization of smart city. The paper also provides policy implications and suggests areas for future research in this topic.",TRUE,research problem
R374,Urban Studies and Planning,R141949,Making smart cities work in the face of conflicts: lessons from practitioners of South Korea’s U-City projects,S570759,R141951,has research problem,R139925,Smart cities,"A common concern in relation to smart cities is how to turn the concept into reality. The aim of this research is to investigate the implementation process of smart cities based upon the experience of South Korea’s U-City projects. The research shows that poorly-managed conflicts during implementation can diminish the potential of smart cities and discourage future improvements. The nature of smart cities is based on the concept of governance, while the planning practice is still in the notion of government. In order to facilitate the collaborative practice, the research has shown that collaborative institutional arrangements and joint fact-finding processes might secure an integrated service delivery for smart cities by overcoming operational difficulties in real-life contexts.",TRUE,research problem
R374,Urban Studies and Planning,R141955,The economic value of smart city technology,S570757,R141957,has research problem,R139925,Smart cities,"1. Introduction Economy is the main determinant of smart city proposals, and a city with a significant level of economic competitiveness (Popescu, 2015a, b, c, d, e) has one of the features of a smart city. The economic consequences of the smart city proposals are business production, job generation, personnel development, and enhancement in the productivity. The enforcement of an ICT infrastructure is essential to a smart city's advancement and is contingent on several elements associated with its attainability and operation. (Chourabi et al., 2012) Smart city involvements are the end results of, and uncomfortably incorporated into, present social and spatial configurations of urban governance (Bratu, 2015) and the built setting: the smart city is put together gradually, integrated awkwardly into current arrangements of city administration and the reinforced environment. Smart cities are intrinsically distinguished, being geographically asymmetrical at a diversity of scales. Not all places of the city will be similarly smart: smart cities will favor some spaces, individuals, and undertakings over others. An essential component of the smart city is its capacity to further economic growth. (Shelton et al., 2015) 2. The Assemblage of Participants, Tenets and Technologies Related to Smart City Interventions The ""smart city"" notion has arisen from long-persisting opinions regarding urban technological idealistic schemes (Lazaroiu, 2013) and the absolutely competitive city. Smart cities are where novel technologies may be produced and the receptacles for technology, i.e. the goal of its utilizations. The contest to join this movement and become a smart city has stimulated city policymakers to endogenize the performance of technology-led growth (Lazaroiu, 2014a, b, c), leading municipal budgets toward financings that present smart city standing. The boundaries of the smart city are generated both by the lack of data utilizations that can handle shared and not separate solutions and by the incapacity to aim at indefinite features of cities that both enhance and blemish from the standard of urban existence for city inhabitants. Smart city technology financings are chiefly composed of ameliorations instead of genuine innovations, on the citizen consumer side. (Glasmeier and Christopherson, 2015) The notion of smart city as a method to improve the life standard of individuals has been achieving rising relevance in the calendars of policymakers. The amount of ""smart"" proposals initiated by a municipality can lead to an intermediate final product that indicates the endeavors made to augment the quality of existence of the citizens. The probabilities of a city raising its degree of smartness are contingent on several country-specific variables that outweigh its economic, technological and green advancement rate. Public administrations demand backing to organize the notion of the smartness of a city (Nica, 2015a, b, c, d), to encapsulate its ramifications, to establish standards at the global level, and to observe enhancement chances. (Neirotti et al., 2014) The growth of smart cities is assisting the rise of government employment of ICTs to enhance political involvement, enforce public schemes or supply public sphere services. There is no one way to becoming smart, and diverse cities have embraced distinct advances that indicate their specific circumstances. The administration of smart cities is dependent on elaborate arrangements of interdependent entities. (Rodriguez Bolivar, 2015) The association of smart (technology-enabled) solutions to satisfy the leading societal difficult tasks and the concentration on the city as the chief determinant of alteration bring about the notion of the ""smart city."" The rise of novel technologies to assess and interlink various facets of ordinary existence (""the internet of things"") is relevant in the progression towards a smart city. The latter is attempting to encourage and adjust innovations to the demands of their citizens (Pera, 2015a, b) by urging synergetic advancement of inventions with various stakeholders. …",TRUE,research problem
R374,Urban Studies and Planning,R141958,The ‘actually existing smart city’,S570756,R141960,has research problem,R139925,Smart cities,"This paper grounds the critique of the ‘smart city’ in its historical and geographical context. Adapting Brenner and Theodore’s notion of ‘actually existing neoliberalism’, we suggest a greater attention be paid to the ‘actually existing smart city’, rather than the exceptional or paradigmatic smart cities of Songdo, Masdar and Living PlanIT Valley. Through a closer analysis of cases in Louisville and Philadelphia, we demonstrate the utility of understanding the material effects of these policies in actual cities around the world, with a particular focus on how and from where these policies have arisen, and how they have unevenly impacted the places that have adopted them.",TRUE,research problem
R374,Urban Studies and Planning,R141961,Smart Cities at the Crossroads: New Tensions in City Transformation,S570755,R141963,has research problem,R139925,Smart cities,"The Smart Cities movement has produced a large number of projects and experiments around the world. To understand the primary ones, as well as their underlying tensions and the insights emerging from them, the editors of this special issue of the California Management Review enlisted a panel of experts, academics, and practitioners from different nationalities, backgrounds, experiences, and perspectives. The panel focused its discussion on three main areas: new governance models for Smart Cities, how to spur growth and renewal, and the sharing economy—both commons and market based.",TRUE,research problem
R374,Urban Studies and Planning,R141974,Smart Governance: Using a Literature Review and Empirical Analysis to Build a Research Model,S570751,R141976,has research problem,R139925,Smart cities,"The attention for Smart governance, a key aspect of Smart cities, is growing, but our conceptual understanding of it is still limited. This article fills this gap in our understanding by exploring the concept of Smart governance both theoretically and empirically and developing a research model of Smart governance. On the basis of a systematic review of the literature defining elements, aspired outcomes and implementation strategies are identified as key dimensions of Smart governance. Inductively, we identify various categories within these variables. The key dimensions were presented to a sample of representatives of European local governments to investigate the dominant perceptions of practitioners and to refine the categories. Our study results in a model for research into the implementation strategies, Smart governance arrangements, and outcomes of Smart governance.",TRUE,research problem
R374,Urban Studies and Planning,R141977,Smart citizens for smart cities: participating in the future,S570750,R141979,has research problem,R139925,Smart cities,"This paper discusses smart cities and raises critical questions about the faith being placed in technology to reduce carbon dioxide emissions. Given increasingly challenging carbon reduction targets, the role of information and communication technology and the digital economy are increasingly championed as offering potential to contribute to meeting these targets within cities and buildings. This paper questions the faith being placed in smart or intelligent solutions through asking, what role then for the ordinary citizen? The smart approach often appears to have a narrow view of how technology and user-engagement can sit together, viewing the behaviour of users as a hurdle to overcome rather than a resource to be utilised. This paper suggests lessons can be learnt from other disciplines and wider sustainable development policy that champions the role of citizens and user-engagement to harness the co-creation of knowledge, collaboration and empowerment. Specifically, empirical findings and observations a...",TRUE,research problem
R374,Urban Studies and Planning,R141980,Smart Cities Governance: The Need for a Holistic Approach to Assessing Urban Participatory Policy Making,S570749,R141982,has research problem,R139925,Smart cities,"Most of the definitions of a “smart city” make a direct or indirect reference to improving performance as one of the main objectives of initiatives to make cities “smarter”. Several evaluation approaches and models have been put forward in literature and practice to measure smart cities. However, they are often normative or limited to certain aspects of cities’ “smartness”, and a more comprehensive and holistic approach seems to be lacking. Thus, building on a review of the literature and practice in the field, this paper aims to discuss the importance of adopting a holistic approach to the assessment of smart city governance and policy decision making. It also proposes a performance assessment framework that overcomes the limitations of existing approaches and contributes to filling the current gap in the knowledge base in this domain. One of the innovative elements of the proposed framework is its holistic approach to policy evaluation. It is designed to address a smart city’s specificities and can benefit from the active participation of citizens in assessing the public value of policy decisions and their sustainability over time. We focus our attention on the performance measurement of codesign and coproduction by stakeholders and social innovation processes related to public value generation. More specifically, we are interested in the assessment of both the citizen centricity of smart city decision making and the processes by which public decisions are implemented, monitored, and evaluated as regards their capability to develop truly “blended” value services—that is, simultaneously socially inclusive, environmentally friendly, and economically sustainable.",TRUE,research problem
R374,Urban Studies and Planning,R141983,"Smart City Implementation Through Shared Vision of Social Innovation for Environmental Sustainability: A Case Study of Kitakyushu, Japan",S570748,R141985,has research problem,R139925,Smart cities,"Environmental sustainability is a critical global issue that requires comprehensive intervention policies. Viewed as localized intervention policy implementations, smart cities leverage information infrastructures and distributed renewable energy smart micro-grids, smart meters, and home/building energy management systems to reduce city-wide carbon emissions. However, theory-driven smart city implementation research is critically lacking. This theory-building case study identifies antecedent conditions necessary for implementing smart cities. We integrated resource dependence, social embeddedness, and citizen-centric e-governance theories to develop a citizen-centric social governance framework. We apply the framework to a field-based case study of Japan’s Kitakyushu smart community project to examine the validity and utility of the framework’s antecedent conditions: resource-dependent leadership network, cross-sector collaboration based on social ties, and citizen-centric e-governance. We conclude that complex smart community implementation processes require shared vision of social innovation owned by diverse stakeholders with conflicting values and adaptive use of informal social governance mechanisms for effective smart city implementation.",TRUE,research problem
R374,Urban Studies and Planning,R141989,Governing Smart Cities: An Empirical Analysis,S570746,R141991,has research problem,R139925,Smart cities,"Smart cities (SCs) are a recent but emerging phenomenon, aiming at using high technology and especially information and communications technology (ICT) to implement better living conditions in large metropolises, to involve citizens in city government, and to support sustainable economic development and city attractiveness. The final goal is to improve the quality of city life for all stakeholders. Until now, SCs have been developing as bottom-up projects, bringing together smart initiatives driven by public bodies, enterprises, citizens, and not-for-profit organizations. However, to build a long-term smart strategy capable of producing better returns from investments and deciding priorities regarding each city, a comprehensive SC governance framework is needed. The aim of this paper is to collect empirical evidences regarding government structures implemented in SCs and to outline a framework for the roles of local governments, nongovernmental agencies, and administrative officials. The survey shows that no consolidated standards or best practices for governing SCs are implemented in the examined cities; however, each city applies its own governance framework. Moreover, the study reveals some interesting experiences that may be useful for involving citizens and civil society in SC governance.",TRUE,research problem
R374,Urban Studies and Planning,R141998,How are citizens involved in smart cities? Analysing citizen participation in Japanese ``Smart Communities'',S570743,R142000,has research problem,R139925,Smart cities,"In recent years, ``smart cities'' have rapidly increased in discourses as well as in their real number, and raise various issues. While citizen engagement is a key element of most definitions of smart cities, information and communication technologies (ICTs) would also have great potential for facilitating public participation. However, scholars have highlighted that little research has focused on actual practices of citizen involvement in smart cities so far. In this respect, the authors analyse public participation in Japanese ``Smart Communities'', paying attention to both official discourses and actual practices. Smart Communities were selected in 2010 by the Japanese government which defines them as ``smart city'' projects and imposed criteria such as focus on energy issues, participation and lifestyle innovation. Drawing on analysis of official documents as well as on interviews with each of the four Smart Communities' stakeholders, the paper explains that very little input is expected from Japanese citizens. Instead, ICTs are used by municipalities and electric utilities to steer project participants and to change their behaviour. The objective of Smart Communities would not be to involve citizens in city governance, but rather to make them participate in the co-production of public services, mainly energy production and distribution.",TRUE,research problem
R374,Urban Studies and Planning,R142001,The ethics of smart cities and urban science,S570742,R142004,has research problem,R139925,Smart cities,"Software-enabled technologies and urban big data have become essential to the functioning of cities. Consequently, urban operational governance and city services are becoming highly responsive to a form of data-driven urbanism that is the key mode of production for smart cities. At the heart of data-driven urbanism is a computational understanding of city systems that reduces urban life to logic and calculative rules and procedures, which is underpinned by an instrumental rationality and realist epistemology. This rationality and epistemology are informed by and sustains urban science and urban informatics, which seek to make cities more knowable and controllable. This paper examines the forms, practices and ethics of smart cities and urban science, paying particular attention to: instrumental rationality and realist epistemology; privacy, datafication, dataveillance and geosurveillance; and data uses, such as social sorting and anticipatory governance. It argues that smart city initiatives and urban science need to be re-cast in three ways: a re-orientation in how cities are conceived; a reconfiguring of the underlying epistemology to openly recognize the contingent and relational nature of urban systems, processes and science; and the adoption of ethical principles designed to realize benefits of smart cities and urban science while reducing pernicious effects. This article is part of the themed issue ‘The ethical impact of data science’.",TRUE,research problem
R374,Urban Studies and Planning,R142005,Human limitations to introduction of smart cities: Comparative analysis from two CEE cities,S570741,R142007,has research problem,R139925,Smart cities,"Abstract Smart cities are a modern administrative/ developmental concept that tries to combine the development of urban areas with a higher level of citizens’ participation. However, there is a lack of understanding of the concept’s potential, due possibly to an unwillingness to accept a new form of relationship with the citizens. In this article, the willingness to introduce the elements of smart cities into two Central and Eastern European cities is tested. The results show that people are reluctant to use technology above the level of their needs and show little interest in participating in matters of governance, which prevents smart cities from developing in reality.",TRUE,research problem
R374,Urban Studies and Planning,R142008,"Speculative futures: Cities, data, and governance beyond smart urbanism",S570740,R142010,has research problem,R139925,Smart cities,"In this paper, I examine the convergence of big data and urban governance beyond the discursive and material contexts of the smart city. I argue that in addition to understanding the intensifying relationship between data, cities, and governance in terms of regimes of automated management and coordination in ‘actually existing’ smart cities, we should further engage with urban algorithmic governance and governmentality as material-discursive projects of future-ing, i.e., of anticipating particular kinds of cities-to-come. As urban big data looks to the future, it does so through the lens of an anticipatory security calculus fixated on identifying and diverting risks of urban anarchy and personal harm against which life in cities must be securitized. I suggest that such modes of algorithmic speculation are discernible at two scales of urban big data praxis: the scale of the body, and that of the city itself. At the level of the urbanite body, I use the selective example of mobile neighborhood safety apps to demonstrate how algorithmic governmentality enacts digital mediations of individual mobilities by routing individuals around ‘unsafe’ parts of the city in the interests of technologically ameliorating the risks of urban encounter. At the scale of the city, amongst other empirical examples, sentiment analytics approaches prefigure ephemeral spatialities of civic strife by aggregating and mapping individual emotions distilled from unstructured real-time content flows (such as Tweets). In both of these instances, the urban futures anticipated by the urban ‘big data security assemblage’ are highly uneven, as data and algorithms cannot divest themselves of urban inequalities and the persistence of their geographies.",TRUE,research problem
R374,Urban Studies and Planning,R142017,Governing the smart city: a review of the literature on smart urban governance,S570737,R142019,has research problem,R139925,Smart cities,"Academic attention to smart cities and their governance is growing rapidly, but the fragmentation in approaches makes for a confusing debate. This article brings some structure to the debate by analyzing a corpus of 51 publications and mapping their variation. The analysis shows that publications differ in their emphasis on (1) smart technology, smart people or smart collaboration as the defining features of smart cities, (2) a transformative or incremental perspective on changes in urban governance, (3) better outcomes or a more open process as the legitimacy claim for smart city governance. We argue for a comprehensive perspective: smart city governance is about crafting new forms of human collaboration through the use of ICTs to obtain better outcomes and more open governance processes. Research into smart city governance could benefit from previous studies into success and failure factors for e-government and build upon sophisticated theories of socio-technical change. This article highlights that smart city governance is not a technological issue: we should study smart city governance as a complex process of institutional change and acknowledge the political nature of appealing visions of socio-technical governance. Points for practitioners The study provides practitioners with an in-depth understanding of current debates about smart city governance. The article highlights that governing a smart city is about crafting new forms of human collaboration through the use of information and communication technologies. City managers should realize that technology by itself will not make a city smarter: building a smart city requires a political understanding of technology, a process approach to manage the emerging smart city and a focus on both economic gains and other public values.",TRUE,research problem
R374,Urban Studies and Planning,R142020,"Smart City Research: Contextual Conditions, Governance Models, and Public Value Assessment",S570736,R142022,has research problem,R139925,Smart cities,"There are three issues that are crucial to advancing our academic understanding of smart cities: (1) contextual conditions, (2) governance models, and (3) the assessment of public value. A brief review of recent literature and the analysis of the included papers provide support for the assumption that cities cannot simply copy good practices but must develop approaches that fit their own situation ( contingency) and concord with their own organization in terms of broader strategies, human resource policies, information policies, and so on ( configuration). A variety of insights into the mechanisms and building blocks of smart city practices are presented, and issues for further research are identified.",TRUE,research problem
R374,Urban Studies and Planning,R142035,Co-Governing Smart Cities Through Living Labs. Top Evidences From EU,S570731,R142037,has research problem,R139925,Smart cities,"Our purpose is to identify the relevance of participative governance in urban areas characterized by smart cities projects, especially those implementing Living Labs initiatives as real-life settings to develop services innovation and enhance engagement of all urban stakeholders. A research on the three top smart cities in Europe – i.e. Amsterdam, Barcelona and Helsinki – is proposed through a content analysis with NVivo on the offi cial documents issued by the project partners (2012-2015) to investigate their Living Lab initiatives. The results show the increasing usefulness of Living Labs for the development of more inclusive smart cities projects in which public and private actors, and people, collaborate in innovation processes and governance for the co-creation of new services, underlining the importance of the open and ecosystem-oriented approach for smart cities.",TRUE,research problem
R374,Urban Studies and Planning,R142756,Smart City Ontologies: Improving the effectiveness of smart city applications,S579614,R144786,has research problem,R142714,Smart city ontology,"This paper addresses the problem of low impact of smart city applications observed in the fields of energy and transport, which constitute high-priority domains for the development of smart cities. However, these are not the only fields where the impact of smart cities has been limited. The paper provides an explanation for the low impact of various individual applications of smart cities and discusses ways of improving their effectiveness. We argue that the impact of applications depends primarily on their ontology, and secondarily on smart technology and programming features. Consequently, we start by creating an overall ontology for the smart city, defining the building blocks of this ontology with respect to the most cited definitions of smart cities, and structuring this ontology with the Protégé 5.0 editor, defining entities, class hierarchy, object properties, and data type properties. We then analyze how the ontologies of a sample of smart city applications fit into the overall Smart City Ontology, the consistency between digital spaces, knowledge processes, city domains targeted by the applications, and the types of innovation that determine their impact. In conclusion, we underline the relationships between innovation and ontology, and discuss how we can improve the effectiveness of smart city applications, combining expert and user-driven ontology design with the integration and orchestration of applications over platforms and larger city entities such as neighborhoods, districts, clusters, and sectors of city activities.",TRUE,research problem
,electrical engineering,R145549,Solution-processed high-performance p-channel copper tin sulfide thin-film transistors,S582828,R145550,has research problem,R145516,Performance of thin-film transistors,"We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.",TRUE,research problem
R133,Artificial Intelligence,R76338,SemEval-2020 Task 6: Definition Extraction from Free Text with the DEFT Corpus,S349225,R76340,description,L249506,a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language,"Research on definition extraction has been conducted for well over a decade, largely with significant constraints on the type of definitions considered. In this work, we present DeftEval, a SemEval shared task in which participants must extract definitions from free text using a term-definition pair corpus that reflects the complex reality of definitions in natural language. Definitions and glosses in free text often appear without explicit indicators, across sentences boundaries, or in an otherwise complex linguistic manner. DeftEval involved 3 distinct subtasks: 1) Sentence classification, 2) sequence labeling, and 3) relation extraction.",TRUE,sentence
R133,Artificial Intelligence,R140992,Overview of the MEDIQA 2021 Shared Task on Summarization in the Medical Domain,S563220,R140994,description,L395262,"addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings","The MEDIQA 2021 shared tasks at the BioNLP 2021 workshop addressed three tasks on summarization for medical text: (i) a question summarization task aimed at exploring new approaches to understanding complex real-world consumer health queries, (ii) a multi-answer summarization task that targeted aggregation of multiple relevant answers to a biomedical question into one concise and relevant answer, and (iii) a radiology report summarization task addressing the development of clinically relevant impressions from radiology report findings. Thirty-five teams participated in these shared tasks with sixteen working notes submitted (fifteen accepted) describing a wide variety of models developed and tested on the shared and external datasets. In this paper, we describe the tasks, the datasets, the models and techniques developed by various teams, the results of the evaluation, and a study of correlations among various summarization evaluation measures. We hope that these shared tasks will bring new research and insights in biomedical text summarization and evaluation.",TRUE,sentence
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329511,R69391,Material,R69410,"convNet, LSTM, and bidirectional LSTM with/without attention","A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,sentence
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329508,R69391,Material,R69407,deep learning model called sAtt-BLSTM convNet,"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,sentence
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S346648,R75787,Subtask 1,R75796,Determine whether a given sentence is a counterfactual statement or not,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,sentence
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5334,R4863,method,R4869,dynamics preceding the creation of new topics,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,sentence
R133,Artificial Intelligence,R181000,"Image-Based Food Calorie Estimation Using Knowledge on Food Categories, Ingredients and Cooking Directions",S703010,R181002,Method,R181006,"estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning","Image-based food calorie estimation is crucial to diverse mobile applications for recording everyday meal. However, some of them need human help for calorie estimation, and even if it is automatic, food categories are often limited or images from multiple viewpoints are required. Then, it is not yet achieved to estimate food calorie with practical accuracy and estimating food calories from a food photo is an unsolved problem. Therefore, in this paper, we propose estimating food calorie from a food photo by simultaneous learning of food calories, categories, ingredients and cooking directions using deep learning. Since there exists a strong correlation between food calories and food categories, ingredients and cooking directions information in general, we expect that simultaneous training of them brings performance boosting compared to independent single training. To this end, we use a multi-task CNN [1]. In addition, in this research, we construct two kinds of datasets that is a dataset of calorie-annotated recipe collected from Japanese recipe sites on the Web and a dataset collected from an American recipe site. In this experiment, we trained multi-task and single-task CNNs. As a result, the multi-task CNN achieved the better performance on both food category estimation and food calorie estimation than single-task CNNs. For the Japanese recipe dataset, by introducing a multi-task CNN, 0.039 were improved on the correlation coefficient, while for the American recipe dataset, 0.090 were raised compared to the result by the single-task CNN.",TRUE,sentence
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S346640,R75787,Subtask 2,R75788,Extract the antecedent and consequent in a given counterfactual statement,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,sentence
R133,Artificial Intelligence,R140616,SemEval-2010 Task 2: Cross-Lingual Lexical Substitution,S561567,R140618,description,L394197,"given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish","In this paper we describe the SemEval-2010 Cross-Lingual Lexical Substitution task, where given an English target word in context, participating systems had to find an alternative substitute word or phrase in Spanish. The task is based on the English Lexical Substitution task run at SemEval-2007. In this paper we provide background and motivation for the task, we describe the data annotation process and the scoring system, and present the results of the participating systems.",TRUE,sentence
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329509,R69391,Material,R69408,hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet),"A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,sentence
R133,Artificial Intelligence,R140948,Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas,S563145,R140950,description,L395229,Open Machine Translation for Indigenous Languages of the Americas,"This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to considerably improve over the baseline. The best performing systems achieved 12.97 ChrF higher than baseline, when averaged across languages.",TRUE,sentence
R133,Artificial Intelligence,R4857,How are topics born? Understanding the research dynamics preceding the emergence of new areas,S5336,R4863,method,R4871,pace of collaboration between relevant research areas,"The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.",TRUE,sentence
R133,Artificial Intelligence,R69387,Sarcasm Detection Using Soft Attention-Based Bidirectional Long Short-Term Memory Model With Convolution Network,S329503,R69391,Process,R69402,"sarcasm, stance, rumor, and hate speech detection","A large community of research has been developed in recent years to analyze social media and social networks, with the aim of understanding, discovering insights, and exploiting the available information. The focus has shifted from conventional polarity classification to contemporary application-oriented fine-grained aspects such as, emotions, sarcasm, stance, rumor, and hate speech detection in the user-generated content. Detecting a sarcastic tone in natural language hinders the performance of sentiment analysis tasks. The majority of the studies on automatic sarcasm detection emphasize on the use of lexical, syntactic, or pragmatic features that are often unequivocally expressed through figurative literary devices such as words, emoticons, and exclamation marks. In this paper, we propose a deep learning model called sAtt-BLSTM convNet that is based on the hybrid of soft attention-based bidirectional long short-term memory (sAtt-BLSTM) and convolution neural network (convNet) applying global vectors for word representation (GLoVe) for building semantic word embeddings. In addition to the feature maps generated by the sAtt-BLSTM, punctuation-based auxiliary features are also merged into the convNet. The robustness of the proposed model is investigated using balanced (tweets from benchmark SemEval 2015 Task 11) and unbalanced (approximately 40000 random tweets using the Sarcasm Detector tool with 15000 sarcastic and 25000 non-sarcastic messages) datasets. An experimental study using the training- and test-set accuracy metrics is performed to compare the proposed deep neural model with convNet, LSTM, and bidirectional LSTM with/without attention and it is observed that the novel sAtt-BLSTM convNet model outperforms others with a superior sarcasm-classification accuracy of 97.87% for the Twitter dataset and 93.71% for the random-tweet dataset.",TRUE,sentence
R133,Artificial Intelligence,R141030,*SEM 2013 shared task: Semantic Textual Similarity,S581387,R145247,description,L406284,"tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location.","In Semantic Textual Similarity (STS), systems rate the degree of semantic equivalence, on a graded scale from 0 to 5, with 5 being the most similar. This year we set up two tasks: (i) a core task (CORE), and (ii) a typed-similarity task (TYPED). CORE is similar in set up to SemEval STS 2012 task with pairs of sentences from sources related to those of 2012, yet different in genre from the 2012 set, namely, this year we included newswire headlines, machine translation evaluation datasets and multiple lexical resource glossed sets. TYPED, on the other hand, is novel and tries to characterize why two items are deemed similar, using cultural heritage items which are described with metadata such as title, author or description. Several types of similarity have been defined, including similar author, similar time period or similar location. The annotation for both tasks leverages crowdsourcing, with relative high interannotator correlation, ranging from 62% to 87%. The CORE task attracted 34 participants with 89 runs, and the TYPED task attracted 6 teams with 14 runs.",TRUE,sentence
R133,Artificial Intelligence,R140867,SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA,S562969,R140869,description,L395141,"Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French","We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found in the task’s website https://competitions.codalab.org/competitions/19160.",TRUE,sentence
R133,Artificial Intelligence,R146699,A Markov-Switching Model Approach to Heart Sound Segmentation and Classification,S587366,R146704,Contribution description,L409134,We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events.,"Objective: We consider challenges in accurate segmentation of heart sound signals recorded under noisy clinical environments for subsequent classification of pathological events. Existing state-of-the-art solutions to heart sound segmentation use probabilistic models such as hidden Markov models (HMMs), which, however, are limited by its observation independence assumption and rely on pre-extraction of noise-robust features. Methods: We propose a Markov-switching autoregressive (MSAR) process to model the raw heart sound signals directly, which allows efficient segmentation of the cyclical heart sound states according to the distinct dependence structure in each state. To enhance robustness, we extend the MSAR model to a switching linear dynamic system (SLDS) that jointly model both the switching AR dynamics of underlying heart sound signals and the noise effects. We introduce a novel algorithm via fusion of switching Kalman filter and the duration-dependent Viterbi algorithm, which incorporates the duration of heart sound states to improve state decoding. Results: Evaluated on Physionet/CinC Challenge 2016 dataset, the proposed MSAR-SLDS approach significantly outperforms the hidden semi-Markov model (HSMM) in heart sound segmentation based on raw signals and comparable to a feature-based HSMM. The segmented labels were then used to train Gaussian-mixture HMM classifier for identification of abnormal beats, achieving high average precision of 86.1% on the same dataset including very noisy recordings. Conclusion: The proposed approach shows noticeable performance in heart sound segmentation and classification on a large noisy dataset. 
Significance: It is potentially useful in developing automated heart monitoring systems for pre-screening of heart pathologies.",TRUE,sentence
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342950,R74654,Material,R74658,key essential and oncogenic signalling pathways,"The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,sentence
R14,Biochemistry,R74652,Dynamic Impacts of the Inhibition of the Molecular Chaperone Hsp90 on the T-Cell Proteome Have Implications for Anti-Cancer Therapy,S342961,R74654,Method,R74669,"novel pulse-chase strategy (Fierro-Monti et al., accompanying article)","The molecular chaperone Hsp90-dependent proteome represents a complex protein network of critical biological and medical relevance. Known to associate with proteins with a broad variety of functions termed clients, Hsp90 maintains key essential and oncogenic signalling pathways. Consequently, Hsp90 inhibitors are being tested as anti-cancer drugs. Using an integrated systematic approach to analyse the effects of Hsp90 inhibition in T-cells, we quantified differential changes in the Hsp90-dependent proteome, Hsp90 interactome, and a selection of the transcriptome. Kinetic behaviours in the Hsp90-dependent proteome were assessed using a novel pulse-chase strategy (Fierro-Monti et al., accompanying article), detecting effects on both protein stability and synthesis. Global and specific dynamic impacts, including proteostatic responses, are due to direct inhibition of Hsp90 as well as indirect effects. As a result, a decrease was detected in most proteins that changed their levels, including known Hsp90 clients. Most likely, consequences of the role of Hsp90 in gene expression determined a global reduction in net de novo protein synthesis. This decrease appeared to be greater in magnitude than a concomitantly observed global increase in protein decay rates. Several novel putative Hsp90 clients were validated, and interestingly, protein families with critical functions, particularly the Hsp90 family and cofactors themselves as well as protein kinases, displayed strongly increased decay rates due to Hsp90 inhibitor treatment. Remarkably, an upsurge in survival pathways, involving molecular chaperones and several oncoproteins, and decreased levels of some tumour suppressors, have implications for anti-cancer therapy with Hsp90 inhibitors. The diversity of global effects may represent a paradigm of mechanisms that are operating to shield cells from proteotoxic stress, by promoting pro-survival and anti-proliferative functions. Data are available via ProteomeXchange with identifier PXD000537.",TRUE,sentence
R104,Bioinformatics,R168521,Chaste: An Open Source C++ Library for Computational Physiology and Biology,S668333,R168524,creates,R166933,"Cancer, Heart And Soft Tissue Environment","Chaste — Cancer, Heart And Soft Tissue Environment — is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to ‘re-invent the wheel’ with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.",TRUE,sentence
R104,Bioinformatics,R135489,Identification of Leukemia Subtypes from Microscopic Images Using Convolutional Neural Network,S535868,R135491,description,L377983,"Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set","Leukemia is a fatal cancer and has two main types: Acute and chronic. Each type has two more subtypes: Lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which requires a large training data set. Therefore, we also investigated the effects of data augmentation for an increasing number of training samples synthetically. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. Besides, we also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a set of experiments and used 5-fold cross-validation. The results we obtained from experiments showed that our CNN model performance has 88.25% and 81.74% accuracy, in leukemia versus healthy and multi-class classification of all subtypes, respectively. Finally, we also showed that the CNN model has a better performance than other well-known machine learning algorithms.",TRUE,sentence
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5646,R5119,Material,R5128,our laboratory information system and research infrastructure,"The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,sentence
R104,Bioinformatics,R168633,PEPIS: A Pipeline for Estimating Epistatic Effects in Quantitative Trait Locus Mapping and Genome-Wide Association Studies,S668764,R168634,creates,R167006,Pipeline for estimating EPIStatic genetic effects,"The term epistasis refers to interactions between multiple genetic loci. Genetic epistasis is important in regulating biological function and is considered to explain part of the ‘missing heritability,’ which involves marginal genetic effects that cannot be accounted for in genome-wide association studies. Thus, the study of epistasis is of great interest to geneticists. However, estimating epistatic effects for quantitative traits is challenging due to the large number of interaction effects that must be estimated, thus significantly increasing computing demands. Here, we present a new web server-based tool, the Pipeline for estimating EPIStatic genetic effects (PEPIS), for analyzing polygenic epistatic effects. The PEPIS software package is based on a new linear mixed model that has been used to predict the performance of hybrid rice. The PEPIS includes two main sub-pipelines: the first for kinship matrix calculation, and the second for polygenic component analyses and genome scanning for main and epistatic effects. To accommodate the demand for high-performance computation, the PEPIS utilizes C/C++ for mathematical matrix computing. In addition, the modules for kinship matrix calculations and main and epistatic-effect genome scanning employ parallel computing technology that effectively utilizes multiple computer nodes across our networked cluster, thus significantly improving the computational speed. For example, when analyzing the same immortalized F2 rice population genotypic data examined in a previous study, the PEPIS returned identical results at each analysis step with the original prototype R code, but the computational time was reduced from more than one month to about five minutes. These advances will help overcome the bottleneck frequently encountered in genome wide epistatic genetic effect analysis and enable accommodation of the high computational demand. The PEPIS is publically available at http://bioinfo.noble.org/PolyGenic_QTL/.",TRUE,sentence
R104,Bioinformatics,R168614,PSAMM: A Portable System for the Analysis of Metabolic Models,S668689,R168615,creates,R166994,Portable System for the Analysis of Metabolic Models,"The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies.",TRUE,sentence
R104,Bioinformatics,R169626,Integrated Analysis and Visualization of Group Differences in Structural and Functional Brain Connectivity: Applications in Typical Ageing and Schizophrenia,S673305,R169631,uses,R167657,Statistical Analysis of Minimum cost path based Structural Connectivity,"Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, “bi-modal comparison plots” show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using “worm plots”. Group differences in connections are examined with an existing visualization, the “connectogram”. These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion weighted images and two from functional magnetic resonance images. The structural measures were minimum cost path between two anatomical regions according to the “Statistical Analysis of Minimum cost path based Structural Connectivity” method and the average fractional anisotropy along the fiber. The functional measures were Pearson’s correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level. Only fractional anisotropy and mean correlation showed regional differences. The presented visualizations were helpful in comparing and evaluating connectivity measures on multiple levels in both studies.",TRUE,sentence
R104,Bioinformatics,R5107,Implementing LOINC – Current Status and Ongoing Work at a Medical University,S5639,R5119,Data,R5121,"The Logical Observation Identifiers, Names and Codes (LOINC)","The Logical Observation Identifiers, Names and Codes (LOINC) is a common terminology used for standardizing laboratory terms. Within the consortium of the HiGHmed project, LOINC is one of the central terminologies used for health data sharing across all university sites. Therefore, linking the LOINC codes to the site-specific tests and measures is one crucial step to reach this goal. In this work we report our ongoing efforts in implementing LOINC to our laboratory information system and research infrastructure, as well as our challenges and the lessons learned. 407 local terms could be mapped to 376 LOINC codes of which 209 are already available to routine laboratory data. In our experience, mapping of local terms to LOINC is a widely manual and time consuming process for reasons of language and expert knowledge of local laboratory procedures.",TRUE,sentence
R104,Bioinformatics,R139024,X-A-BiLSTM: a Deep Learning Approach for Depression Detection in Imbalanced Data,S552408,R139026,Data,R139027,The Reddit Self-reported Depression Diagnosis (RSDD) dataset,"An increasing number of people suffering from mental health conditions resort to online resources (specialized websites, social media, etc.) to share their feelings. Early depression detection using social media data through deep learning models can help to change life trajectories and save lives. But the accuracy of these models was not satisfying due to the real-world imbalanced data distributions. To tackle this problem, we propose a deep learning model (X-A-BiLSTM) for depression detection in imbalanced social media data. The X-A-BiLSTM model consists of two essential components: the first one is XGBoost, which is used to reduce data imbalance; and the second one is an Attention-BiLSTM neural network, which enhances classification capacity. The Reddit Self-reported Depression Diagnosis (RSDD) dataset was chosen, which included approximately 9,000 users who claimed to have been diagnosed with depression (“diagnosed users”) and approximately 107,000 matched control users. Results demonstrate that our approach significantly outperforms the previous state-of-the-art models on the RSDD dataset.",TRUE,sentence
R104,Bioinformatics,R148050,Tagging gene and protein names in biomedical text,S593729,R148052,description,L412869,We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies,"MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.",TRUE,sentence
R104,Bioinformatics,R148050,Tagging gene and protein names in biomedical text,S593728,R148052,description,L412868,"We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation","MOTIVATION The MEDLINE database of biomedical abstracts contains scientific knowledge about thousands of interacting genes and proteins. Automated text processing can aid in the comprehension and synthesis of this valuable information. The fundamental task of identifying gene and protein names is a necessary first step towards making full use of the information encoded in biomedical text. This remains a challenging task due to the irregularities and ambiguities in gene and protein nomenclature. We propose to approach the detection of gene and protein names in scientific abstracts as part-of-speech tagging, the most basic form of linguistic corpus annotation. RESULTS We present a method for tagging gene and protein names in biomedical text using a combination of statistical and knowledge-based strategies. This method incorporates automatically generated rules from a transformation-based part-of-speech tagger, and manually generated rules from morphological clues, low frequency trigrams, indicator terms, suffixes and part-of-speech information. Results of an experiment on a test corpus of 56K MEDLINE documents demonstrate that our method to extract gene and protein names can be applied to large sets of MEDLINE abstracts, without the need for special conditions or human experts to predetermine relevant subsets. AVAILABILITY The programs are available on request from the authors.",TRUE,sentence
R122,Chemistry,R46117,Self-Doped Ti3+ Enhanced Photocatalyst for Hydrogen Production under Visible Light,S140490,R46118,visible-light driven photocatalysis,L86316,high visible-light photocatalytic activity for the generation of hydrogen gas from water,"Through a facile one-step combustion method, partially reduced TiO(2) has been synthesized. Electron paramagnetic resonance (EPR) spectra confirm the presence of Ti(3+) in the bulk of an as-prepared sample. The UV-vis spectra show that the Ti(3+) here extends the photoresponse of TiO(2) from the UV to the visible light region, which leads to high visible-light photocatalytic activity for the generation of hydrogen gas from water. It is worth noting that the Ti(3+) sites in the sample are highly stable in air or water under irradiation and the photocatalyst can be repeatedly used without degradation in the activity.",TRUE,sentence
R225,Civil Engineering,R5138,A Graph Based Tool for Modelling Planning Processes in Building Engineering,S5686,R5144,method,R5164,the research project “Relation Based Process Modelling,"The planning process of a building is very complex. Many participants with different technical disciplines are involved and work on certain tasks. To manage the planning process the project leader has to organize participants, tasks and building data. For this purpose modern information and communication technologies can be used very efficiently. But these technologies require a formal description of the planning process. Within the research project “Relation Based Process Modelling of Co-operative Building Planning” we have defined a consistent mathematical process model for planning processes and have developed a prototype implementation of an application for modelling these processes. Our project is embedded in the priority program 1103 “Network-based Co-operative Planning Processes in Structural Engineering” promoted by the German Research Foundation (DFG). In this paper we present the mathematical concept of our relational process model and the tool for building up the model and checking the structural consistency and correctness.",TRUE,sentence
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502212,R110144,Material,R110150,"five regions of interest (left eye, right eye, eye region, face and screen)","Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,sentence
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502213,R110144,has answer to research question,R110151,"greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli.","Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,sentence
R346,Cognition and Perception,R110142,Do Animals Engage Greater Social Attention in Autism? An Eye Tracking Analysis,S502211,R110144,Material,R110149,static images of human and animal faces,"Background Visual atypicalities in autism spectrum disorder (ASD) are a well documented phenomenon, beginning as early as 2–6 months of age and manifesting in a significantly decreased attention to the eyes, direct gaze and socially salient information. Early emerging neurobiological deficits in perceiving social stimuli as rewarding or its active avoidance due to the anxiety it entails have been widely purported as potential reasons for this atypicality. Parallel research evidence also points to the significant benefits of animal presence for reducing social anxiety and enhancing social interaction in children with autism. While atypicality in social attention in ASD has been widely substantiated, whether this atypicality persists equally across species types or is confined to humans has not been a key focus of research insofar. Methods We attempted a comprehensive examination of the differences in visual attention to static images of human and animal faces (40 images; 20 human faces and 20 animal faces) among children with ASD using an eye tracking paradigm. 44 children (ASD n = 21; TD n = 23) participated in the study (10,362 valid observations) across five regions of interest (left eye, right eye, eye region, face and screen). Results Results obtained revealed significantly greater social attention across human and animal stimuli in typical controls when compared to children with ASD. However in children with ASD, a significantly greater attention allocation was seen to animal faces and eye region and lesser attention to the animal mouth when compared to human faces, indicative of a clear attentional preference to socially salient regions of animal stimuli. The positive attentional bias toward animals was also seen in terms of a significantly greater visual attention to direct gaze in animal images. Conclusion Our results suggest the possibility that atypicalities in social attention in ASD may not be uniform across species. It adds to the current neural and biomarker evidence base of the potentially greater social reward processing and lesser social anxiety underlying animal stimuli as compared to human stimuli in children with ASD.",TRUE,sentence
R111778,Communication Neuroscience,R111716,Engaged listeners: shared neural processing of powerful political speeches,S508243,R111718,result,R111720,"alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex","Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages.",TRUE,sentence
R111778,Communication Neuroscience,R136437,Content Matters: Neuroimaging Investigation of Brain and Behavioral Impact of Televised Anti-Tobacco Public Service Announcements,S539980,R136439,Has result,L380137,"AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex","Televised public service announcements are video ads that are a key component of public health campaigns against smoking. Understanding the neurophysiological correlates of anti-tobacco ads is an important step toward novel objective methods of their evaluation and design. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate the brain and behavioral effects of the interaction between content (“argument strength,” AS) and format (“message sensation value,” MSV) of anti-smoking ads in humans. Seventy-one nontreatment-seeking smokers viewed a sequence of 16 high or 16 low AS ads during an fMRI scan. Dependent variables were brain fMRI signal, the immediate recall of the ads, the immediate change in intentions to quit smoking, and the urine levels of a major nicotine metabolite cotinine at a 1 month follow-up. Whole-brain ANOVA revealed that AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex (dMPFC). Regression analysis showed that the activation in the dMPFC predicted the urine cotinine levels 1 month later. These results characterize the key brain regions engaged in the processing of persuasive communications and suggest that brain fMRI response to anti-smoking ads could predict subsequent smoking severity in nontreatment-seeking smokers. Our findings demonstrate the importance of the quality of content for objective ad outcomes and suggest that fMRI investigation may aid the prerelease evaluation of televised public health ads.",TRUE,sentence
R111778,Communication Neuroscience,R111723,Neural Correlates of Risk Perception during Real-Life Risk Communication,S508268,R111725,Has result,R111728,enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate,"During global health crises, such as the recent H1N1 pandemic, the mass media provide the public with timely information regarding risk. To obtain new insights into how these messages are received, we measured neural data while participants, who differed in their preexisting H1N1 risk perceptions, viewed a TV report about H1N1. Intersubject correlation (ISC) of neural time courses was used to assess how similarly the brains of viewers responded to the TV report. We found enhanced intersubject correlations among viewers with high-risk perception in the anterior cingulate, a region which classical fMRI studies associated with the appraisal of threatening information. By contrast, neural coupling in sensory-perceptual regions was similar for the high and low H1N1-risk perception groups. These results demonstrate a novel methodology for understanding how real-life health messages are processed in the human brain, with particular emphasis on the role of emotion and differences in risk perceptions.",TRUE,sentence
R111778,Communication Neuroscience,R136437,Content Matters: Neuroimaging Investigation of Brain and Behavioral Impact of Televised Anti-Tobacco Public Service Announcements,S540027,R136439,Main_manipulation_of_interest,L380156,"interaction between content (“argument strength,” AS) and format (“message sensation value,” MSV) of anti-smoking ads","Televised public service announcements are video ads that are a key component of public health campaigns against smoking. Understanding the neurophysiological correlates of anti-tobacco ads is an important step toward novel objective methods of their evaluation and design. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate the brain and behavioral effects of the interaction between content (“argument strength,” AS) and format (“message sensation value,” MSV) of anti-smoking ads in humans. Seventy-one nontreatment-seeking smokers viewed a sequence of 16 high or 16 low AS ads during an fMRI scan. Dependent variables were brain fMRI signal, the immediate recall of the ads, the immediate change in intentions to quit smoking, and the urine levels of a major nicotine metabolite cotinine at a 1 month follow-up. Whole-brain ANOVA revealed that AS and MSV interacted in the inferior frontal, inferior parietal, and fusiform gyri; the precuneus; and the dorsomedial prefrontal cortex (dMPFC). Regression analysis showed that the activation in the dMPFC predicted the urine cotinine levels 1 month later. These results characterize the key brain regions engaged in the processing of persuasive communications and suggest that brain fMRI response to anti-smoking ads could predict subsequent smoking severity in nontreatment-seeking smokers. Our findings demonstrate the importance of the quality of content for objective ad outcomes and suggest that fMRI investigation may aid the prerelease evaluation of televised public health ads.",TRUE,sentence
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690014,R172944,Result,R172952," emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role","Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,sentence
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690008,R172944,Process,R172946,“feeling left out”,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,sentence
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690016,R172944,Result,R172954,being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,sentence
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690013,R172944,Result,R172951,"material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication","Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,sentence
R288,Communication Sciences,R172941,Feeling Left Out: Underserved Audiences in Science Communication,S690015,R172944,Result,R172953,simply addressing material aspects can only be part of establishing more inclusive science communication practices,"Science communication only reaches certain segments of society. Various underserved audiences are detached from it and feel left out, which is a challenge for democratic societies that build on informed participation in deliberative processes. While only recently researchers and practitioners have addressed the question on the detailed composition of the not reached groups, even less is known about the emotional impact on underserved audiences: feelings and emotions can play an important role in how science communication is received, and “feeling left out” can be an important aspect of exclusion. In this exploratory study, we provide insights from interviews and focus groups with three different underserved audiences in Germany. We found that on the one hand, material exclusion factors such as available infrastructure or financial means as well as specifically attributable factors such as language skills, are influencing the audience composition of science communication. On the other hand, emotional exclusion factors such as fear, habitual distance, and self- as well as outside-perception also play an important role. Therefore, simply addressing material aspects can only be part of establishing more inclusive science communication practices. Rather, being aware of emotions and feelings can serve as a point of leverage for science communication in reaching out to underserved audiences.",TRUE,sentence
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5371,R4893,method,R4901,a neural network architecture equipped with copy actions,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,sentence
R277,Computational Engineering,R4884,"Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata",S5366,R4893,Data,R4896,single-sentence and comprehensible textual summaries from Wikidata,"While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphological rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.",TRUE,sentence
R322,Computational Linguistics,R148032,MedTag: A Collection of Biomedical Annotations,S603621,R148034,description,L417841,"MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model.","We present a database of annotated biomedical text corpora merged into a portable data structure with uniform conventions. MedTag combines three corpora, MedPost, ABGene and GENETAG, within a common relational database data model. The GENETAG corpus has been modified to reflect new definitions of genes and proteins. The MedPost corpus has been updated to include 1,000 additional sentences from the clinical medicine domain. All data have been updated with original MEDLINE text excerpts, PubMed identifiers, and tokenization independence to facilitate data accuracy, consistency and usability. The data are available in flat files along with software to facilitate loading the data into a relational SQL database from ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedTag/medtag.tar.gz.",TRUE,sentence
R231,Computer and Systems Architecture,R175453,A hierarchical approach to interactive motion editing for human-like figures,S695251,R175455,Algorithm and Techniques,L467385,A hierarchical curve fitting technique with a new inverse kinematics solver,"This paper presents a technique for adapting existing motion of a human-like character to have the desired features that are specified by a set of constraints. This problem can be typically formulated as a spacetime constraint problem. Our approach combines a hierarchical curve fitting technique with a new inverse kinematics solver. Using the kinematics solver, we can adjust the configuration of an articulated figure to meet the constraints in each frame. Through the fitting technique, the motion displacement of every joint at each constrained frame is interpolated and thus smoothly propagated to frames. We are able to adaptively add motion details to satisfy the constraints within a specified tolerance by adopting a multilevel Bspline representation which also provides a speedup for the interpolation. The performance of our system is further enhanced by the new inverse kinematics solver. We present a closed-form solution to compute the joint angles of a limb linkage. This analytical method greatly reduces the burden of a numerical optimization to find the solutions for full degrees of freedom of a human-like articulated figure. We demonstrate that the technique can be used for retargetting a motion to compensate for geometric variations caused by both characters and environments. Furthermore, we can also use this technique for directly manipulating a motion clip through a graphical interface. CR Categories: I.3.7 [Computer Graphics]: Threedimensional Graphics—Animation; G.1.2 [Numerical Analysis]: Approximation—Spline and piecewise polynomial approximation",TRUE,sentence
R417,Cultural History,R139993,The Role of Smart City Characteristics in the Plans of Fifteen Cities,S558948,R139995,Has finding,R139996,most smart city strategies fail to incorporate bottom-up approaches,"ABSTRACT This paper identifies the characteristics of smart cities as they emerge from the recent literature. It then examines whether and in what way these characteristics are present in the smart city plans of 15 cities: Amsterdam, Barcelona, London, PlanIT Valley, Stockholm, Cyberjaya, Singapore, King Abdullah Economic City, Masdar, Skolkovo, Songdo, Chicago, New York, Rio de Janeiro, and Konza. The results are presented with respect to each smart city characteristic. As expected, most strategies emphasize the role of information and communication technologies in improving the functionality of urban systems and advancing knowledge transfer and innovation networks. However, this research yields other interesting findings that may not yet have been documented across multiple case studies; for example, most smart city strategies fail to incorporate bottom-up approaches, are poorly adapted to accommodate the local needs of their area, and consider issues of privacy and security inadequately.",TRUE,sentence
R142,Earth Sciences,R140827,Sub-pixel mineral mapping using EO-1 Hyperion hyperspectral data,S563700,R140829,Analysis,R141099,Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm,"This study describes the utility of Earth Observation (EO)-1 Hyperion data for sub-pixel mineral investigation using Mixture Tuned Target Constrained Interference Minimized Filter (MTTCIMF) algorithm in hostile mountainous terrain of Rajsamand district of Rajasthan, which hosts economic mineralization such as lead, zinc, and copper etc. The study encompasses pre-processing, data reduction, Pixel Purity Index (PPI) and endmember extraction from reflectance image of surface minerals such as illite, montmorillonite, phlogopite, dolomite and chlorite. These endmembers were then assessed with USGS mineral spectral library and lab spectra of rock samples collected from field for spectral inspection. Subsequently, MTTCIMF algorithm was implemented on processed image to obtain mineral distribution map of each detected mineral. A virtual verification method has been adopted to evaluate the classified image, which uses directly image information to evaluate the result and confirm the overall accuracy and kappa coefficient of 68 % and 0.6 respectively. The sub-pixel level mineral information with reasonable accuracy could be a valuable guide to geological and exploration community for expensive ground and/or lab experiments to discover economic deposits. Thus, the study demonstrates the feasibility of Hyperion data for sub-pixel mineral mapping using MTTCIMF algorithm with cost and time effective approach.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R54819,"Distribution of an alien aquatic snail in relation to flow variability, human activities and water quality",S174619,R54821,Measure of disturbance,L107863,anthropogenic developments (e.g. towns and dams),"1. Disturbance and anthropogenic land use changes are usually considered to be key factors facilitating biological invasions. However, specific comparisons of invasion success between sites affected to different degrees by these factors are rare. 2. In this study we related the large-scale distribution of the invading New Zealand mud snail ( Potamopyrgus antipodarum ) in southern Victorian streams, Australia, to anthropogenic land use, flow variability, water quality and distance from the site to the sea along the stream channel. 3. The presence of P. antipodarum was positively related to an index of flow-driven disturbance, the coefficient of variability of mean daily flows for the year prior to the study. 4. Furthermore, we found that the invader was more likely to occur at sites with multiple land uses in the catchment, in the forms of grazing, forestry and anthropogenic developments (e.g. towns and dams), compared with sites with low-impact activities in the catchment. However, this relationship was confounded by a higher likelihood of finding this snail in lowland sites close to the sea. 5. We conclude that P. antipodarum could potentially be found worldwide at sites with similar ecological characteristics. We hypothesise that its success as an invader may be related to an ability to quickly re-colonise denuded areas and that population abundances may respond to increased food resources. Disturbances could facilitate this invader by creating spaces for colonisation (e.g. a possible consequence of floods) or changing resource levels (e.g. increased nutrient levels in streams with intense human land use in their catchments).",TRUE,sentence
R24,Ecology and Evolutionary Biology,R54070,Phenotypic variation of an alien species in a new environment: the body size and diet of American mink over time and at local and continental scales,S165727,R54071,Specific traits,L100585,Changes in the body mass and length,"Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. © 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681–693.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R54200,Spreading of the invasive Carpobrotus aff. acinaciformis in Mediterranean ecosystems: The advantage of performing in different light environments,S167251,R54201,Specific traits,L101849,Growth rates of main and lateral shoots,"ABSTRACT Question: Do specific environmental conditions affect the performance and growth dynamics of one of the most invasive taxa (Carpobrotus aff. acinaciformis) on Mediterranean islands? Location: Four populations located on Mallorca, Spain. Methods: We monitored growth rates of main and lateral shoots of this stoloniferous plant for over two years (2002–2003), comparing two habitats (rocky coast vs. coastal dune) and two different light conditions (sun vs. shade). In one population of each habitat type, we estimated electron transport rate and the level of plant stress (maximal photochemical efficiency Fv/Fm) by means of chlorophyll fluorescence. Results: Main shoots of Carpobrotus grew at similar rates at all sites, regardless habitat type. However, growth rate of lateral shoots was greater in shaded plants than in those exposed to sunlight. Its high phenotypic plasticity, expressed in different allocation patterns in sun and shade individuals, and its clonal growth which promotes the continuous sea...",TRUE,sentence
R24,Ecology and Evolutionary Biology,R54132,Elevational distribution limits of non-native species: combining observational and experimental evidence,S166451,R54133,Specific traits,L101185,Growth rates of plants from different elevations under different temperature treatments,"Background: In temperate mountains, most non-native plant species reach their distributional limit somewhere along the elevational gradient. However, it is unclear if growth limitations can explain upper range limits and whether phenotypic plasticity or genetic changes allow species to occupy a broad elevational gradient. Aims: We investigated how non-native plant individuals from different elevations responded to growing season temperatures, which represented conditions at the core and margin of the elevational distributions of the species. Methods: We recorded the occurrence of nine non-native species in the Swiss Alps and subsequently conducted a climate chamber experiment to assess growth rates of plants from different elevations under different temperature treatments. Results: The elevational limit observed in the field was not related to the species' temperature response in the climate chamber experiment. Almost all species showed a similar level of reduction in growth rates under lower temperatures independent of the upper elevational limit of the species' distribution. For two species we found indications for genetic differentiation among plants from different elevations. Conclusions: We conclude that factors other than growing season temperatures, such as extreme events or winter mortality, might shape the elevational limit of non-native species, and that ecological filtering might select for genotypes that are phenotypically plastic.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R56903,"Quantifying ""apparent"" impact and distinguishing impact from invasiveness in multispecies plant invasions",S191115,R56904,Type of effect description,L119403,"interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions","The quantification of invader impacts remains a major hurdle to understanding and managing invasions. Here, we demonstrate a method for quantifying the community-level impact of multiple plant invaders by applying Parker et al.'s (1999) equation (impact = range x local abundance x per capita effect or per unit effect) using data from 620 survey plots from 31 grasslands across west-central Montana, USA. In testing for interactive effects of multiple invaders on native plant abundance (percent cover), we found no evidence for invasional meltdown or synergistic interactions for the 25 exotics tested. While much concern exists regarding impact thresholds, we also found little evidence for nonlinear relationships between invader abundance and impacts. These results suggest that management actions that reduce invader abundance should reduce invader impacts monotonically in this system. Eleven of 25 invaders had significant per unit impacts (negative local-scale relationships between invader and native cover). In decomposing the components of impact, we found that local invader abundance had a significant influence on the likelihood of impact, but range (number of plots occupied) did not. This analysis helped to differentiate measures of invasiveness (local abundance and range) from impact to distinguish high-impact invaders from invaders that exhibit negligible impacts, even when widespread. Distinguishing between high- and low-impact invaders should help refine trait-based prediction of problem species. 
Despite the unique information derived from evaluation of per unit effects of invaders, invasiveness scores based on range and local abundance produced similar rankings to impact scores that incorporated estimates of per unit effects. Hence, information on range and local abundance alone was sufficient to identify problematic plant invaders at the regional scale. In comparing empirical data on invader impacts to the state noxious weed list, we found that the noxious weed list captured 45% of the high impact invaders but missed 55% and assigned the lowest risk category to the highest-impact invader. While such subjective weed lists help to guide invasive species management, empirical data are needed to develop more comprehensive rankings of ecological impacts. Using weed lists to classify invaders for testing invasion theory is not well supported.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R54841,Lack of native species recovery following severe exotic disturbance in southern Californian shrublands,S174880,R54843,Measure of disturbance,L108080,"landfill operations, soil excavation and tillage","Summary 1. Urban and agricultural activities are not part of natural disturbance regimes and may bear little resemblance to them. Such disturbances are common in densely populated semi-arid shrub communities of the south-western US, yet successional studies in these regions have been limited primarily to natural successional change and the impact of human-induced changes on natural disturbance regimes. Although these communities are resilient to recurrent and large-scale disturbance by fire, they are not necessarily well-adapted to recover from exotic disturbances. 2. This study investigated the effects of severe exotic disturbance (construction, heavy-vehicle activity, landfill operations, soil excavation and tillage) on shrub communities in southern California. These disturbances led to the conversion of indigenous shrublands to exotic annual communities with low native species richness. 3. Nearly 60% of the cover on disturbed sites consisted of exotic annual species, while undisturbed sites were primarily covered by native shrub species (68%). Annual species dominant on disturbed sites included Erodium botrys, Hypochaeris glabra, Bromus spp., Vulpia myuros and Avena spp. 4. The cover of native species remained low on disturbed sites even 71 years after initial exotic disturbance ceased. Native shrub seedlings were also very infrequent on disturbed sites, despite the presence of nearby seed sources. Only two native shrubs, Eriogonum fasciculatum and Baccharis sarothroides, colonized some disturbed sites in large numbers. 5. 
Although some disturbed sites had lower total soil nitrogen and percentage organic matter and higher pH than undisturbed sites, soil variables measured in this study were not sufficient to explain variations in species abundances on these sites. 6. Non-native annual communities observed in this study did not recover to a predisturbed state within typical successional time (< 25 years), supporting the hypothesis that altered stable states can occur if a community is pushed beyond its threshold of resilience.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R56589,A null model of temporal trends in biological invasion records,S187591,R56590,Type of effect description,L116507,null model to identify the expected trend in invasion records over time,"Biological invasions are a growing aspect of global biodiversity change. In many regions, introduced species richness increases supralinearly over time. This does not, however, necessarily indicate increasing introduction rates or invasion success. We develop a simple null model to identify the expected trend in invasion records over time. For constant introduction rates and success, the expected trend is exponentially increasing. Model extensions with varying introduction rate and success can also generate exponential distributions. We then analyse temporal trends in aquatic, marine and terrestrial invasion records. Most data sets support an exponential distribution (15/16) and the null invasion model (12/16). Thus, our model shows that no change in introduction rate or success need be invoked to explain the majority of observed trends. Further, an exponential trend does not necessarily indicate increasing invasion success or 'invasional meltdown', and a saturating trend does not necessarily indicate decreasing success or biotic resistance.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R57620,"Herbivory, disease, recruitment limitation, and success of alien and native tree species",S198494,R57622,Sub-hypothesis,L124618,P AN,"The Enemies Hypothesis predicts that alien plants have a competitive advantage over native plants because they are often introduced with few herbivores or diseases. To investigate this hypothesis, we transplanted seedlings of the invasive alien tree, Sapium sebiferum (Chinese tallow tree) and an ecologically similar native tree, Celtis laevigata (hackberry), into mesic forest, floodplain forest, and coastal prairie sites in east Texas and manipulated foliar fungal diseases and insect herbivores with fungicidal and insecticidal sprays. As predicted by the Enemies Hypothesis, insect herbivores caused significantly greater damage to untreated Celtis seedlings than to untreated Sapium seedlings. However, contrary to predictions, suppression of insect herbivores caused significantly greater increases in survivorship and growth of Sapium seedlings compared to Celtis seedlings. Regressions suggested that Sapium seedlings compensate for damage in the first year but that this greatly increases the risk of mortality in subsequent years. Fungal diseases had no effects on seedling survival or growth. The Recruitment Limitation Hypothesis predicts that the local abundance of a species will depend more on local seed input than on competitive ability at that location. To investigate this hypothesis, we added seeds of Celtis and Sapium on and off of artificial soil disturbances at all three sites. Adding seeds increased the density of Celtis seedlings and sometimes Sapium seedlings, with soil disturbance only affecting density of Celtis. Together the results of these experiments suggest that the success of Sapium may depend on high rates of seed input into these ecosystems and high growth potential, as well as performance advantages of seedlings caused by low rates of herbivory.",TRUE,sentence
R24,Ecology and Evolutionary Biology,R54781,Are invaders disturbance-limited? Conservation of mountain grasslands in Central Argentina,S174163,R54783,Measure of disturbance,L107483,"soil disturbance, above-ground biomass removal by cutting and burning","Abstract Extensive areas in the mountain grasslands of central Argentina are heavily invaded by alien species from Europe. A decrease in biodiversity and a loss of palatable species is also observed. The invasibility of the tall-grass mountain grassland community was investigated in an experiment of factorial design. Six alien species which are widely distributed in the region were sown in plots where soil disturbance, above-ground biomass removal by cutting and burning were used as treatments. Alien species did not establish in undisturbed plots. All three types of disturbances increased the number and cover of alien species; the effects of soil disturbance and biomass removal were cumulative. Cirsium vulgare and Oenothera erythrosepala were the most efficient alien colonizers. In conditions where disturbances did not continue the cover of aliens started to decrease in the second year; by the end of the third season, only a few adults were established. Consequently, disturbances are needed to maintain ali...",TRUE,sentence
R194,Engineering,R139283,Glucose Biosensor Based on Disposable Activated Carbon Electrodes Modified with Platinum Nanoparticles Electrodeposited on Poly(Azure A),S555151,R139286,Sensing material,L390525,Glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) ,"Herein, a novel electrochemical glucose biosensor based on glucose oxidase (GOx) immobilized on a surface containing platinum nanoparticles (PtNPs) electrodeposited on poly(Azure A) (PAA) previously electropolymerized on activated screen-printed carbon electrodes (GOx-PtNPs-PAA-aSPCEs) is reported. The resulting electrochemical biosensor was validated towards glucose oxidation in real samples and further electrochemical measurement associated with the generated H2O2. The electrochemical biosensor showed an excellent sensitivity (42.7 μA mM−1 cm−2), limit of detection (7.6 μM), linear range (20 μM–2.3 mM), and good selectivity towards glucose determination. Furthermore, and most importantly, the detection of glucose was performed at a low potential (0.2 V vs. Ag). The high performance of the electrochemical biosensor was explained through surface exploration using field emission SEM, XPS, and impedance measurements. The electrochemical biosensor was successfully applied to glucose quantification in several real samples (commercial juices and a plant cell culture medium), exhibiting a high accuracy when compared with a classical spectrophotometric method. This electrochemical biosensor can be easily prepared and opens up a good alternative in the development of new sensitive glucose sensors.",TRUE,sentence
R194,Engineering,R141136,A zipper RF MEMS tunable capacitor with interdigitated RF and actuation electrodes,S564141,R141138,keywords,L395897,Interdigitated RF,"This paper presents a new RF MEMS tunable capacitor based on the zipper principle and with interdigitated RF and actuation electrodes. The electrode configuration prevents dielectric charging under high actuation voltages. It also increases the capacitance ratio and the tunable analog range. The effect of the residual stress on the capacitance tunability is also investigated. Two devices with different interdigital RF and actuation electrodes are fabricated on an alumina substrate and result in a capacitance ratio around 3.0 (Cmin = 70?90 fF, Cmax = 240?270 fF) and with a Q > 100 at 3 GHz. This design can be used in wideband tunable filters and matching networks.",TRUE,sentence
R194,Engineering,R144792,Thermal annealing effect on β-Ga2O3 thin film solar blind photodetector heteroepitaxially grown on sapphire substrate,S579662,R144794,Film deposition method,L405330,Low pressure chemical vapor deposition (LPCVD),"This paper presents the effect of thermal annealing on β‐Ga2O3 thin film solar‐blind (SB) photodetector (PD) synthesized on c‐plane sapphire substrates by a low pressure chemical vapor deposition (LPCVD). The thin films were synthesized using high purity gallium (Ga) and oxygen (O2) as source precursors. The annealing was performed ex situ under the oxygen atmosphere, which helped to reduce oxygen or oxygen‐related vacancies in the thin film. Metal/semiconductor/metal (MSM) type photodetectors were fabricated using both the as‐grown and annealed films. The PDs fabricated on the annealed films had lower dark current, higher photoresponse and improved rejection ratio (R250/R370 and R250/R405) compared to the ones fabricated on the as‐grown films. These improved PD performances are due to the significant reduction of the photo‐generated carriers trapped by oxygen or oxygen‐related vacancies.",TRUE,sentence
R32,Environmental Health,R76026,Design and Implementation of e-Health System Based on Semantic Sensor Network Using IETF YANG,S348971,R76028,Has method,L249393,modeling the semantic e-Health data to represent the information of e-Health sensors,"Recently, healthcare services can be delivered effectively to patients anytime and anywhere using e-Health systems. e-Health systems are developed through Information and Communication Technologies (ICT) that involve sensors, mobiles, and web-based applications for the delivery of healthcare services and information. Remote healthcare is an important purpose of the e-Health system. Usually, the eHealth system includes heterogeneous sensors from diverse manufacturers producing data in different formats. Device interoperability and data normalization is a challenging task that needs research attention. Several solutions are proposed in the literature based on manual interpretation through explicit programming. However, programmatically implementing the interpretation of the data sender and data receiver in the e-Health system for the data transmission is counterproductive as modification will be required for each new device added into the system. In this paper, an e-Health system with the Semantic Sensor Network (SSN) is proposed to address the device interoperability issue. In the proposed system, we have used IETF YANG for modeling the semantic e-Health data to represent the information of e-Health sensors. This modeling scheme helps in provisioning semantic interoperability between devices and expressing the sensing data in a user-friendly manner. For this purpose, we have developed an ontology for e-Health data that supports different styles of data formats. The ontology is defined in YANG for provisioning semantic interpretation of sensing data in the system by constructing meta-models of e-Health sensors. 
The proposed approach assists in the auto-configuration of eHealth sensors and querying the sensor network with semantic interoperability support for the e-Health system.",TRUE,sentence
R32,Environmental Health,R76035,An ontology-based healthcare monitoring system in the Internet of Things,S348926,R76037,Has result,L249377,show the feasibility and efficiency of the proposed ontology,"Continuous health monitoring is a hopeful solution that can efficiently provide health-related services to elderly people suffering from chronic diseases. The emergence of the Internet of Things (IoT) technologies has led to their adoption in the development of new healthcare systems for efficient healthcare monitoring, diagnosis and treatment. This paper presents a healthcare-IoT based system where an ontology is proposed to provide semantic interoperability among heterogeneous devices and users in healthcare domain. Our work consists of integrating existing ontologies related to health, IoT domain and time, instantiating classes, and establishing reasoning rules. The model created has been validated by semantic querying. The results show the feasibility and efficiency of the proposed ontology and its capability to grow into a more understanding and specialized ontology for health monitoring and treatment.",TRUE,sentence
R32,Environmental Health,R76032,Meaningful Integration of Data from Heterogeneous Health Services and Home Environment Based on Ontology,S348961,R76034,Has result,L249387,successfully integrates the health data and home environment data into a resource graph,"The development of electronic health records, wearable devices, health applications and Internet of Things (IoT)-empowered smart homes is promoting various applications. It also makes health self-management much more feasible, which can partially mitigate one of the challenges that the current healthcare system is facing. Effective and convenient self-management of health requires the collaborative use of health data and home environment data from different services, devices, and even open data on the Web. Although health data interoperability standards including HL7 Fast Healthcare Interoperability Resources (FHIR) and IoT ontology including Semantic Sensor Network (SSN) have been developed and promoted, it is impossible for all the different categories of services to adopt the same standard in the near future. This study presents a method that applies Semantic Web technologies to integrate the health data and home environment data from heterogeneously built services and devices. We propose a Web Ontology Language (OWL)-based integration ontology that models health data from HL7 FHIR standard implemented services, normal Web services and Web of Things (WoT) services and Linked Data together with home environment data from formal ontology-described WoT services. It works on the resource integration layer of the layered integration architecture. An example use case with a prototype implementation shows that the proposed method successfully integrates the health data and home environment data into a resource graph. The integrated data are annotated with semantics and ontological links, which make them machine-understandable and cross-system reusable.",TRUE,sentence
R54,Environmental Microbiology and Microbial Ecology,R78291,The Effect of Hydroxycinnamic Acids on the Microbial Mineralisation of Phenanthrene in Soil,S354093,R78298,Implication,R78299,"Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community.","The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C–phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 µg kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C–phenanthrene mineralisation (P 200 µg kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical–microbe–organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH–contaminated soils.",TRUE,sentence
R54,Environmental Microbiology and Microbial Ecology,R78291,The Effect of Hydroxycinnamic Acids on the Microbial Mineralisation of Phenanthrene in Soil,S354094,R78298,Implication,R78300,"Therefore, effective understanding of phytochemical–microbe–organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH–contaminated soils.","The effect of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids) on the microbial mineralisation of phenanthrene in soil slurry by the indigenous microbial community has been investigated. The rate and extent of 14C–phenanthrene mineralisation in artificially spiked soils were monitored in the absence of hydroxycinnamic acids and presence of hydroxycinnamic acids applied at three different concentrations (50, 100 and 200 µg kg-1) either as single compounds or as a mixture of hydroxycinnamic acids (caffeic, ferulic and p-coumaric acids at a 1:1:1 ratio). The highest extent of 14C–phenanthrene mineralisation (P 200 µg kg-1. Depending on its concentration in soil, hydroxycinnamic acids can either stimulate or inhibit mineralisation of phenanthrene by indigenous soil microbial community. Therefore, effective understanding of phytochemical–microbe–organic contaminant interactions is essential for further development of phytotechnologies for remediation of PAH–contaminated soils.",TRUE,sentence
R54,Environmental Microbiology and Microbial Ecology,R78283,The Effect of Rhizosphere Soil and Root Tissues Amendment on Microbial Mineralisation of Target 14C–Hydrocarbons in Contaminated Soil,S354075,R78289,Implication,R78290,"This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon–degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.","The effect of rhizosphere soil or root tissues amendments on the microbial mineralisation of hydrocarbons in soil slurry by the indigenous microbial communities has been investigated. In this study, rhizosphere soil and root tissues of reed canary grass (Phalaris arundinacea), channel grass (Vallisneria spiralis), blackberry (Rubus fructicosus) and goat willow (Salix caprea) were collected from the former Shell and Imperial Industries (ICI) Refinery site in Lancaster, UK. The rates and extents of 14C–hydrocarbons (naphthalene, phenanthrene, hexadecane or octacosane) mineralisation in artificially spiked soils were monitored in the absence and presence of 5% (wet weight) of rhizosphere soil or root tissues. Respirometric and microbial assays were monitored in fresh (0 d) and pre–incubated (28 d) artificially spiked soils following amendment with rhizosphere soil or root tissues. There were significant increases (P < 0.001) in the extents of 14C–naphthalene and 14C–phenanthrene mineralisation in fresh artificially spiked soils amended with rhizosphere soil and root tissues compared to those measured in unamended soils. However, amendment of fresh artificially spiked soils with rhizosphere soil and root tissues did not enhance the microbial mineralisation of 14C–hexadecane or 14C–octacosane by indigenous microbial communities. 
Apart from artificially spiked soil systems containing naphthalene (amended with reed canary grass and channel grass rhizosphere) and hexadecane amended with goat willow rhizosphere, microbial mineralisation of hydrocarbons was further enhanced following 28 d soil–organic contaminants pre–exposure and subsequent amendment with rhizosphere soil or root tissues. This study suggests that organic chemicals in roots and/or rhizosphere can enhance the microbial degradation of petroleum hydrocarbons in freshly contaminated soil by supporting higher numbers of hydrocarbon–degrading populations, promoting microbial activity and/or enhancing bioavailability of organic contaminants.",TRUE,sentence
R145,Environmental Sciences,R78114,Petroleum Exploration and Production: Past and Present Environmental Issues in the Nigeria’s Niger Delta,S353882,R78117,Has evaluation,R78208,"Although effective understanding of petroleum production and associated environmental degradation is important for developing management strategies, there is a need for more multidisciplinary approaches for sustainable risk mitigation and effective environmental protection of the oil–producing host communities in the Niger Delta.","Petroleum exploration and production in the Nigeria’s Niger Delta region and export of oil and gas resources by the petroleum sector has substantially improved the nation’s economy over the past five decades. However, activities associated with petroleum exploration, development and production operations have local detrimental and significant impacts on the atmosphere, soils and sediments, surface and groundwater, marine environment and terrestrial ecosystems in the Niger Delta. Discharges of petroleum hydrocarbon and petroleum–derived waste streams have caused environmental pollution, adverse human health effects, socio–economic problems and degradation of host communities in the 9 oil–producing states in the Niger Delta region. Many approaches have been developed for the management of environmental impacts of petroleum production–related activities and several environmental laws have been institutionalized to regulate the Nigerian petroleum industry. However, the existing statutory laws and regulations for environmental protection appear to be grossly inadequate and some of the multinational oil companies operating in the Niger Delta region have failed to adopt sustainable practices to prevent environmental pollution. This review examines the implications of multinational oil companies operations and further highlights some of the past and present environmental issues associated with petroleum exploitation and production in the Nigeria’s Niger Delta. 
Although effective understanding of petroleum production and associated environmental degradation is important for developing management strategies, there is a need for more multidisciplinary approaches for sustainable risk mitigation and effective environmental protection of the oil–producing host communities in the Niger Delta.",TRUE,sentence
R145,Environmental Sciences,R78209,Role of Plants and Microbes in Bioremediation of Petroleum Hydrocarbons Contaminated Soils,S583890,R78211,Has result,L407622,Bioremediation strategies have been recognized as an environmentally friendly and cost–effective alternative in comparison with the traditional physico-chemical approaches for the restoration and reclamation of contaminated sites. The success of any plant–based remediation strategy depends on the interaction of plants with rhizospheric microbial populations in the surrounding soil medium and the organic contaminant. ,"Petroleum hydrocarbons contamination of soil, sediments and marine environment associated with the inadvertent discharges of petroleum–derived chemical wastes and petroleum hydrocarbons associated with spillage and other sources into the environment often pose harmful effects on human health and the natural environment, and have negative socio–economic impacts in the oil–producing host communities. In practice, plants and microbes have played a major role in microbial transformation and growth–linked mineralization of petroleum hydrocarbons in contaminated soils and/or sediments over the past years. Bioremediation strategies have been recognized as an environmentally friendly and cost–effective alternative in comparison with the traditional physico-chemical approaches for the restoration and reclamation of contaminated sites. The success of any plant–based remediation strategy depends on the interaction of plants with rhizospheric microbial populations in the surrounding soil medium and the organic contaminant. Effective understanding of the fate and behaviour of organic contaminants in the soil can help determine the persistence of the contaminant in the terrestrial environment, promote the success of any bioremediation approach and help develop a high–level of risks mitigation strategies. 
In this review paper, we provide a clear insight into the role of plants and microbes in the microbial degradation of petroleum hydrocarbons in contaminated soil that have emerged from the growing body of bioremediation research and its applications in practice. In addition, plant–microbe interactions have been discussed with respect to biodegradation of petroleum hydrocarbons and these could provide a better understanding of some important factors necessary for development of in situ bioremediation strategies for risks mitigation in petroleum hydrocarbon–contaminated soil.",TRUE,sentence
R145,Environmental Sciences,R8048,Future changes of wind energy potentials over Europe in a large CMIP5 multi-model ensemble: FUTURE CHANGES OF WIND ENERGY OVER EUROPE IN A CMIP5 ENSEMBLE,S12124,R8049,Has result,R8056,changes in the wind energy potentials over Europe may take place in future decades.,"A statistical‐dynamical downscaling method is used to estimate future changes of wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. With this aim, 22 global climate models (GCMs) of the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO‐CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) following two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more likely than not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe. There is some uncertainty with respect to the magnitude and the sign of the changes. Higher robustness in future changes is observed for specific seasons. Except from the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decreasing for the summer season, resulting in a strong increase of the intra‐annual variability for most of Europe. The latter is, in particular, probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 compared to 2021–2060 and for RCP8.5 compared to RCP4.5. Regarding changes of the inter‐annual variability of Eout for Central Europe, the future projections strongly vary between individual models and also between future periods and scenarios within single models. This study showed for an ensemble of 22 CMIP5 models that changes in the wind energy potentials over Europe may take place in future decades. 
However, due to the uncertainties detected in this research, further investigations with multi‐model ensembles are needed to provide a better quantification and understanding of the future changes.",TRUE,sentence
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353933,R78232,Has implementation,R78234,"Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment.","This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. 
Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,sentence
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353935,R78235,Conclusion,R78236,"monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.","This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. 
Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,sentence
R145,Environmental Sciences,R8061,Wind extremes in the North Sea Basin under climate change: An ensemble study of 12 CMIP5 GCMs: WIND EXTREMES IN THE NORTH SEA IN CMIP5,S12144,R8062,Has result,R8068,n indication that the annual extreme wind events are coming more often from western directions,"Coastal safety may be influenced by climate change, as changes in extreme surge levels and wave extremes may increase the vulnerability of dunes and other coastal defenses. In the North Sea, an area already prone to severe flooding, these high surge levels and waves are generated by low atmospheric pressure and severe wind speeds during storm events. As a result of the geometry of the North Sea, not only the maximum wind speed is relevant, but also wind direction. Climate change could change maximum wind conditions, with potentially negative effects for coastal safety. Here, we use an ensemble of 12 Coupled Model Intercomparison Project Phase 5 (CMIP5) General Circulation Models (GCMs) and diagnose the effect of two climate scenarios (rcp4.5 and rcp8.5) on annual maximum wind speed, wind speeds with lower return frequencies, and the direction of these annual maximum wind speeds. The 12 selected CMIP5 models do not project changes in annual maximum wind speed and in wind speeds with lower return frequencies; however, we do find an indication that the annual extreme wind events are coming more often from western directions. Our results are in line with the studies based on CMIP3 models and do not confirm the statement based on some reanalysis studies that there is a climate‐change‐related upward trend in storminess in the North Sea area.",TRUE,sentence
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353926,R78226,Has evaluation,R78227,The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region.,"This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. 
The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,sentence
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353932,R78232,Has implementation,R78233,The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. ,"This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. 
The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,sentence
R145,Environmental Sciences,R78224,"Assessment of Cadmium and Lead Distribution in the Outcrop Rocks of Abakaliki Anticlinorium in the Southern Benue Trough, Nigeria",S353929,R78226,Aim,R78230,"This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. ","This study investigates the distribution of cadmium and lead concentrations in the outcrop rock samples collected from Abakaliki anticlinorium in the Southern Benue Trough, Nigeria. The outcrop rock samples from seven sampling locations were air–dried for seventy–two hours, homogenized by grinding and pass through < 63 micron mesh sieve. The ground and homogenized rock samples were pulverized and analyzed for cadmium and lead using X-Ray Fluorescence Spectrometer. The concentrations of heavy metals in the outcrop rock samples ranged from < 0.10 – 7.95 mg kg–1 for cadmium (Cd) and < 1.00 – 4966.00 mg kg–1 for lead (Pb). Apart from an anomalous concentration measured in Afikpo Shale (Middle Segment), the results obtained revealed that rock samples from all the sampling locations yielded cadmium concentrations of < 0.10 mg kg–1 and the measured concentrations were below the average crustal abundance of 0.50 mg kg–1. Although background concentration of <1.00 ± 0.02 mg kg–1 was measured in Abakaliki Shale, rock samples from all the sampling locations revealed anomalous lead concentrations above average crustal abundance of 30 mg kg–1. The results obtained reveal important contributions towards understanding of heavy metal distribution patterns and provide baseline data that can be used for potential identification of areas at risk associated with natural sources of heavy metals contamination in the region. The use of outcrop rocks provides a cost–effective approach for monitoring regional heavy metal contamination associated with dissolution and/or weathering of rocks or parent materials. 
Evaluation of heavy metals may be effectively used in large scale regional pollution monitoring of soil, groundwater, atmospheric and marine environment. Therefore, monitoring of heavy metal concentrations in soils, groundwater and atmospheric environment is imperative in order to prevent bioaccumulation in various ecological receptors.",TRUE,sentence
R317,Geographic Information Sciences,R110780,Uncovering the Relationship between Human Connectivity Dynamics and Land Use,S504813,R110784,Data,R110795, Copernicus Land Monitoring Service Urban Atlas,"CDR (Call Detail Record) data are one type of mobile phone data collected by operators each time a user initiates/receives a phone call or sends/receives an sms. CDR data are a rich geo-referenced source of user behaviour information. In this work, we perform an analysis of CDR data for the city of Milan that originate from Telecom Italia Big Data Challenge. A set of graphs is generated from aggregated CDR data, where each node represents a centroid of an RBS (Radio Base Station) polygon, and each edge represents aggregated telecom traffic between two RBSs. To explore the community structure, we apply a modularity-based algorithm. Community structure between days is highly dynamic, with variations in number, size and spatial distribution. One general rule observed is that communities formed over the urban core of the city are small in size and prone to dynamic change in spatial distribution, while communities formed in the suburban areas are larger in size and more consistent with respect to their spatial distribution. To evaluate the dynamics of change in community structure between days, we introduced different graph based and spatial community properties which contain latent footprint of human dynamics. We created land use profiles for each RBS polygon based on the Copernicus Land Monitoring Service Urban Atlas data set to quantify the correlation and predictivennes of human dynamics properties based on land use. The results reveal a strong correlation between some properties and land use which motivated us to further explore this topic. 
The proposed methodology has been implemented in the programming language Scala inside the Apache Spark engine to support the most computationally intensive tasks and in Python using the rich portfolio of data analytics and machine learning libraries for the less demanding tasks.",TRUE,sentence
R317,Geographic Information Sciences,R110803,A new insight into land use classification based on aggregated mobile phone data,S504930,R110805,Has result,L364683,"The detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases","Land-use classification is essential for urban planning. Urban land-use types can be differentiated either by their physical characteristics (such as reflectivity and texture) or social functions. Remote sensing techniques have been recognized as a vital method for urban land-use classification because of their ability to capture the physical characteristics of land use. Although significant progress has been achieved in remote sensing methods designed for urban land-use classification, most techniques focus on physical characteristics, whereas knowledge of social functions is not adequately used. Owing to the wide usage of mobile phones, the activities of residents, which can be retrieved from the mobile phone data, can be determined in order to indicate the social function of land use. This could bring about the opportunity to derive land-use information from mobile phone data. To verify the application of this new data source to urban land-use classification, we first construct a vector of aggregated mobile phone data to characterize land-use types. This vector is composed of two aspects: the normalized hourly call volume and the total call volume. A semi-supervised fuzzy c-means clustering approach is then applied to infer the land-use types. The method is validated using mobile phone data collected in Singapore. Land use is determined with a detection rate of 58.03%. An analysis of the land-use classification results shows that the detection rate decreases as the heterogeneity of land use increases, and increases as the density of cell phone towers increases.",TRUE,sentence
R146,Geology,R78214,Textural and Heavy Minerals Characterization of Coastal Sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State – Nigeria,S353914,R78222,Conclusion,R78223,"Findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.","Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty–two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70Ф to 2.83Ф, sorting values ranged from 0.39Ф – 0.60Ф, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. 
The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.",TRUE,sentence
R146,Geology,R78214,Textural and Heavy Minerals Characterization of Coastal Sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State – Nigeria,S353907,R78216,Aim of the study,R78217,"The main aim was to infer their provenance, transport history and environment of deposition.","Textural characterization and heavy mineral studies of beach sediments in Ibeno and Eastern Obolo Local Government Areas of Akwa Ibom State were carried out in the present study. The main aim was to infer their provenance, transport history and environment of deposition. Sediment samples were collected at the water–sediment contact along the shoreline at an interval of about 3m. Ten samples were collected from study location 1 (Ibeno Beach) and twelve samples were collected from study location 2 (Eastern Obolo Beach). A total of twenty–two samples were collected from the field and brought to the laboratory for textural and compositional analyses. The results showed that the value of graphic mean size ranged from 1.70Ф to 2.83Ф, sorting values ranged from 0.39Ф – 0.60Ф, skewness values ranged from -0.02 to 0.10 while kurtosis values ranged from 1.02 to 2.46, indicating medium to fine grained and well sorted sediments. This suggested that the sediments have been transported far from their source. Longshore current and onshore–offshore movements of sediment are primarily responsible in sorting of the heavy minerals. The histogram charts for the different samples and standard deviation versus skewness indicated a beach environment of deposition. This implies that the sediments are dominated by one class of grain size; a phenomenon characteristic of beach environments. The heavy mineral assemblages identified in this research work were rutile, zircon, tourmaline, hornblende, apatite, diopside, glauconite, pumpellyite, cassiterite, epidote, garnet, augite, enstatite, andalusite and opaque minerals. 
The zircon-tourmaline-rutile (ZTR) index ranged from 47.30% to 87.00% with most of the samples showing a ZTR index greater than 50%. These indicated that the sediments were mineralogically sub-mature and have been transported far from their source. The heavy minerals identified are indicative of being products of reworked sediments of both metamorphic (high rank) and igneous (both mafic and sialic) origin probably derived from the basement rocks of the Oban Massif as well as reworked sediments of the Benue Trough. Therefore, findings from the present study indicated that erosion, accretion, and stability of beaches are controlled by strong hydrodynamic and hydraulic processes.",TRUE,sentence
R93,Human and Clinical Nutrition,R78237,Comparative Assessment of Iodine Content of Commercial Table Salt Brands Available in Nigerian Market,S353949,R78239,Has method,R78240,"Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. ","Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. 
The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.",TRUE,sentence
R93,Human and Clinical Nutrition,R78237,Comparative Assessment of Iodine Content of Commercial Table Salt Brands Available in Nigerian Market,S353950,R78239,Has method,R78241,"In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration.","Iodine deficiency disorders (IDD) has been a major global public health problem threatening more than 2 billion people worldwide. Considering various human health implications associated with iodine deficiency, universal salt iodization programme has been recognized as one of the best methods of preventing iodine deficiency disorder and iodizing table salt is currently done in many countries. In this study, comparative assessment of iodine content of commercially available table salt brands in Nigerian market were investigated and iodine content were measured in ten table salt brands samples using iodometric titration. The iodine content ranged from 14.80 mg/kg – 16.90 mg/kg with mean value of 15.90 mg/kg for Sea salt; 24.30 mg/kg – 25.40 mg/kg with mean value of 24.60 mg/kg for Dangote salt (blue sachet); 22.10 mg/kg – 23.10 mg/kg with mean value of 22.40 mg/kg for Dangote salt (red sachet); 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Mr Chef salt; 23.30 mg/kg – 24.30 mg/kg with mean value of 23.60 mg/kg for Annapurna; 26.80 mg/kg – 27.50 mg/kg with mean value of 27.20mg/kg for Uncle Palm salt; 23.30 mg/kg – 29.60 mg/kg with mean content of 26.40 mg/kg for Dangote (bag); 25.40 mg/kg – 26.50 mg/kg with mean value of 26.50 mg/kg for Royal salt; 36.80 mg/kg – 37.20 mg/kg with mean iodine content of 37.0 mg/kg for Abakaliki refined salt, and 30.07 mg/kg – 31.20 mg/kg with mean value of 31.00 mg/kg for Ikom refined salt. 
The mean iodine content measured in the Sea salt brand (15.70 mg/kg) was significantly P < 0.01 lower compared to those measured in other table salt brands. Although the iodine content of Abakaliki and Ikom refined salt exceed the recommended value, it is clear that only Sea salt brand falls below the World Health Organization (WHO) recommended value (20 – 30 mg/kg), while the remaining table salt samples are just within the range. The results obtained have revealed that 70 % of the table salt brands were adequately iodized while 30 % of the table salt brands were not adequately iodized and provided baseline data that can be used for potential identification of human health risks associated with inadequate and/or excess iodine content in table salt brands consumed in households in Nigeria.",TRUE,sentence
R278,Information Science,R68931,The New DBpedia Release Cycle: Increasing Agility and Efficiency in Knowledge Extraction Workflows,S327337,R68933,Process,R68937,"agile, cost-efficient processes and scale up productivity","Abstract Since its inception in 2007, DBpedia has been constantly releasing open data in RDF, extracted from various Wikimedia projects using a complex software system called the DBpedia Information Extraction Framework (DIEF). For the past 12 years, the software received a plethora of extensions by the community, which positively affected the size and data quality. Due to the increase in size and complexity, the release process was facing huge delays (from 12 to 17 months cycle), thus impacting the agility of the development. In this paper, we describe the new DBpedia release cycle including our innovative release workflow, which allows development teams (in particular those who publish large, open data) to implement agile, cost-efficient processes and scale up productivity. The DBpedia release workflow has been re-engineered, its new primary focus is on productivity and agility , to address the challenges of size and complexity. At the same time, quality is assured by implementing a comprehensive testing methodology. We run an experimental evaluation and argue that the implemented measures increase agility and allow for cost-effective quality-control and debugging and thus achieve a higher level of maintainability. As a result, DBpedia now publishes regular (i.e. monthly) releases with over 21 billion triples with minimal publishing effort .",TRUE,sentence
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351832,R76425,description,L250537,"allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English","The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,sentence
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5817,R5269,Material,R5281,an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone,"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,sentence
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5708,R5171,Material,R5184,both federated and non-federated SPARQL queries,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laundromat, and SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such a large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from a large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laundromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,sentence
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495055,R108654,Material,R108661,"European Nucleotide Archive, ArrayExpress, and BioSamples databases","Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,sentence
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5818,R5269,Material,R5282,"Findable, Accessible, Interoperable, Reusable (FAIR) data","Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,sentence
R278,Information Science,R186167,Scalable SPARQL querying of large RDF graphs,S711753,R186169,Material,R186176,popular multi-node RDF data management systems,"The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.",TRUE,sentence
R278,Information Science,R5166,More Complete Resultset Retrieval from Large Heterogeneous RDF Sources,S5702,R5171,Material,R5178,raw data dumps and HDT files,"Over the last years, the Web of Data has grown significantly. Various interfaces such as LOD Stats, LOD Laundromat, and SPARQL endpoints provide access to the hundreds of thousands of RDF datasets, representing billions of facts. These datasets are available in different formats such as raw data dumps and HDT files or directly accessible via SPARQL endpoints. Querying such a large amount of distributed data is particularly challenging and many of these datasets cannot be directly queried using the SPARQL query language. In order to tackle these problems, we present WimuQ, an integrated query engine to execute SPARQL queries and retrieve results from a large amount of heterogeneous RDF data sources. Presently, WimuQ is able to execute both federated and non-federated SPARQL queries over a total of 668,166 datasets from LOD Stats and LOD Laundromat as well as 559 active SPARQL endpoints. These data sources represent a total of 221.7 billion triples from more than 5 terabytes of information from datasets retrieved using the service ""Where is My URI"" (WIMU). Our evaluation on state-of-the-art real-data benchmarks shows that WimuQ retrieves more complete results for the benchmark queries.",TRUE,sentence
R278,Information Science,R108652,"A streamlined workflow for conversion, peer review, and publication of genomics metadata as omics data papers ",S495064,R108654,Process,R108669,streamlined import,"Abstract Background Data papers have emerged as a powerful instrument for open data publishing, obtaining credit, and establishing priority for datasets generated in scientific experiments. Academic publishing improves data and metadata quality through peer review and increases the impact of datasets by enhancing their visibility, accessibility, and reusability. Objective We aimed to establish a new type of article structure and template for omics studies: the omics data paper. To improve data interoperability and further incentivize researchers to publish well-described datasets, we created a prototype workflow for streamlined import of genomics metadata from the European Nucleotide Archive directly into a data paper manuscript. Methods An omics data paper template was designed by defining key article sections that encourage the description of omics datasets and methodologies. A metadata import workflow, based on REpresentational State Transfer services and Xpath, was prototyped to extract information from the European Nucleotide Archive, ArrayExpress, and BioSamples databases. Findings The template and workflow for automatic import of standard-compliant metadata into an omics data paper manuscript provide a mechanism for enhancing existing metadata through publishing. Conclusion The omics data paper structure and workflow for import of genomics metadata will help to bring genomic and other omics datasets into the spotlight. Promoting enhanced metadata descriptions and enforcing manuscript peer review and data auditing of the underlying datasets brings additional quality to datasets. 
We hope that streamlined metadata reuse for scholarly publishing encourages authors to create enhanced metadata descriptions in the form of data papers to improve both the quality of their metadata and its findability and accessibility.",TRUE,sentence
R278,Information Science,R5259,OpenBiodiv: A Knowledge Graph for Literature-Extracted Linked Open Data in Biodiversity Science,S5820,R5269,method,R5284,the concept of the Open Biodiversity Knowledge Management System (OBKMS),"Hundreds of years of biodiversity research have resulted in the accumulation of a substantial pool of communal knowledge; however, most of it is stored in silos isolated from each other, such as published articles or monographs. The need for a system to store and manage collective biodiversity knowledge in a community-agreed and interoperable open format has evolved into the concept of the Open Biodiversity Knowledge Management System (OBKMS). This paper presents OpenBiodiv: An OBKMS that utilizes semantic publishing workflows, text and data mining, common standards, ontology modelling and graph database technologies to establish a robust infrastructure for managing biodiversity knowledge. It is presented as a Linked Open Dataset generated from scientific literature. OpenBiodiv encompasses data extracted from more than 5000 scholarly articles published by Pensoft and many more taxonomic treatments extracted by Plazi from journals of other publishers. The data from both sources are converted to Resource Description Framework (RDF) and integrated in a graph database using the OpenBiodiv-O ontology and an RDF version of the Global Biodiversity Information Facility (GBIF) taxonomic backbone. Through the application of semantic technologies, the project showcases the value of open publishing of Findable, Accessible, Interoperable, Reusable (FAIR) data towards the establishment of open science practices in the biodiversity domain.",TRUE,sentence
R278,Information Science,R76428,CCOHA: Clean Corpus of Historical American English,S351858,R76430,description,L250554,The Corpus of Historical American English (COHA) is one of the most commonly used large corpora in diachronic studies in English,"Modelling language change is an increasingly important area of interest within the fields of sociolinguistics and historical linguistics. In recent years, there has been a growing number of publications whose main concern is studying changes that have occurred within the past centuries. The Corpus of Historical American English (COHA) is one of the most commonly used large corpora in diachronic studies in English. This paper describes methods applied to the downloadable version of the COHA corpus in order to overcome its main limitations, such as inconsistent lemmas and malformed tokens, without compromising its qualitative and distributional properties. The resulting corpus CCOHA contains a larger number of cleaned word tokens which can offer better insights into language change and allow for a larger variety of tasks to be performed.",TRUE,sentence
R137681,"Information Systems, Process and Knowledge Management",R140106,Smart Cities in Europe,S559129,R140108,has research domain,R140111,definition of the concept of the “smart city.”,"Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the “smart city” has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the “smart city.” We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.",TRUE,sentence
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559106,R140092,Has finding,R140095,"hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype","Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. 
A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,sentence
R137681,"Information Systems, Process and Knowledge Management",R140059,Open data hackathons: an innovative strategy to enhance entrepreneurial intention,S559054,R140061,Has finding,R140063,need for more research to be conducted regarding the open data in entrepreneurship through hackathons,"
Purpose
In terms of entrepreneurship, open data benefits include economic growth, innovation, empowerment and new or improved products and services. Hackathons encourage the development of new applications using open data and the creation of startups based on these applications. Researchers focus on factors that affect nascent entrepreneurs’ decision to create a startup, but research in the field of open data hackathons has not been fully investigated yet. This paper aims to suggest a model that incorporates factors that affect the decision of establishing a startup by developers who have participated in open data hackathons.
Design/methodology/approach
In total, 70 papers were examined and analyzed using a three-phased literature review methodology, which was suggested by Webster and Watson (2002). These surveys investigated several factors that affect a nascent entrepreneur to create a startup.
Findings
Eventually, by identifying the motivations for developers to participate in a hackathon, and understanding the benefits of the use of open data, researchers will be able to elaborate the proposed model and evaluate whether the contest has contributed to the decision to establish a startup, which of the factors affecting that decision apply to open data developers, and whether the participants of the contest agree with these factors.
Originality/value
The paper expands the scope of open data research on entrepreneurship field, stating the need for more research to be conducted regarding the open data in entrepreneurship through hackathons.
",TRUE,sentence
R137681,"Information Systems, Process and Knowledge Management",R140106,Smart Cities in Europe,S559128,R140108,Has output,R140110,operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27,"Urban performance currently depends not only on a city's endowment of hard infrastructure (physical capital), but also, and increasingly so, on the availability and quality of knowledge communication and social infrastructure (human and social capital). The latter form of capital is decisive for urban competitiveness. Against this background, the concept of the “smart city” has recently been introduced as a strategic device to encompass modern urban production factors in a common framework and, in particular, to highlight the importance of Information and Communication Technologies (ICTs) in the last 20 years for enhancing the competitive profile of a city. The present paper aims to shed light on the often elusive definition of the concept of the “smart city.” We provide a focused and operational definition of this construct and present consistent evidence on the geography of smart cities in the EU27. Our statistical and graphical analyses exploit in depth, for the first time to our knowledge, the most recent version of the Urban Audit data set in order to analyze the factors determining the performance of smart cities. We find that the presence of a creative class, the quality of and dedicated attention to the urban environment, the level of education, and the accessibility to and use of ICTs for public administration are all positively correlated with urban wealth. This result prompts the formulation of a new strategic agenda for European cities that will allow them to achieve sustainable urban development and a better urban landscape.",TRUE,sentence
R112125,Machine Learning,R159399,"DEHB: Evolutionary Hyperband for Scalable, Robust and Efficient Hyperparameter Optimization",S635022,R159430,keywords,R159436,evolutionary search,"Modern machine learning algorithms crucially rely on several design decisions to achieve strong performance, making the problem of Hyperparameter Optimization (HPO) more important than ever. Here, we combine the advantages of the popular bandit-based HPO method Hyperband (HB) and the evolutionary search approach of Differential Evolution (DE) to yield a new HPO method which we call DEHB. Comprehensive results on a very broad range of HPO problems, as well as a wide range of tabular benchmarks from neural architecture search, demonstrate that DEHB achieves strong performance far more robustly than all previous HPO methods we are aware of, especially for high-dimensional problems with discrete input dimensions. For example, DEHB is up to 1000x faster than random search. It is also efficient in computational time, conceptually simple and easy to implement, positioning it well to become a new default HPO method.",TRUE,sentence
R112125,Machine Learning,R144816,NLTK: The Natural Language Toolkit,S579782,R144818,description,L405384,"NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora.","NLTK, the Natural Language Toolkit, is a suite of open source program modules, tutorials and problem sets, providing ready-to-use computational linguistics courseware. NLTK covers symbolic and statistical natural language processing, and is interfaced to annotated corpora. Students augment and replace existing components, learn structured programming by example, and manipulate sophisticated models from the outset.",TRUE,sentence
R112125,Machine Learning,R162333,Sample-Efficient Automated Deep Reinforcement Learning,S647504,R162335,keywords,R162339,Off-Policy RL,"Despite significant progress in challenging problems across various domains, applying state-of-the-art deep reinforcement learning (RL) algorithms remains challenging due to their sensitivity to the choice of hyperparameters. This sensitivity can partly be attributed to the non-stationarity of the RL problem, potentially requiring different hyperparameter settings at various stages of the learning process. Additionally, in the RL setting, hyperparameter optimization (HPO) requires a large number of environment interactions, hindering the transfer of the successes in RL to real-world applications. In this work, we tackle the issues of sample-efficient and dynamic HPO in RL. We propose a population-based automated RL (AutoRL) framework to meta-optimize arbitrary off-policy RL algorithms. In this framework, we optimize the hyperparameters and also the neural architecture while simultaneously training the agent. By sharing the collected experience across the population, we substantially increase the sample efficiency of the meta-optimization. We demonstrate the capabilities of our sample-efficient AutoRL approach in a case study with the popular TD3 algorithm in the MuJoCo benchmark suite, where we reduce the number of environment interactions needed for meta-optimization by up to an order of magnitude compared to population-based training.",TRUE,sentence
R112125,Machine Learning,R147894,Active Learning Yields Better Training Data for Scientific Named Entity Recognition,S593430,R147896,description,L412682,"We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance.","Despite significant progress in natural language processing, machine learning models require substantial expertannotated training data to perform well in tasks such as named entity recognition (NER) and entity relations extraction. Furthermore, NER is often more complicated when working with scientific text. For example, in polymer science, chemical structure may be encoded using nonstandard naming conventions, the same concept can be expressed using many different terms (synonymy), and authors may refer to polymers with ad-hoc labels. These challenges, which are not unique to polymer science, make it difficult to generate training data, as specialized skills are needed to label text correctly. We have previously designed polyNER, a semi-automated system for efficient identification of scientific entities in text. PolyNER applies word embedding models to generate entity-rich corpora for productive expert labeling, and then uses the resulting labeled data to bootstrap a context-based classifier. PolyNER facilitates a labeling process that is otherwise tedious and expensive. Here, we use active learning to efficiently obtain more annotations from experts and improve performance. 
Our approach requires just five hours of expert time to achieve discrimination capacity comparable to that of a state-of-the-art chemical NER toolkit.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505687,R110815,Highlights,L365069,An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells.,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. 
Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R141401,Application of camelid heavy-chain variable domains (VHHs) in prevention and treatment of bacterial and viral infections,S565904,R141402,Mechanism of Antiviral Action,L397180,Binding to viral coat proteins or blocking interactions with cell-surface receptors,"ABSTRACT Camelid heavy-chain variable domains (VHHs) are the smallest, intact, antigen-binding units to occur in nature. VHHs possess high degrees of solubility and robustness enabling generation of multivalent constructs with increased avidity – characteristics that mark their superiority to other antibody fragments and monoclonal antibodies. Capable of effectively binding to molecular targets inaccessible to classical immunotherapeutic agents and easily produced in microbial culture, VHHs are considered promising tools for pharmaceutical biotechnology. With the aim to demonstrate the perspective and potential of VHHs for the development of prophylactic and therapeutic drugs to target diseases caused by bacterial and viral infections, this review article will initially describe the structural features that underlie the unique properties of VHHs and explain the methods currently used for the selection and recombinant production of pathogen-specific VHHs, and then thoroughly summarize the experimental findings of five distinct studies that employed VHHs as inhibitors of host–pathogen interactions or neutralizers of infectious agents. Past and recent studies suggest the potential of camelid heavy-chain variable domains as a novel modality of immunotherapeutic drugs and a promising alternative to monoclonal antibodies. VHHs demonstrate the ability to interfere with bacterial pathogenesis by preventing adhesion to host tissue and sequestering disease-causing bacterial toxins. To protect from viral infections, VHHs may be employed as inhibitors of viral entry by binding to viral coat proteins or blocking interactions with cell-surface receptors. 
The implementation of VHHs as immunotherapeutic agents for infectious diseases is of considerable potential and set to contribute to public health in the near future.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R144485,PLGA nanoparticles modified with a BBB-penetrating peptide co-delivering Aβ generation inhibitor and curcumin attenuate memory deficits and neuropathology in Alzheimer's disease mice,S578703,R144487,Surface functionalized with,L404926,Brain targeting peptide CRT (cyclic CRTIGPSVC peptide),"Alzheimer's disease (AD) is the most common form of dementia, characterized by the formation of extracellular senile plaques and neuronal loss caused by amyloid β (Aβ) aggregates in the brains of AD patients. Conventional strategies failed to treat AD in clinical trials, partly due to the poor solubility, low bioavailability and inability of the tested drugs to cross the blood-brain barrier (BBB). Moreover, AD is a complex, multifactorial neurodegenerative disease; one-target strategies may be insufficient to prevent the processes of AD. Here, we designed a novel kind of poly(lactide-co-glycolic acid) (PLGA) nanoparticles by loading them with the Aβ generation inhibitor S1 (PQVGHL peptide) and curcumin to target the detrimental factors in AD development and by conjugating them with the brain targeting peptide CRT (cyclic CRTIGPSVC peptide), an iron-mimic peptide that targets the transferrin receptor (TfR), to improve BBB penetration. The average particle sizes of the drug-loaded PLGA nanoparticles and CRT-conjugated PLGA nanoparticles were 128.6 nm and 139.8 nm, respectively. The results of the Y-maze and new object recognition tests demonstrated that our PLGA nanoparticles significantly improved spatial memory and recognition in transgenic AD mice. Moreover, the PLGA nanoparticles remarkably decreased the levels of Aβ, reactive oxygen species (ROS), TNF-α and IL-6, and enhanced the activities of superoxide dismutase (SOD) and synapse numbers in the AD mouse brains. 
Compared with other PLGA nanoparticles, the CRT peptide-modified PLGA nanoparticles co-delivering S1 and curcumin exhibited the most beneficial effect on the treatment of AD mice, suggesting that the conjugated CRT peptide and the encapsulated S1 and curcumin exerted their corresponding functions in the treatment.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505685,R110815,Highlights,L365067,Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control.,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. 
Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R141417,"Multiplex Paper-Based Colorimetric DNA Sensor Using Pyrrolidinyl Peptide Nucleic Acid-Induced AgNPs Aggregation for Detecting MERS-CoV, MTB, and HPV Oligonucleotides",S566062,R141418,has outcome,L397314,paper-based colorimetric assay for DNA detection,"The development of simple fluorescent and colorimetric assays that enable point-of-care DNA and RNA detection has been a topic of significant research because of the utility of such assays in resource limited settings. The most common motifs utilize hybridization to a complementary detection strand coupled with a sensitive reporter molecule. Here, a paper-based colorimetric assay for DNA detection based on pyrrolidinyl peptide nucleic acid (acpcPNA)-induced nanoparticle aggregation is reported as an alternative to traditional colorimetric approaches. PNA probes are an attractive alternative to DNA and RNA probes because they are chemically and biologically stable, easily synthesized, and hybridize efficiently with the complementary DNA strands. The acpcPNA probe contains a single positive charge from the lysine at C-terminus and causes aggregation of citrate anion-stabilized silver nanoparticles (AgNPs) in the absence of complementary DNA. In the presence of target DNA, formation of the anionic DNA-acpcPNA duplex results in dispersion of the AgNPs as a result of electrostatic repulsion, giving rise to a detectable color change. Factors affecting the sensitivity and selectivity of this assay were investigated, including ionic strength, AgNP concentration, PNA concentration, and DNA strand mismatches. The method was used for screening of synthetic Middle East respiratory syndrome coronavirus (MERS-CoV), Mycobacterium tuberculosis (MTB), and human papillomavirus (HPV) DNA based on a colorimetric paper-based analytical device developed using the aforementioned principle. 
The oligonucleotide targets were detected by measuring the color change of AgNPs, giving detection limits of 1.53 (MERS-CoV), 1.27 (MTB), and 1.03 nM (HPV). The acpcPNA probe exhibited high selectivity for the complementary oligonucleotides over single-base-mismatch, two-base-mismatch, and noncomplementary DNA targets. The proposed paper-based colorimetric DNA sensor has potential to be an alternative approach for simple, rapid, sensitive, and selective DNA detection.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R141413,Novel coronavirus-like particles targeting cells lining the respiratory tract,S566026,R141414,Mechanism of Antiviral Action,L397284,Selectively transduce cells expressing the ACE2 protein,"Virus like particles (VLPs) produced by the expression of viral structural proteins can serve as versatile nanovectors or potential vaccine candidates. In this study we describe for the first time the generation of HCoV-NL63 VLPs using baculovirus system. Major structural proteins of HCoV-NL63 have been expressed in tagged or native form, and their assembly to form VLPs was evaluated. Additionally, a novel procedure for chromatography purification of HCoV-NL63 VLPs was developed. Interestingly, we show that these nanoparticles may deliver cargo and selectively transduce cells expressing the ACE2 protein such as ciliated cells of the respiratory tract. Production of a specific delivery vector is a major challenge for research concerning targeting molecules. The obtained results show that HCoV-NL63 VLPs may be efficiently produced, purified, modified and serve as a delivery platform. This study constitutes an important basis for further development of a promising viral vector displaying narrow tissue tropism.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R141102,Oral delivery of anti-TNF antibody shielded by natural polyphenol-mediated supramolecular assembly for inflammatory bowel disease therapy,S563792,R141104,Highlights,L395602,"The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after treatment with the novel formulation of anti-TNF-α antibodies even reached a similar level to healthy controls","Rationale: Anti-tumor necrosis factor (TNF) therapy is a very effective way to treat inflammatory bowel disease. However, systemic exposure to anti-TNF-α antibodies through current clinical systemic administration can cause serious adverse effects in many patients. Here, we report a facilely prepared, self-assembled supramolecular nanoparticle based on the natural polyphenol tannic acid and a poly(ethylene glycol)-containing polymer for oral antibody delivery. Method: This supramolecular nanoparticle was fabricated within minutes in aqueous solution and easily scaled up to the gram level due to its pH-dependent reversible assembly. A DSS-induced colitis model was prepared to evaluate the inflammatory colon targeting ability and therapeutic efficacy of these antibody-loaded nanoparticles. Results: This polyphenol-based nanoparticle can be assembled in aqueous solution without organic solvent and thus scaled up easily. Oral administration of the antibody-loaded nanoparticles achieved high accumulation in the inflamed colon and low systemic exposure. The novel formulation of anti-TNF-α antibodies administered orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally. The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after treatment with the novel formulation of anti-TNF-α antibodies even reached a similar level to healthy controls. 
Conclusion: This polyphenol-based supramolecular nanoparticle is a promising platform for oral delivery of antibodies for the treatment of inflammatory bowel diseases, which may have promising clinical translation prospects.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R110242,"Comparison, synthesis and evaluation of anticancer drug-loaded polymeric nanoparticles on breast cancer cell lines",S502720,R110244,Highlights,L363329,The IC50 values of all drugs on T47D were lower than those on MCF7.,"Breast cancer is a major form of cancer, with a high mortality rate in women. It is crucial to develop more efficient and safe anticancer drugs. Recent developments in medical nanotechnology have resulted in novel advances in cancer drug delivery. Cisplatin, doxorubicin, and 5-fluorouracil are three important anti-cancer drugs which have poor water solubility. In this study, we used cisplatin, doxorubicin, and 5-fluorouracil-loaded polycaprolactone-polyethylene glycol (PCL-PEG) nanoparticles to improve the stability and solubility of the molecules in drug delivery systems. The nanoparticles were prepared by a double emulsion method and characterized with Fourier Transform Infrared (FTIR) spectroscopy and Hydrogen-1 nuclear magnetic resonance (1H NMR). Cells were treated with equal concentrations of cisplatin, doxorubicin and 5-fluorouracil-loaded PCL-PEG nanoparticles, and free cisplatin, doxorubicin and 5-fluorouracil. The 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide (MTT) assay confirmed that cisplatin, doxorubicin, and 5-fluorouracil-loaded PCL-PEG nanoparticles enhanced cytotoxicity and drug delivery in T47D and MCF7 breast cancer cells. The IC50 value of doxorubicin was lower than the IC50 values of both cisplatin and 5-fluorouracil, and the difference was statistically significant (p<0.05). However, the IC50 values of all drugs on T47D were lower than those on MCF7.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505704,R110815,Highlights,L365081,the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. 
Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R110813,Resveratrol loaded polymeric micelles for theranostic targeting of breast cancer cells,S505690,R110815,Highlights,L365072,the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. ,"Treatment of breast cancer underwent extensive progress in recent years with molecularly targeted therapies. However, non-specific pharmaceutical approaches (chemotherapy) persist, inducing severe side-effects. Phytochemicals provide a promising alternative for breast cancer prevention and treatment. Specifically, resveratrol (res) is a plant-derived polyphenolic phytoalexin with potent biological activity but displays poor water solubility, limiting its clinical use. Here we have developed a strategy for delivering res using a newly synthesized nano-carrier with the potential for both diagnosis and treatment. Methods: Res-loaded nanoparticles were synthesized by the emulsion method using Pluronic F127 block copolymer and Vitamin E-TPGS. Nanoparticle characterization was performed by SEM and tunable resistive pulse sensing. Encapsulation Efficiency (EE%) and Drug Loading (DL%) content were determined by analysis of the supernatant during synthesis. Nanoparticle uptake kinetics in breast cancer cell lines MCF-7 and MDA-MB-231 as well as in MCF-10A breast epithelial cells were evaluated by flow cytometry and the effects of res on cell viability via MTT assay. Results: Res-loaded nanoparticles with spherical shape and a dominant size of 179±22 nm were produced. Res was loaded with high EE of 73±0.9% and DL content of 6.2±0.1%. Flow cytometry revealed higher uptake efficiency in breast cancer cells compared to the control. An MTT assay showed that res-loaded nanoparticles reduced the viability of breast cancer cells with no effect on the control cells. Conclusions: These results demonstrate that the newly synthesized nanoparticle is a good model for the encapsulation of hydrophobic drugs. 
Additionally, the nanoparticle delivers a natural compound and is highly effective and selective against breast cancer cells rendering this type of nanoparticle an excellent candidate for diagnosis and therapy of difficult to treat mammary malignancies.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R141102,Oral delivery of anti-TNF antibody shielded by natural polyphenol-mediated supramolecular assembly for inflammatory bowel disease therapy,S563789,R141104,Highlights,L395599,The novel formulation of anti-TNF-α antibodies administered orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally,"Rationale: Anti-tumor necrosis factor (TNF) therapy is a very effective way to treat inflammatory bowel disease. However, systemic exposure to anti-TNF-α antibodies through current clinical systemic administration can cause serious adverse effects in many patients. Here, we report a facilely prepared, self-assembled supramolecular nanoparticle based on the natural polyphenol tannic acid and a poly(ethylene glycol)-containing polymer for oral antibody delivery. Method: This supramolecular nanoparticle was fabricated within minutes in aqueous solution and easily scaled up to the gram level due to its pH-dependent reversible assembly. A DSS-induced colitis model was prepared to evaluate the inflammatory colon targeting ability and therapeutic efficacy of these antibody-loaded nanoparticles. Results: This polyphenol-based nanoparticle can be assembled in aqueous solution without organic solvent and thus scaled up easily. Oral administration of the antibody-loaded nanoparticles achieved high accumulation in the inflamed colon and low systemic exposure. The novel formulation of anti-TNF-α antibodies administered orally achieved high efficacy in the treatment of colitis mice compared with free antibodies administered orally. The average weight, colon length, and inflammatory factors in colon and serum of colitis mice after treatment with the novel formulation of anti-TNF-α antibodies even reached a similar level to healthy controls. 
Conclusion: This polyphenol-based supramolecular nanoparticle is a promising platform for oral delivery of antibodies for the treatment of inflammatory bowel diseases, which may have promising clinical translation prospects.",TRUE,sentence
R67,Medicinal Chemistry and Pharmaceutics,R141407,A self-adjuvanted nanoparticle based vaccine against infectious bronchitis virus,S565962,R141408,Mechanism of Antiviral Action,L397229,The second heptad repeat (HR2) region of IBV spike proteins,"Infectious bronchitis virus (IBV) affects poultry respiratory, renal and reproductive systems. Currently the efficacy of available live attenuated or killed vaccines against IBV has been challenged. We designed a novel IBV vaccine alternative using a highly innovative platform called the Self-Assembling Protein Nanoparticle (SAPN). In this vaccine, B cell epitopes derived from the second heptad repeat (HR2) region of IBV spike proteins were repetitively presented in their native trimeric conformation. In addition, flagellin was co-displayed in the SAPN to achieve a self-adjuvanted effect. Three groups of chickens were immunized at four weeks of age with the vaccine prototype, IBV-Flagellin-SAPN, a negative-control construct Flagellin-SAPN, or a buffer control. The immunized chickens were challenged with 5x10^4.7 EID50 of the IBV M41 strain. High antibody responses were detected in chickens immunized with IBV-Flagellin-SAPN. In ex vivo proliferation tests, peripheral blood mononuclear cells (PBMCs) derived from IBV-Flagellin-SAPN immunized chickens had a significantly higher stimulation index than PBMCs from chickens receiving Flagellin-SAPN. Chickens immunized with IBV-Flagellin-SAPN had a significant reduction of tracheal virus shedding and lower tracheal lesion scores than negative control chickens. The data demonstrated that IBV-Flagellin-SAPN holds promise as a vaccine for IBV.",TRUE,sentence
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S588265,R146855,description,L409589,"a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles.","Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,sentence
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S588267,R146855,description,L409590,develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE.,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,sentence
R145261,Natural Language Processing,R162400,Overview of the gene ontology task at BioCreative IV,S647925,R162402,description,L442047,developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task).,"Gene Ontology (GO) annotation is a common task among model organism databases (MODs) for capturing gene function data from journal articles. It is a time-consuming and labor-intensive task, and is thus often considered as one of the bottlenecks in literature curation. There is a growing need for semiautomated or fully automated GO curation techniques that will help database curators to rapidly and accurately identify gene function information in full-length articles. Despite multiple attempts in the past, few studies have proven to be useful with regard to assisting real-world GO curation. The shortage of sentence-level training data and opportunities for interaction between text-mining developers and GO curators has limited the advances in algorithm development and corresponding use in practical circumstances. To this end, we organized a text-mining challenge task for literature-based GO annotation in BioCreative IV. More specifically, we developed two subtasks: (i) to automatically locate text passages that contain GO-relevant information (a text retrieval task) and (ii) to automatically identify relevant GO terms for the genes in a given article (a concept-recognition task). With the support from five MODs, we provided teams with >4000 unique text passages that served as the basis for each GO annotation in our task data. Such evidence text information has long been recognized as critical for text-mining algorithm development but was never made available because of the high cost of curation. In total, seven teams participated in the challenge task. 
From the team results, we conclude that the state of the art in automatically mining GO terms from literature has improved over the past decade while much progress is still needed for computer-assisted GO curation. Future work should focus on addressing remaining technical challenges for improved performance of automatic GO concept recognition and incorporating practical benefits of text-mining tools into real-world GO annotation. Database URL: http://www.biocreative.org/tasks/biocreative-iv/track-4-GO/.",TRUE,sentence
R145261,Natural Language Processing,R162920,GATE: an architecture for development of robust HLT applications,S649898,R162922,description,L442944,"framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated.","In this paper we present GATE, a framework and graphical development environment which enables users to develop and deploy language engineering components and resources in a robust fashion. The GATE architecture has enabled us not only to develop a number of successful applications for various language processing tasks (such as Information Extraction), but also to build and annotate corpora and carry out evaluations on the applications generated. The framework can be used to develop applications and resources in multiple languages, based on its thorough Unicode support.",TRUE,sentence
R145261,Natural Language Processing,R162924,HeidelTime: High Quality Rule-Based Extraction and Normalization of Temporal Expressions,S649915,R162926,description,L442956,HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization.,"In this paper, we describe HeidelTime, a system for the extraction and normalization of temporal expressions. HeidelTime is a rule-based system mainly using regular expression patterns for the extraction of temporal expressions and knowledge resources as well as linguistic clues for their normalization. In the TempEval-2 challenge, HeidelTime achieved the highest F-Score (86%) for the extraction and the best results in assigning the correct value attribute, i.e., in understanding the semantics of the temporal expressions.",TRUE,sentence
R145261,Natural Language Processing,R172664,End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF,S689025,R172666,Material,R172668,Penn Treebank WSJ corpus for part-of-speech (POS) tagging,"State-of-the-art sequence labeling systems traditionally require large amounts of task-specific knowledge in the form of hand-crafted features and data pre-processing. In this paper, we introduce a novel neutral network architecture that benefits from both word- and character-level representations automatically, by using combination of bidirectional LSTM, CNN and CRF. Our system is truly end-to-end, requiring no feature engineering or data pre-processing, thus making it applicable to a wide range of sequence labeling tasks. We evaluate our system on two data sets for two sequence labeling tasks --- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003 corpus for named entity recognition (NER). We obtain state-of-the-art performance on both the two data --- 97.55\% accuracy for POS tagging and 91.21\% F1 for NER.",TRUE,sentence
R145261,Natural Language Processing,R146081,Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers,S585550,R146083,description,L408278,"present a method for characterizing a research work in terms of its focus, domain of application, and techniques used.","We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article’s",TRUE,sentence
R145261,Natural Language Processing,R147657,Concept-based analysis of scientific literature,S592405,R147659,description,L412221,propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts.,"This paper studies the importance of identifying and categorizing scientific concepts as a way to achieve a deeper understanding of the research literature of a scientific community. To reach this goal, we propose an unsupervised bootstrapping algorithm for identifying and categorizing mentions of concepts. We then propose a new clustering algorithm that uses citations' context as a way to cluster the extracted mentions into coherent concepts. Our evaluation of the algorithms against gold standards shows significant improvement over state-of-the-art results. More importantly, we analyze the computational linguistic literature using the proposed algorithms and show four different ways to summarize and understand the research community which are difficult to obtain using existing techniques.",TRUE,sentence
R145261,Natural Language Processing,R164170,Coreference Resolution in Biomedical Texts: a Machine Learning Approach,S655539,R164172,Method,L445240,"proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training","Motivation: Coreference resolution, the process of identifying different mentions of an entity, is a very important component in a text-mining system. Compared with the work in news articles, the existing study of coreference resolution in biomedical texts is quite preliminary by only focusing on specific types of anaphors like pronouns or definite noun phrases, using heuristic methods, and running on small data sets. Therefore, there is a need for an in-depth exploration of this task in the biomedical domain. Results: In this article, we presented a learning-based approach to coreference resolution in the biomedical domain. We made three contributions in our study. Firstly, we annotated a large scale coreference corpus, MedCo, which consists of 1,999 medline abstracts in the GENIA data set. Secondly, we proposed a detailed framework for the coreference resolution task, in which we augmented the traditional learning model by incorporating non-anaphors into training. Lastly, we explored various sources of knowledge for coreference resolution, particularly, those that can deal with the complexity of biomedical texts. The evaluation on the MedCo corpus showed promising results. Our coreference resolution system achieved a high precision of 85.2% with a reasonable recall of 65.3%, obtaining an F-measure of 73.9%. The results also suggested that our augmented learning model significantly boosted precision (up to 24.0%) without much loss in recall (less than 5%), and brought a gain of over 8% in F-measure.",TRUE,sentence
R145261,Natural Language Processing,R146081,Analyzing the Dynamics of Research by Extracting Key Aspects of Scientific Papers,S585551,R146083,description,L408279,show how tracing these aspects over time provides a novel measure of the influence of research communities on each other.,"We present a method for characterizing a research work in terms of its focus, domain of application, and techniques used. We show how tracing these aspects over time provides a novel measure of the influence of research communities on each other. We extract these characteristics by matching semantic extraction patterns, learned using bootstrapping, to the dependency trees of sentences in an article’s",TRUE,sentence
R145261,Natural Language Processing,R166335,Overview of BioCreAtIvE task 1B: normalized gene lists,S662499,R166336,description,L448148,"The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse).","Abstract Background Our goal in BioCreAtIve has been to assess the state of the art in text mining, with emphasis on applications that reflect real biological applications, e.g., the curation process for model organism databases. This paper summarizes the BioCreAtIvE task 1B, the ""Normalized Gene List"" task, which was inspired by the gene list supplied for each curated paper in a model organism database. The task was to produce the correct list of unique gene identifiers for the genes and gene products mentioned in sets of abstracts from three model organisms (Yeast, Fly, and Mouse). Results Eight groups fielded systems for three data sets (Yeast, Fly, and Mouse). For Yeast, the top scoring system (out of 15) achieved 0.92 F-measure (harmonic mean of precision and recall); for Mouse and Fly, the task was more difficult, due to larger numbers of genes, more ambiguity in the gene naming conventions (particularly for Fly), and complex gene names (for Mouse). For Fly, the top F-measure was 0.82 out of 11 systems and for Mouse, it was 0.79 out of 16 systems. Conclusion This assessment demonstrates that multiple groups were able to perform a real biological task across a range of organisms. The performance was dependent on the organism, and specifically on the naming conventions associated with each organism. These results hold out promise that the technology can provide partial automation of the curation process in the near future.",TRUE,sentence
R145261,Natural Language Processing,R76157,SemEval-2020 Task 3: Graded Word Similarity in Context,S349176,R76159,description,L249472,"to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish","This paper presents the Graded Word Similarity in Context (GWSC) task which asked participants to predict the effects of context on human perception of similarity in English, Croatian, Slovene and Finnish. We received 15 submissions and 11 system description papers. A new dataset (CoSimLex) was created for evaluation in this task: it contains pairs of words, each annotated within two different contexts. Systems beat the baselines by significant margins, but few did well in more than one language or subtask. Almost every system employed a Transformer model, but with many variations in the details: WordNet sense embeddings, translation of contexts, TF-IDF weightings, and the automatic creation of datasets for fine-tuning were all used to good effect.",TRUE,sentence
R112130,Networking and Internet Architecture,R186234,PredictRoute: A Network Path Prediction Toolkit,S711966,R186236,Method,R186238,PredictRoute trains probabilistic models of routing towards prefixes on the Internet to predict network paths and their likelihood.,"Accurate prediction of network paths between arbitrary hosts on the Internet is of vital importance for network operators, cloud providers, and academic researchers. We present PredictRoute, a system that predicts network paths between hosts on the Internet using historical knowledge of the data and control plane. In addition to feeding on freely available traceroutes and BGP routing tables, PredictRoute optimally explores network paths towards chosen BGP prefixes. PredictRoute's strategy for exploring network paths discovers 4X more autonomous system (AS) hops than other well-known strategies used in practice today. Using a corpus of traceroutes, PredictRoute trains probabilistic models of routing towards prefixes on the Internet to predict network paths and their likelihood. PredictRoute's AS-path predictions differ from the measured path by at most 1 hop, 75% of the time. We expose PredictRoute's path prediction capability via a REST API to facilitate its inclusion in other applications and studies. We additionally demonstrate the utility of PredictRoute in improving real-world applications for circumventing Internet censorship and preserving anonymity online.",TRUE,sentence
R68,Pharmacology,R109548,EFFECT OF PIOGLITAZONE AND GEMFIBROZIL ADMINISTRATION ON C-REACTIVE PROTEIN LEVELS IN NON-DIABETIC HYPERLIPIDEMIC RATS,S499898,R109550,Material,R109557,Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH),"ABSTRACT OBJECTIVE: To assess and compare anti-inflammatory effect of pioglitazone and gemfibrozil by measuring C-reactive protein (CRP) levels in high fat fed non-diabetic rats. METHODS: A comparative animal study was conducted at the Post Graduate Medical Institute, Lahore, Pakistan in which 27 adult healthy male Sprague Dawley rats were used. The rats were divided into three groups. Hyperlipidemia was induced in all three groups by giving hyperlipidemic diet containing cholesterol 1.5%, coconut oil 8.0% and sodium cholate 1.0%. After four weeks, Group A (control) was given distilled water, Group B was given pioglitazone 10mg/kg body weight and Group C was given gemfibrozil 10mg/kg body weight as single morning dose by oral route for four weeks. CRP was estimated at zero, 4th and 8th week. RESULTS: There was significant increase in the level of CRP after giving high lipid diet from mean±SD of 2.59±0.28mg/L, 2.63±0.32mg/L and 2.67±0.23mg/L at 0 week to 3.55±0.44mg/L, 3.59±0.34mg/L and 3.6±0.32mg/L at 4th week in groups A, B and C respectively. Multiple comparisons by ANOVA revealed significant difference between groups at 8th week only. Post hoc analysis disclosed that CRP level was significantly low in pioglitazone treated group having mean±SD of 2.93±0.33mg/L compared to control group’s 4.42±0.30mg/L and gemfibrozil group’s 4.28±0.39mg/L. The p-value in each case was <0.001, while difference between control and gemfibrozil was not statistically significant. CONCLUSION: Pioglitazone is effective in reducing hyperlipidemia associated inflammation, evidenced by decreased CRP level while gemfibrozil is not effective. KEY WORDS: Pioglitazone (MeSH); Gemfibrozil (MeSH); Hyperlipidemia (MeSH); Anti-inflammatory (MeSH); C-reactive protein (MeSH).",TRUE,sentence
R138056,Planetary Sciences,R139661,Mineralogy of the MSL Curiosity landing site in Gale crater as observed by MRO/CRISM,S557922,R139662,Instrument,R108310,Compact Reconnaissance Imaging Spectrometer for Mars (CRISM),"Orbital data acquired by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and High Resolution Imaging Science Experiment instruments on the Mars Reconnaissance Orbiter (MRO) provide a synoptic view of compositional stratigraphy on the floor of Gale crater surrounding the area where the Mars Science Laboratory (MSL) Curiosity landed. Fractured, light‐toned material exhibits a 2.2 µm absorption consistent with enrichment in hydroxylated silica. This material may be distal sediment from the Peace Vallis fan, with cement and fracture fill containing the silica. This unit is overlain by more basaltic material, which has 1 µm and 2 µm absorptions due to pyroxene that are typical of Martian basaltic materials. Both materials are partially obscured by aeolian dust and basaltic sand. Dunes to the southeast exhibit differences in mafic mineral signatures, with barchan dunes enhanced in olivine relative to pyroxene‐containing longitudinal dunes. This compositional difference may be related to aeolian grain sorting.",TRUE,sentence
R138056,Planetary Sciences,R139664,New insights into gully formation on Mars: Constraints from composition as seen by MRO/CRISM,S557877,R139665,Instrument,R108310,Compact Reconnaissance Imaging Spectrometer for Mars (CRISM),"Over 100 Martian gully sites were analyzed using orbital data collected by the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) and High Resolution Imaging Science Experiment on the Mars Reconnaissance Orbiter (MRO). Most gullies are spectrally indistinct from their surroundings, due to mantling by dust. Where spectral information on gully sediments was obtained, a variety of mineralogies were identified. Their relationship to the source rock suggests that gully‐forming processes transported underlying material downslope. There is no evidence for specific compositions being more likely to be associated with gullies or with the formation of hydrated minerals in situ as a result of recent liquid water activity. Seasonal CO2 and H2O frosts were observed in gullies at middle to high latitudes, consistent with seasonal frost‐driven processes playing important roles in the evolution of gullies. Our results do not clearly indicate a role for long‐lived liquid water in gully formation and evolution.",TRUE,sentence
R343,Psychology,R44974,Corrective interpersonal experience in psychodrama group therapy: A comprehensive process analysis of significant therapeutic events,S138029,R44975,Qualitative findings ,L84391,Improvements in interpersonal functioning and sense of self ,"Abstract This study investigated the process of resolving painful emotional experience during psychodrama group therapy, by examining significant therapeutic events within seven psychodrama enactments. A comprehensive process analysis of four resolved and three not-resolved cases identified five meta-processes which were linked to in-session resolution. One was a readiness to engage in the therapeutic process, which was influenced by client characteristics and the client's experience of the group; and four were therapeutic events: (1) re-experiencing with insight; (2) activating resourcefulness; (3) social atom repair with emotional release; and (4) integration. A corrective interpersonal experience (social atom repair) healed the sense of fragmentation and interpersonal disconnection associated with unresolved emotional pain, and emotional release was therapeutically helpful when located within the enactment of this new role relationship. Protagonists who experienced resolution reported important improvements in interpersonal functioning and sense of self which they attributed to this experience.",TRUE,sentence
R339,Public Administration,R110440,How many could have been saved? Effects of social distancing on COVID-19,S503221,R110445,Results,L363613,Social distancing policies reduce the aggregated number of cases,"Abstract What is the effect of social distancing policies on the spread of the new coronavirus? Social distancing policies rose to prominence as most capable of containing contagion and saving lives. Our purpose in this paper is to identify the causal effect of social distancing policies on the number of confirmed cases of COVID-19 and on contagion velocity. We align our main argument with the existing scientific consensus: social distancing policies negatively affect the number of cases. To test this hypothesis, we construct a dataset with daily information on 78 affected countries in the world. We compute several relevant measures from publicly available information on the number of cases and deaths to estimate causal effects for short-term and cumulative effects of social distancing policies. We use a time-series cross-sectional matching approach to match countries’ observable histories. Causal effects (ATTs and ATEs) can be extracted via a dif-in-dif estimator. Results show that social distancing policies reduce the aggregated number of cases by 4,832 on average (or 17.5/100 thousand), but only when strict measures are adopted. This effect seems to manifest from the third week onwards.",TRUE,sentence
R11,Science,R108464,Proofs of Retrievability: Theory and Implementation,S494019,R108465,Custom1,L357924,"A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of F itself, they are an attractive building block for high-assurance remote storage systems.","A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of F itself, they are an attractive building block for high-assurance remote storage systems. In this paper, we propose a theoretical framework for the design of PORs. Our framework improves the previously proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the conceptual limitations of previous theoretical models for PORs. It supports a fully Byzantine adversarial model, carrying only the restriction---fundamental to all PORs---that the adversary's error rate be bounded when the client seeks to extract F. We propose a new variant on the Juels-Kaliski protocol and describe a prototype implementation. We demonstrate practical encoding even for files F whose size exceeds that of client main memory.",TRUE,sentence
R11,Science,R151210,"Design Principles of Integrated Information Platform for Emergency Responses: The Case of 2008 Beijing Olympic Games",S626357,R156054,paper:Study Type,L431078,"action research, participatory design, and situation-awareness oriented design","This paper investigates the challenges faced in designing an integrated information platform for emergency response management and uses the Beijing Olympic Games as a case study. The research methods are grounded in action research, participatory design, and situation-awareness oriented design. The completion of a more than two-year industrial secondment and six-month field studies ensured that a full understanding of user requirements had been obtained. A service-centered architecture was proposed to satisfy these user requirements. The proposed architecture consists mainly of information gathering, database management, and decision support services. The decision support services include situational overview, instant risk assessment, emergency response preplan, and disaster development prediction. Abstracting from the experience obtained while building this system, we outline a set of design principles in the general domain of information systems (IS) development for emergency management. These design principles form a contribution to the information systems literature because they provide guidance to developers who are aiming to support emergency response and the development of such systems that have not yet been adequately met by any existing types of IS. We are proud that the information platform developed was deployed in the real world and used in the 2008 Beijing",TRUE,sentence
R11,Science,R33280,Identifying the factors influencing the performance of reverse supply chains (RSC),S115335,R33281,Critical success factors,R33277,ease of use,"This paper aims to extract the factors influencing the performance of reverse supply chains (RSCs) based on the structure equation model (SEM). We first introduce the definition of RSC and describe its current status and follow this with a literature review of previous RSC studies and the technology acceptance model . We next develop our research model and 11 hypotheses and then use SEM to test our model and identify those factors that actually influence the success of RSC. Next, we use both questionnaire and web‐based methods to survey five companies which have RSC operation experience in China and Korea. Using the 168 responses, we used measurement modeling test and SEM to validate our proposed hypotheses. As a result, nine hypotheses were accepted while two were rejected. We found that ease of use, perceived usefulness, service quality, channel relationship and RSC cost were the five most important factors which influence the success of RSC. Finally, we conclude by highlighting our research contribution and propose future research.",TRUE,sentence
R11,Science,R153003,Mobile text alerts are an effective way of communicating emergency information to adolescents: Results from focus groups with 12- to 18-year-olds,S626772,R156108,RQ,L431439,Identify factors influencing how adolescents would respond to receiving emergency text messages,Mobile phone text messages can be used to disseminate information and advice to the public in disasters. We sought to identify factors influencing how adolescents would respond to receiving emergency text messages. Qualitative interviews were conducted with participants aged 12–18 years. Participants discussed scenarios relating to flooding and the discovery of an unexploded World War Two bomb and were shown example alerts that might be sent out in these circumstances. Intended compliance with the alerts was high. Participants noted that compliance would be more likely if: they were familiar with the system; the messages were sent by a trusted source; messages were reserved for serious incidents; multiple messages were sent; messages were kept short and formal.,TRUE,sentence
R11,Science,R25477,Model Free iPID Control for Glycemia Regulation of Type-1 Diabetes,S76497,R25478,Future work/challenges,L47746,improve the computation of the intelligent part of iPID,"Objective: The objective is to design a fully automated glycemia controller of Type-1 Diabetes (T1D) in both fasting and postprandial phases on a large number of virtual patients. Methods: A model-free intelligent proportional-integral-derivative (iPID) is used to infuse insulin. The feasibility of iPID is tested in silico on two simulators with and without measurement noise. The first simulator is derived from a long-term linear time-invariant model. The controller is also validated on the UVa/Padova metabolic simulator on 10 adults under 25 runs/subject for noise robustness test. Results: It was shown that without measurement noise, iPID mimicked the normal pancreatic secretion with a relatively fast reaction to meals as compared to a standard PID. With the UVa/Padova simulator, the robustness against CGM noise was tested. A higher percentage of time in target was obtained with iPID as compared to standard PID with reduced time spent in hyperglycemia. Conclusion: Two different T1D simulators tests showed that iPID detects meals and reacts faster to meal perturbations as compared to a classic PID. The intelligent part turns the controller to be more aggressive immediately after meals without neglecting safety. Further research is suggested to improve the computation of the intelligent part of iPID for such systems under actuator constraints. Any improvement can impact the overall performance of the model-free controller. Significance: The simple structure iPID is a step for PID-like controllers since it combines the classic PID nice properties with new adaptive features.",TRUE,sentence
R11,Science,R108515,A Novel Zero Knowledge Proof of Retrievability,S494399,R108516,Custom1,L358253,"Proof of retrievability is a cryptographic tool which interacts between the data user and the server, and the server proves to the data user the integrity of data which he will download. It is a crucial problem in outsourcing storage such as cloud computing. In this paper, a novel scheme called the zero knowledge proof of retrievability is proposed, which combines proof of retrievability and zero knowledge proof. It has lower computation and communication complexity and higher security than the previous schemes.","Proof of retrievability is a cryptographic tool which interacts between the data user and the server, and the server proves to the data user the integrity of data which he will download. It is a crucial problem in outsourcing storage such as cloud computing. In this paper, a novel scheme called the zero knowledge proof of retrievability is proposed, which combines proof of retrievability and zero knowledge proof. It has lower computation and communication complexity and higher security than the previous schemes.",TRUE,sentence
R11,Science,R108476,Publicly Verifiable Proofs of Data Replication and Retrievability for Cloud Storage,S494112,R108477,Custom1,L358005,"Proofs of Retrievability (PORs) permit a cloud provider to prove to the client (owner) that her files are correctly stored. Extensions of PORs, called Proofs of Retrievability and Reliability (PORRs), enable to check in a single instance that replicas of those files are correctly stored as well. In this paper, we propose a publicly verifiable PORR using Verifiable Delay Functions, which are special functions being slow to compute and easy to verify. We thus ensure that the cloud provider stores both original files and their replicas at rest, rather than computing the latter on the fly when requested to prove fair storage. Moreover, the storage verification can be done by anyone, not necessarily by the client. To our knowledge, this is the first PORR that offers public verification. Future work will include implementation and evaluation of our solution in a realistic cloud setting.","Proofs of Retrievability (PORs) permit a cloud provider to prove to the client (owner) that her files are correctly stored. Extensions of PORs, called Proofs of Retrievability and Reliability (PORRs), enable to check in a single instance that replicas of those files are correctly stored as well. In this paper, we propose a publicly verifiable PORR using Verifiable Delay Functions, which are special functions being slow to compute and easy to verify. We thus ensure that the cloud provider stores both original files and their replicas at rest, rather than computing the latter on the fly when requested to prove fair storage. Moreover, the storage verification can be done by anyone, not necessarily by the client. To our knowledge, this is the first PORR that offers public verification. Future work will include implementation and evaluation of our solution in a realistic cloud setting.",TRUE,sentence
R11,Science,R108494,Dynamic Outsourced Proofs of Retrievability Enabling Auditing Migration for Remote Storage Security,S494232,R108495,Custom1,L358107,"Remote data auditing service is important for mobile clients to guarantee the intactness of their outsourced data stored at cloud side. To relieve mobile client from the nonnegligible burden incurred by performing the frequent data auditing, more and more literatures propose that the execution of such data auditing should be migrated from mobile client to third-party auditor (TPA). However, existing public auditing schemes always assume that TPA is reliable, which is the potential risk for outsourced data security. Although Outsourced Proofs of Retrievability (OPOR) have been proposed to further protect against the malicious TPA and collusion among any two entities, the original OPOR scheme applies only to the static data, which is the limitation that should be solved for enabling data dynamics. In this paper, we design a novel authenticated data structure called bv23Tree, which enables client to batch-verify the indices and values of any number of appointed leaves all at once for efficiency. By utilizing bv23Tree and a hierarchical storage structure, we present the first solution for Dynamic OPOR (DOPOR), which extends the OPOR model to support dynamic updates of the outsourced data. Extensive security and performance analyses show the reliability and effectiveness of our proposed scheme.","Remote data auditing service is important for mobile clients to guarantee the intactness of their outsourced data stored at cloud side. To relieve mobile client from the nonnegligible burden incurred by performing the frequent data auditing, more and more literatures propose that the execution of such data auditing should be migrated from mobile client to third-party auditor (TPA). However, existing public auditing schemes always assume that TPA is reliable, which is the potential risk for outsourced data security. Although Outsourced Proofs of Retrievability (OPOR) have been proposed to further protect against the malicious TPA and collusion among any two entities, the original OPOR scheme applies only to the static data, which is the limitation that should be solved for enabling data dynamics. In this paper, we design a novel authenticated data structure called bv23Tree, which enables client to batch-verify the indices and values of any number of appointed leaves all at once for efficiency. By utilizing bv23Tree and a hierarchical storage structure, we present the first solution for Dynamic OPOR (DOPOR), which extends the OPOR model to support dynamic updates of the outsourced data. Extensive security and performance analyses show the reliability and effectiveness of our proposed scheme.",TRUE,sentence
R11,Science,R34231,West African Single Currency and Competitiveness,S118992,R34232,Justification/ recommendation,L71882,Simulations show little support for a dominant peg,"This paper compares different nominal anchors to promote internal and external competitiveness in the case of a fixed exchange rate regime for the future single regional currency of the Economic Community of the West African States (ECOWAS). We use counterfactual analyses and estimate a model of dependent economy for small commodity exporting countries. We consider four foreign anchor currencies: the US dollar, the euro, the yen and the yuan. Our simulations show little support for a dominant peg in the ECOWAS area if they pursue several goals: maximizing the export revenues, minimizing their variability, stabilizing them and minimizing the real exchange rate misalignments from the fundamental value.",TRUE,sentence
R11,Science,R27295,Relaxation of shot peening induced compressive stress during fatigue of notched steel samples,S88022,R27296,Special Notes,R27293,Smooth and notched,"Abstract This paper presents an experimental investigation of the surface residual stress relaxation behaviour of a shot peened 0.4% carbon low alloy steel under fatigue loading. A round specimen with a circumferential notch and a notch factor Kt = 1.75 was fatigue loaded in both shot peened and ground conditions. Loading conditions included axial fatigue with stress ratio R = −1 and R = 0 and also R = −1 with an additional peak overload applied at 10^6 cycles. Plain unnotched shot peened specimens were also fatigue loaded with stress ratio R = −1. The results show how the relaxation is dependent on load level, how the peak load changes the surface residual stress state, and that relaxation of the smooth and notched conditions is similar. Two different shot peening conditions were used, one with Almen intensity of 30–35 A (mm/100) and another of 50–55 A (mm/100).",TRUE,sentence
R11,Science,R27297,Relaxation of Shot Peening Induced Compressive Stress During Fatigue of Notched Steel Samples,S88035,R27298,Special Notes,R27293,Smooth and notched,"Abstract This paper presents an experimental investigation of the surface residual stress relaxation behaviour of a shot peened 0.4% carbon low alloy steel under fatigue loading. A round specimen with a circumferential notch and a notch factor Kt = 1.75 was fatigue loaded in both shot peened and ground conditions. Loading conditions included axial fatigue with stress ratio R = −1 and R = 0 and also R = −1 with an additional peak overload applied at 10^6 cycles. Plain unnotched shot peened specimens were also fatigue loaded with stress ratio R = −1. The results show how the relaxation is dependent on load level, how the peak load changes the surface residual stress state, and that relaxation of the smooth and notched conditions is similar. Two different shot peening conditions were used, one with Almen intensity of 30–35 A (mm/100) and another of 50–55 A (mm/100).",TRUE,sentence
R11,Science,R151238,Social Media and Emergency Management: Exploring State and Local Tweets,S626457,R156068,approach,L431164,social media strategies employed by governments to respond to major weather-related events.,"Social media for emergency management has emerged as a vital resource for government agencies across the globe. In this study, we explore social media strategies employed by governments to respond to major weather-related events. Using social media monitoring software, we analyze how social media is used in six cities following storms in the winter of 2012. We listen, monitor, and assess online discourse available on the full range of social media outlets (e.g., Twitter, Facebook, blogs). To glean further insight, we conduct a survey and extract themes from citizen comments and government's response. We conclude with recommendations on how practitioners can develop social media strategies that enable citizen participation in emergency management.",TRUE,sentence
R11,Science,R33348,Critical success factors for B2B e‐commerce use within the UK NHS pharmaceutical supply chain,S115464,R33349,Critical success factors,R33347,world wide web – assurance and empathy,"Purpose – The purpose of this paper is to determine those factors perceived by users to influence the successful on‐going use of e‐commerce systems in business‐to‐business (B2B) buying and selling transactions through examination of the views of individuals acting in both purchasing and selling roles within the UK National Health Service (NHS) pharmaceutical supply chain.Design/methodology/approach – Literature from the fields of operations and supply chain management (SCM) and information systems (IS) is used to determine candidate factors that might influence the success of the use of e‐commerce. A questionnaire based on these is used for primary data collection in the UK NHS pharmaceutical supply chain. Factor analysis is used to analyse the data.Findings – The paper yields five composite factors that are perceived by users to influence successful e‐commerce use. “System quality,” “information quality,” “management and use,” “world wide web – assurance and empathy,” and “trust” are proposed as potentia...",TRUE,sentence
R373,Science and Technology Studies,R5223,"Self-citation is the hallmark of productive authors, of any gender",S5789,R5230,Process,R5255,disciplinary under-specialization,"It was recently reported that men self-cite >50% more often than women across a wide variety of disciplines in the bibliographic database JSTOR. Here, we replicate this finding in a sample of 1.6 million papers from Author-ity, a version of PubMed with computationally disambiguated author names. More importantly, we show that the gender effect largely disappears when accounting for prior publication count in a multidimensional statistical model. Gender has the weakest effect on the probability of self-citation among an extensive set of features tested, including byline position, affiliation, ethnicity, collaboration size, time lag, subject-matter novelty, reference/citation counts, publication type, language, and venue. We find that self-citation is the hallmark of productive authors, of any gender, who cite their novel journal publications early and in similar venues, and more often cross citation-barriers such as language and indexing. As a result, papers by authors with short, disrupted, or diverse careers miss out on the initial boost in visibility gained from self-citations. Our data further suggest that this disproportionately affects women because of attrition and not because of disciplinary under-specialization.",TRUE,sentence
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338069,R71567,Material,R71573,bright solid state light emitting diodes (LEDs),"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,sentence
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338106,R71590,Material,R71594,"CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–]","Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. 
Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,sentence
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338103,R71590,Material,R71591,Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites,"Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. 
Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,sentence
R259,Semiconductor and Optical Materials,R71582,Dismantling the “Red Wall” of Colloidal Perovskites: Highly Luminescent Formamidinium and Formamidinium–Cesium Lead Iodide Nanocrystals,S338120,R71590,Material,R71608,"red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca.","Colloidal nanocrystals (NCs) of APbX3-type lead halide perovskites [A = Cs+, CH3NH3+ (methylammonium or MA+) or CH(NH2)2+ (formamidinium or FA+); X = Cl–, Br–, I–] have recently emerged as highly versatile photonic sources for applications ranging from simple photoluminescence down-conversion (e.g., for display backlighting) to light-emitting diodes. From the perspective of spectral coverage, a formidable challenge facing the use of these materials is how to obtain stable emissions in the red and infrared spectral regions covered by the iodide-based compositions. So far, red-emissive CsPbI3 NCs have been shown to suffer from a delayed phase transformation into a nonluminescent, wide-band-gap 1D polymorph, and MAPbI3 exhibits very limited chemical durability. In this work, we report a facile colloidal synthesis method for obtaining FAPbI3 and FA-doped CsPbI3 NCs that are uniform in size (10–15 nm) and nearly cubic in shape and exhibit drastically higher robustness than their MA- or Cs-only cousins with similar sizes and morphologies. Detailed structural analysis indicated that the FAPbI3 NCs had a cubic crystal structure, while the FA0.1Cs0.9PbI3 NCs had a 3D orthorhombic structure that was isostructural to the structure of CsPbBr3 NCs. Bright photoluminescence (PL) with high quantum yield (QY > 70%) spanning red (690 nm, FA0.1Cs0.9PbI3 NCs) and near-infrared (near-IR, ca. 780 nm, FAPbI3 NCs) regions was sustained for several months or more in both the colloidal state and in films. The peak PL wavelengths can be fine-tuned by using postsynthetic cation- and anion-exchange reactions. 
Amplified spontaneous emissions with low thresholds of 28 and 7.5 μJ cm–2 were obtained from the films deposited from FA0.1Cs0.9PbI3 and FAPbI3 NCs, respectively. Furthermore, light-emitting diodes with a high external quantum efficiency of 2.3% were obtained by using FAPbI3 NCs.",TRUE,sentence
R259,Semiconductor and Optical Materials,R71565,Bright Visible-Infrared Light Emitting Diodes Based on Hybrid Halide Perovskite with Spiro-OMeTAD as a Hole-Injecting Layer,S338070,R71567,Material,R71574,solution-processed hybrid lead halide perovskite (Pe),"Hybrid halide perovskites that are currently intensively studied for photovoltaic applications, also present outstanding properties for light emission. Here, we report on the preparation of bright solid state light emitting diodes (LEDs) based on a solution-processed hybrid lead halide perovskite (Pe). In particular, we have utilized the perovskite generally described with the formula CH3NH3PbI(3-x)Cl(x) and exploited a configuration without electron or hole blocking layer in addition to the injecting layers. Compact TiO2 and Spiro-OMeTAD were used as electron and hole injecting layers, respectively. We have demonstrated a bright combined visible-infrared radiance of 7.1 W·sr(-1)·m(-2) at a current density of 232 mA·cm(-2), and a maximum external quantum efficiency (EQE) of 0.48%. The devices prepared surpass the EQE values achieved in previous reports, considering devices with just an injecting layer without any additional blocking layer. Significantly, the maximum EQE value of our devices is obtained at applied voltages as low as 2 V, with a turn-on voltage as low as the Pe band gap (V(turn-on) = 1.45 ± 0.06 V). This outstanding performance, despite the simplicity of the approach, highlights the enormous potentiality of Pe-LEDs. In addition, we present a stability study of unsealed Pe-LEDs, which demonstrates a dramatic influence of the measurement atmosphere on the performance of the devices. The decrease of the electroluminescence (EL) under continuous operation can be attributed to an increase of the non-radiative recombination pathways, rather than a degradation of the perovskite material itself.",TRUE,sentence
R353,Social Psychology,R76575,The gender gap in mental well-being during the Covid-19 outbreak: evidence from the UK,S352197,R76576,Indicator for well-being,R77149,Mental well-being,"We document a decline in mental well-being after the onset of the Covid-19 pandemic in the UK. This decline is twice as large for women as for men. We seek to explain this gender gap by exploring gender differences in: family and caring responsibilities; financial and work situation; social engagement; health situation, and health behaviours, including exercise. Differences in family and caring responsibilities play some role, but the bulk of the gap is explained by social factors. Women reported more close friends before the pandemic than men, and increased loneliness after the pandemic's onset. Other factors are similarly distributed across genders and so play little role. Finally, we document larger declines in well-being for the young, of both genders, than the old.",TRUE,sentence
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5406,R4928,Data,R4936,electroencephalography (EEG) and eye gaze data,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,sentence
R75,Systems and Integrative Physiology,R4918,"Pilot Study to Estimate ""Difficult"" Area in e-Learning Material by Physiological Measurements",S5404,R4928,Material,R4934,various materials and learners with different backgrounds,"To improve designs of e-learning materials, it is necessary to know which word or figure a learner felt ""difficult"" in the materials. In this pilot study, we measured electroencephalography (EEG) and eye gaze data of learners and analyzed to estimate which area they had difficulty to learn. The developed system realized simultaneous measurements of physiological data and subjective evaluations during learning. Using this system, we observed specific EEG activity in difficult pages. Integrating of eye gaze and EEG measurements raised a possibility to determine where a learner felt ""difficult"" in a page of learning materials. From these results, we could suggest that the multimodal measurements of EEG and eye gaze would lead to effective improvement of learning materials. For future study, more data collection using various materials and learners with different backgrounds is necessary. This study could lead to establishing a method to improve e-learning materials based on learners' mental states.",TRUE,sentence
R369,"Theory, Knowledge and Science",R76770,Knowledge Graphs in Manufacturing and Production: A Systematic Literature Review,S350480,R76772,Has result,L250138," knowledge fusion is currently the main use case for knowledge graphs,","Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output. This makes knowledge graphs attractive for companies to reach Industry 4.0 goals. However, existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied in the field of manufacturing and production is needed. Therefore, we have conducted a systematic literature review as an attempt to characterize the state-of-the-art in this field, i.e., by identifying existing research and by identifying gaps and opportunities for further research. We have focused on finding the primary studies in the existing literature, which were classified and analyzed according to four criteria: bibliometric key facts, research type facets, knowledge graph characteristics, and application scenarios. Besides, an evaluation of the primary studies has also been carried out to gain deeper insights in terms of methodology, empirical evidence, and relevance. As a result, we can offer a complete picture of the domain, which includes such interesting aspects as the fact that knowledge fusion is currently the main use case for knowledge graphs, that empirical research and industrial application are still missing to a large extent, that graph embeddings are not fully exploited, and that technical literature is fast-growing but still seems to be far from its peak.",TRUE,sentence
R369,"Theory, Knowledge and Science",R75081,"A Survey on Knowledge Graphs: Representation, Acquisition and Applications",S344470,R75083,Has method,L247602," knowledge graph completion, embedding methods, path inference, and logical rule reasoning","Human knowledge provides a formal understanding of the world. Knowledge graphs that represent structural relations between entities have become an increasingly popular research direction toward cognition and human-level intelligence. In this survey, we provide a comprehensive review of the knowledge graph covering overall research topics about: 1) knowledge graph representation learning; 2) knowledge acquisition and completion; 3) temporal knowledge graph; and 4) knowledge-aware applications and summarize recent breakthroughs and perspective directions to facilitate future research. We propose a full-view categorization and new taxonomies on these topics. Knowledge graph embedding is organized from four aspects of representation space, scoring function, encoding models, and auxiliary information. For knowledge acquisition, especially knowledge graph completion, embedding methods, path inference, and logical rule reasoning are reviewed. We further explore several emerging topics, including metarelational learning, commonsense reasoning, and temporal knowledge graphs. To facilitate future research on knowledge graphs, we also provide a curated collection of data sets and open-source libraries on different tasks. In the end, we have a thorough outlook on several promising research directions.",TRUE,sentence
R369,"Theory, Knowledge and Science",R76762,Virtual Knowledge Graphs: An Overview of Systems and Use Cases.,S350514,R76764,Has method,L250161,VKG paradigm replaces the rigid structure of tables with the flexibility of graphs,"In this paper, we present the virtual knowledge graph (VKG) paradigm for data integration and access, also known in the literature as Ontology-based Data Access. Instead of structuring the integration layer as a collection of relational tables, the VKG paradigm replaces the rigid structure of tables with the flexibility of graphs that are kept virtual and embed domain knowledge. We explain the main notions of this paradigm, its tooling ecosystem and significant use cases in a wide range of applications. Finally, we discuss future research directions.",TRUE,sentence
R57,Virology,R36149,Analysis of the epidemic growth of the early 2019-nCoV outbreak using internationally confirmed cases,S123893,R36150,location,R36148,"Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan","Background: On January 23, 2020, a quarantine was imposed on travel in and out of Wuhan, where the 2019 novel coronavirus (2019-nCoV) outbreak originated from. Previous analyses estimated the basic epidemiological parameters using symptom onset dates of the confirmed cases in Wuhan and outside China. Methods: We obtained information on the 46 coronavirus cases who traveled from Wuhan before January 23 and have been subsequently confirmed in Hong Kong, Japan, Korea, Macau, Singapore, and Taiwan as of February 5, 2020. Most cases have detailed travel history and disease progress. Compared to previous analyses, an important distinction is that we used this data to informatively simulate the infection time of each case using the symptom onset time, previously reported incubation interval, and travel history. We then fitted a simple exponential growth model with adjustment for the January 23 travel ban to the distribution of the simulated infection time. We used a Bayesian analysis with diffuse priors to quantify the uncertainty of the estimated epidemiological parameters. We performed sensitivity analysis to different choices of incubation interval and the hyperparameters in the prior specification. Results: We found that our model provides good fit to the distribution of the infection time. Assuming the travel rate to the selected countries and regions is constant over the study period, we found that the epidemic was doubling in size every 2.9 days (95% credible interval [CrI], 2 days--4.1 days). Using previously reported serial interval for 2019-nCoV, the estimated basic reproduction number is 5.7 (95% CrI, 3.4--9.2). 
The estimates did not change substantially if we assumed the travel rate doubled in the last 3 days before January 23, when we used previously reported incubation interval for severe acute respiratory syndrome (SARS), or when we changed the hyperparameters in our prior specification. Conclusions: Our estimated epidemiological parameters are higher than an earlier report using confirmed cases in Wuhan. This indicates the 2019-nCoV could have been spreading faster than previous estimates.",TRUE,sentence
R370,"Work, Economy and Organizations",R4234,An investigation of skill requirements for business and data analytics positions: A content analysis of job advertisements,S4352,R4241,Has result,R4249,clear definitions with respect to required skills for job categories in the business and data analytics domain,"Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.",TRUE,sentence
R370,"Work, Economy and Organizations",R4347,An investigation of skill requirements for business and data analytics positions: A content analysis of job advertisements,S4527,R4354,Has result,R4360,clear definitions with respect to required skills for job categories in the business and data analytics domain,"Abstract Presently, analytics degree programs exhibit a growing trend to meet a strong market demand. To explore the skill sets required for analytics positions, the authors examined a sample of online job postings related to professions such as business analyst (BA), business intelligence analyst (BIA), data analyst (DA), and data scientist (DS) using content analysis. They present a ranked list of relevant skills belonging to specific skills categories for the studied positions. Also, they conducted a pairwise comparison between DA and DS as well as BA and BIA. Overall, the authors observed that decision making, organization, communication, and structured data management are key to all job categories. The analysis shows that technical skills like statistics and programming skills are in most demand for DAs. The analysis is useful for creating clear definitions with respect to required skills for job categories in the business and data analytics domain and for designing course curricula for this domain.",TRUE,sentence
R370,"Work, Economy and Organizations",R4622,Skill Requirements in Big Data: A Content Analysis of Job Advertisements,S5024,R4629,Has result,L3371,conceptual framework of big data skills categories,"ABSTRACT The technology behind big data, although still in its nascent stages, is inspiring many companies to hire data scientists and explore the potential of big data to support strategic initiatives, including developing new products and services. To better understand the skills and knowledge that are highly valued by industry for jobs within big data, this study reports on an analysis of 1216 job advertisements that contained “big data” in the job title. Our results are presented within a conceptual framework of big data skills categories and confirm the multi-faceted nature of big data job skills. Our research also found that many big data job advertisements emphasize developing analytical information systems and that soft skills remain highly valued, in addition to the value placed on emerging hard technological skills.",TRUE,sentence
R370,"Work, Economy and Organizations",R4290,Analyzing Computer Programming Job Trend Using Web Data Mining,S4414,R4294,result,L3107,conclusion about the trends in the job market,"Today’s rapid changing and competitive environment requires educators to stay abreast of the job market in order to prepare their students for the jobs being demanded. This is more relevant about Information Technology (IT) jobs than others. However, to stay abreast of the market job demands require retrieving, sifting and analyzing large volume of data in order to understand the trends of the job market. Traditional methods of data collection and analysis are not sufficient for this kind of analysis due to the large volume of job data that is generated through the web and elsewhere. Luckily, the field of data mining has emerged to collect and sift through such large data volumes. However, even with data mining, appropriate data collection techniques and analysis need to be followed in order to correctly understand the trend. This paper illustrates our experience with employing mining techniques to understand the trend in IT Technology jobs. Data was collect using data mining techniques over a number of years from an online job agency. The data was then analyzed to reach a conclusion about the trends in the job market. Our experience in this regard along with literature review of the relevant topics is illustrated in this paper.",TRUE,sentence
R370,"Work, Economy and Organizations",R4603,Skill Needs for Early Career Researchers—A Text Mining Approach,S4992,R4610,Has result,L3360,"data handling and processing skills are essential for early career researchers, irrespective of their research field","Research and development activities are one of the main drivers for progress, economic growth and wellbeing in many societies. This article proposes a text mining approach applied to a large amount of data extracted from job vacancies advertisements, aiming to shed light on the main skills and demands that characterize first stage research positions in Europe. Results show that data handling and processing skills are essential for early career researchers, irrespective of their research field. Also, as many analyzed first stage research positions are connected to universities, they include teaching activities to a great extent. Management of time, risks, projects, and resources plays an important part in the job requirements included in the analyzed advertisements. Such information is relevant not only for early career researchers who perform job selection taking into account the match of possessed skills with the required ones, but also for educational institutions that are responsible for skills development of the future R&D professionals.",TRUE,sentence
,electrical engineering,R145522,Remarkable Improvement in Foldability of Poly‐Si Thin‐Film Transistor on Polyimide Substrate Using Blue Laser Crystallization of Amorphous Si and Comparison with Conventional Poly‐Si Thin‐Film Transistor Used for Foldable Displays,S582956,R145526,Film deposition method,L407164,Blue laser annealing (BLA) of amorphous silicon ,"Highly robust poly‐Si thin‐film transistor (TFT) on polyimide (PI) substrate using blue laser annealing (BLA) of amorphous silicon (a‐Si) for lateral crystallization is demonstrated. Its foldability is compared with the conventional excimer laser annealing (ELA) poly‐Si TFT on PI used for foldable displays exhibiting field‐effect mobility of 85 cm2 (V s)−1. The BLA poly‐Si TFT on PI exhibits the field‐effect mobility, threshold voltage (VTH), and subthreshold swing of 153 cm2 (V s)−1, −2.7 V, and 0.2 V dec−1, respectively. Most important finding is the excellent foldability of BLA TFT compared with the ELA poly‐Si TFTs on PI substrates. The VTH shift of BLA poly‐Si TFT is ≈0.1 V, which is much smaller than that (≈2 V) of ELA TFT on PI upon 30 000 cycle folding. The defects are generated at the grain boundary region of ELA poly‐Si during folding. However, BLA poly‐Si has no protrusion in the poly‐Si channel and thus no defect generation during folding. This leads to excellent foldability of BLA poly‐Si on PI substrate.",TRUE,sentence
,electrical engineering,R145549,Solution-processed high-performance p-channel copper tin sulfide thin-film transistors,S582829,R145550,Material,L407067,Copper tin sulfide (CTS) thin film,"We introduce a solution-processed copper tin sulfide (CTS) thin film to realize high-performance of thin-film transistors (TFT) by optimizing the CTS precursor solution concentration.",TRUE,sentence
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S346639,R75787,Online competition,L248299,https://competitions.codalab.org/competitions/21691,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,url
R133,Artificial Intelligence,R76056,SemEval-2020 Task 4: Commonsense Validation and Explanation,S348174,R76058,Data repositories,L249084,https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation,"In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons. Specifically, in our first subtask, the participating systems are required to choose from two natural language statements of similar wording the one that makes sense and the one that does not. The second subtask additionally asks a system to select the key reason from three options why a given statement does not make sense. In the third subtask, a participating system needs to generate the reason automatically. 39 teams submitted their valid systems to at least one subtask. For Subtask A and Subtask B, top-performing teams have achieved results close to human performance. However, for Subtask C, there is still a considerable gap between system and human performance. The dataset used in our task can be found at https://github.com/wangcunxiang/SemEval2020-Task4-Commonsense-Validation-and-Explanation.",TRUE,url
R133,Artificial Intelligence,R75785,SemEval-2020 Task 5: Counterfactual Recognition,S346638,R75787,Data repositories,L248298,https://zenodo.org/record/3932442,"We present a counterfactual recognition (CR) task, the shared Task 5 of SemEval-2020. Counterfactuals describe potential outcomes (consequents) produced by actions or circumstances that did not happen or cannot happen and are counter to the facts (antecedent). Counterfactual thinking is an important characteristic of the human cognitive system; it connects antecedents and consequent with causal relations. Our task provides a benchmark for counterfactual recognition in natural language with two subtasks. Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not. Subtask-2 requires the participating systems to extract the antecedent and consequent in a given counterfactual statement. During the SemEval-2020 official evaluation period, we received 27 submissions to Subtask-1 and 11 to Subtask-2. Our data and baseline code are made publicly available at https://zenodo.org/record/3932442. The task website and leaderboard can be found at https://competitions.codalab.org/competitions/21691.",TRUE,url
R104,Bioinformatics,R150537,LINNAEUS: A species name identification system for biomedical literature,S603595,R150539,url,L417835,http://linnaeus.sourceforge.net/,"Abstract Background The task of recognizing and identifying species names in biomedical literature has recently been regarded as critical for a number of applications in text and data mining, including gene name recognition, species-specific document retrieval, and semantic enrichment of biomedical articles. Results In this paper we describe an open-source species name recognition and normalization software system, LINNAEUS, and evaluate its performance relative to several automatically generated biomedical corpora, as well as a novel corpus of full-text documents manually annotated for species mentions. LINNAEUS uses a dictionary-based approach (implemented as an efficient deterministic finite-state automaton) to identify species names and a set of heuristics to resolve ambiguous mentions. When compared against our manually annotated corpus, LINNAEUS performs with 94% recall and 97% precision at the mention level, and 98% recall and 90% precision at the document level. Our system successfully solves the problem of disambiguating uncertain species mentions, with 97% of all mentions in PubMed Central full-text documents resolved to unambiguous NCBI taxonomy identifiers. Conclusions LINNAEUS is an open source, stand-alone software system capable of recognizing and normalizing species name mentions with speed and accuracy, and can therefore be integrated into a range of bioinformatics and text-mining applications. The software and manually annotated corpus can be downloaded freely at http://linnaeus.sourceforge.net/.",TRUE,url
R104,Bioinformatics,R148043,MedPost: a part-of-speech tagger for bioMedical text,S593689,R148045,url,L412850,ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz,"SUMMARY We present a part-of-speech tagger that achieves over 97% accuracy on MEDLINE citations. AVAILABILITY Software, documentation and a corpus of 5700 manually tagged sentences are available at ftp://ftp.ncbi.nlm.nih.gov/pub/lsmith/MedPost/medpost.tar.gz",TRUE,url
R322,Computational Linguistics,R148039,GENETAG: a tagged corpus for gene/protein named entity recognition,S593678,R148041,url,L412842,ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz,"Abstract Background Named entity recognition (NER) is an important first step for text mining the biomedical literature. Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus. The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE ® sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition. Results To ensure heterogeneity of the corpus, MEDLINE sentences were first scored for term similarity to documents with known gene names, and 10K high- and 10K low-scoring sentences were chosen at random. The original 20K sentences were run through a gene/protein name tagger, and the results were modified manually to reflect a wide definition of gene/protein names subject to a specificity constraint, a rule that required the tagged entities to refer to specific entities. Each sentence in GENETAG was annotated with acceptable alternatives to the gene/protein names it contained, allowing for partial matching with semantic constraints. Semantic constraints are rules requiring the tagged entity to contain its true meaning in the sentence context. Application of these constraints results in a more meaningful measure of the performance of an NER system than unrestricted partial matching. Conclusion The annotation of GENETAG required intricate manual judgments by annotators which hindered tagging consistency. The data were pre-segmented into words, to provide indices supporting comparison of system responses to the ""gold standard"". However, character-based indices would have been more robust than word-based indices. 
GENETAG Train, Test and Round1 data and ancillary programs are freely available at ftp://ftp.ncbi.nlm.nih.gov/pub/tanabe/GENETAG.tar.gz. A newer version, GENETAG-05, will be released later this year.",TRUE,url
R132,Computer Sciences,R129411,SciBERT: A Pretrained Language Model for Scientific Text,S514891,R129459,has source code,L369341,https://github.com/allenai/scibert,"Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.",TRUE,url
R132,Computer Sciences,R129725,Unsupervised Statistical Machine Translation,S515826,R129739,has source code,L369699,https://github.com/artetxem/monoses,"While modern machine translation has relied on large parallel corpora, a recent line of work has managed to train Neural Machine Translation (NMT) systems from monolingual corpora only (Artetxe et al., 2018c; Lample et al., 2018). Despite the potential of this approach for low-resource settings, existing systems are far behind their supervised counterparts, limiting their practical interest. In this paper, we propose an alternative approach based on phrase-based Statistical Machine Translation (SMT) that significantly closes the gap with supervised systems. Our method profits from the modular architecture of SMT: we first induce a phrase table from monolingual corpora through cross-lingual embedding mappings, combine it with an n-gram language model, and fine-tune hyperparameters through an unsupervised MERT variant. In addition, iterative backtranslation improves results further, yielding, for instance, 14.08 and 26.22 BLEU points in WMT 2014 English-German and English-French, respectively, an improvement of more than 7-10 BLEU points over previous unsupervised systems, and closing the gap with supervised SMT (Moses trained on Europarl) down to 2-5 BLEU points. Our implementation is available at https://github.com/artetxem/monoses.",TRUE,url
R132,Computer Sciences,R129585,"Entity, Relation, and Event Extraction with Contextualized Span Representations",S515329,R129591,has source code,L369508,https://github.com/dwadden/dygiepp,"We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called DyGIE++) accomplishes all tasks by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) context. Our framework achieves state-of-the-art results across all tasks, on four datasets from a variety of domains. We perform experiments comparing different techniques to construct span representations. Contextualized embeddings like BERT perform well at capturing relationships among entities in the same or adjacent sentences, while dynamic span graph updates model long-range cross-sentence relationships. For instance, propagating span representations via predicted coreference links can enable the model to disambiguate challenging entity mentions. Our code is publicly available at https://github.com/dwadden/dygiepp and can be easily adapted for new tasks or datasets.",TRUE,url
R132,Computer Sciences,R131694,Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond,S523468,R131695,has source code,L373292,https://github.com/facebookresearch/LASER,"We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared byte-pair encoding vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI data set), cross-lingual document classification (MLDoc data set), and parallel corpus mining (BUCC data set) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder, and the multilingual test set are available at https://github.com/facebookresearch/LASER.",TRUE,url
R132,Computer Sciences,R131153,Message Passing Attention Networks for Document Understanding,S521881,R131154,has source code,L372793,https://github.com/giannisnik/mpad,"Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad.",TRUE,url
R132,Computer Sciences,R131303,Neural Architecture Transfer,S523052,R131554,has source code,L373160,https://github.com/human-analysis/neural-architecture-transfer,"Neural architecture search (NAS) has emerged as a promising avenue for automatically designing task-specific neural networks. Existing NAS approaches require one complete search for each deployment specification of hardware or objective. This is a computationally impractical endeavor given the potentially large number of application scenarios. In this paper, we propose Neural Architecture Transfer (NAT) to overcome this limitation. NAT is designed to efficiently generate task-specific custom models that are competitive under multiple conflicting objectives. To realize this goal we learn task-specific supernets from which specialized subnets can be sampled without any additional training. The key to our approach is an integrated online transfer learning and many-objective evolutionary search procedure. A pre-trained supernet is iteratively adapted while simultaneously searching for task-specific subnets. We demonstrate the efficacy of NAT on 11 benchmark image classification tasks ranging from large-scale multi-class to small-scale fine-grained datasets. In all cases, including ImageNet, NATNets improve upon the state-of-the-art under mobile settings ($\leq$ 600M Multiply-Adds). Surprisingly, small-scale fine-grained datasets benefit the most from NAT. At the same time, the architecture search and transfer is orders of magnitude more efficient than existing NAS methods. Overall, experimental evaluation indicates that, across diverse image classification tasks and computational objectives, NAT is an appreciably more effective alternative to conventional transfer learning of fine-tuning weights of an existing network architecture learned on standard datasets. Code is available at https://github.com/human-analysis/neural-architecture-transfer.",TRUE,url
R132,Computer Sciences,R130930,Deep Equilibrium Models,S520810,R130942,has source code,L372172,https://github.com/locuslab/deq,"We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation. Using this approach, training and prediction in these networks require only constant memory, regardless of the effective “depth” of the network. We demonstrate how DEQs can be applied to two state-of-the-art deep sequence models: self-attention transformers and trellis networks. On large-scale language modeling tasks, such as the WikiText-103 benchmark, we show that DEQs 1) often improve performance over these state-of-the-art models (for similar parameter counts); 2) have similar computational requirements to existing models; and 3) vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up-to 88% memory reduction in our experiments. The code is available at https://github.com/locuslab/deq.",TRUE,url
R132,Computer Sciences,R131730,BioSentVec: creating sentence embeddings for biomedical texts,S523628,R131746,has source code,L373347,https://github.com/ncbi-nlp/BioSentVec,"Sentence embeddings have become an essential part of today’s natural language processing (NLP) systems, especially together with advanced deep learning methods. Although pre-trained sentence encoders are available in the general domain, none exists for biomedical texts to date. In this work, we introduce BioSentVec: the first open set of sentence embeddings trained with over 30 million documents from both scholarly articles in PubMed and clinical notes in the MIMICIII Clinical Database. We evaluate BioSentVec embeddings in two sentence pair similarity tasks in different biomedical text genres. Our benchmarking results demonstrate that the BioSentVec embeddings can better capture sentence semantics compared to the other competitive alternatives and achieve state-of-the-art performance in both tasks. We expect BioSentVec to facilitate the research and development in biomedical text mining and to complement the existing resources in biomedical word embeddings. The embeddings are publicly available at https://github.com/ncbi-nlp/BioSentVec.",TRUE,url
R132,Computer Sciences,R131755,Knowledge Graph Embedding with Atrous Convolution and Residual Learning,S523670,R131756,has source code,L373369,https://github.com/neukg/AcrE,"Knowledge graph embedding is an important task and it will benefit lots of downstream applications. Currently, deep neural networks based methods achieve state-of-the-art performance. However, most of these existing methods are very complex and need much time for training and inference. To address this issue, we propose a simple but effective atrous convolution based knowledge graph embedding method. Compared with existing state-of-the-art methods, our method has following main characteristics. First, it effectively increases feature interactions by using atrous convolutions. Second, to address the original information forgotten issue and vanishing/exploding gradient issue, it uses the residual learning method. Third, it has simpler structure but much higher parameter efficiency. We evaluate our method on six benchmark datasets with different evaluation metrics. Extensive experiments show that our model is very effective. On these diverse datasets, it achieves better results than the compared state-of-the-art methods on most of evaluation metrics. The source codes of our model could be found at https://github.com/neukg/AcrE.",TRUE,url
R132,Computer Sciences,R129948,Piano Skills Assessment,S516558,R129955,has source code,L369987,https://github.com/ParitoshParmar/Piano-Skills-Assessment,"Can a computer determine a piano player’s skill level? Is it preferable to base this assessment on visual analysis of the player’s performance or should we trust our ears over our eyes? Since current convolutional neural networks (CNNs) have difficulty processing long videos, how can shorter clips be sampled to best reflect the player’s skill level? In this work, we collect and release a first-of-its-kind dataset for multimodal skill assessment focusing on assessing piano player’s skill level, answer the asked questions, initiate work in automated evaluation of piano playing skills and provide baselines for future work. Dataset can be accessed from: https://github.com/ParitoshParmar/Piano-Skills-Assessment.",TRUE,url
R145261,Natural Language Processing,R162561,Overview of the NLM-Chem BioCreative VII track: Full-text Chemical Identification and Indexing in PubMed articles,S687041,R172126,Dataset download url,L462743,https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/,"The BioCreative NLM-Chem track calls for a community effort to fine-tune automated recognition of chemical names in biomedical literature. Chemical names are one of the most searched biomedical entities in PubMed and – as highlighted during the COVID-19 pandemic – their identification may significantly advance research in multiple biomedical subfields. While previous community challenges focused on identifying chemical names mentioned in titles and abstracts, the full text contains valuable additional detail. We organized the BioCreative NLM-Chem track to call for a community effort to address automated chemical entity recognition in full-text articles. The track consisted of two tasks: 1) Chemical Identification task, and 2) Chemical Indexing prediction task. For the Chemical Identification task, participants were expected to predict with high accuracy all chemicals mentioned in recently published full-text articles, both span (i.e., named entity recognition) and normalization (i.e., entity linking) using MeSH. For the Chemical Indexing task, participants identified which chemicals should be indexed as topics for the article's topic terms in the NLM article and indexing, i.e., appear in the listing of MeSH terms for the document. This manuscript summarizes the BioCreative NLM-Chem track. We received a total of 88 submissions from 17 teams worldwide. The highest performance achieved for the Chemical Identification task was 0.8672 f-score (0.8759 precision, 0.8587 recall) for strict NER performance and 0.8136 f-score (0.8621 precision, 0.7702 recall) for strict normalization performance. 
The highest performance achieved for the Chemical Indexing task was 0.4825 f-score (0.4397 precision, 0.5344 recall). The NLM-Chem track dataset and other challenge materials are publicly available at https://ftp.ncbi.nlm.nih.gov/pub/lu/BC7-NLM-Chem-track/. This community challenge demonstrated 1) the current substantial achievements in deep learning technologies can be utilized to further improve automated prediction accuracy, and 2) the Chemical Indexing task is substantially more challenging. We look forward to further development of biomedical text mining methods to respond to the rapid growth of biomedical literature. Keywords— biomedical text mining; natural language processing; artificial intelligence; machine learning; deep learning; text mining; chemical entity recognition; chemical indexing",TRUE,url
R145261,Natural Language Processing,R146853,SciREX: A Challenge Dataset for Document-Level Information Extraction,S587995,R146855,url,L409447,https://github.com/allenai/SciREX,"Extracting information from full documents is an important problem in many domains, but most previous work focus on identifying relationships within a sentence or a paragraph. It is challenging to create a large-scale information extraction (IE) dataset at the document level since it requires an understanding of the whole document to annotate entities and their document-level relationships that usually span beyond sentences or even sections. In this paper, we introduce SciREX, a document level IE dataset that encompasses multiple IE tasks, including salient entity identification and document level N-ary relation identification from scientific articles. We annotate our dataset by integrating automatic and human annotations, leveraging existing scientific knowledge resources. We develop a neural model as a strong baseline that extends previous state-of-the-art IE models to document-level IE. Analyzing the model performance shows a significant gap between human performance and current baselines, inviting the community to use our dataset as a challenge to develop document-level IE models. Our data and code are publicly available at https://github.com/allenai/SciREX .",TRUE,url
R133,Artificial Intelligence,R182238,"Food Recognition: A New Dataset, Experiments, and Results",S704937,R182240,Number of images,L475597,1027,"We propose a new dataset for the evaluation of food recognition algorithms that can be used in dietary monitoring applications. Each image depicts a real canteen tray with dishes and foods arranged in different ways. Each tray contains multiple instances of food classes. The dataset contains 1027 canteen trays for a total of 3616 food instances belonging to 73 food classes. The food on the tray images has been manually segmented using carefully drawn polygonal boundaries. We have benchmarked the dataset by designing an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts for each region the corresponding food class. We have experimented with three different classification strategies using also several visual descriptors. We achieve about 79% of food and tray recognition accuracy using convolutional-neural-networks-based features. The dataset, as well as the benchmark framework, are available to the research community.",TRUE,year/date
R133,Artificial Intelligence,R69297,BioCreative V CDR task corpus: a resource for chemical disease relation extraction,S328958,R69298,number of papers,L239681,1500,"Community-run, formal evaluations and manually annotated text corpora are critically important for advancing biomedical text-mining research. Recently in BioCreative V, a new challenge was organized for the tasks of disease named entity recognition (DNER) and chemical-induced disease (CID) relation extraction. Given the nature of both tasks, a test collection is required to contain both disease/chemical annotations and relation annotations in the same set of articles. Despite previous efforts in biomedical corpus construction, none was found to be sufficient for the task. Thus, we developed our own corpus called BC5CDR during the challenge by inviting a team of Medical Subject Headings (MeSH) indexers for disease/chemical entity annotation and Comparative Toxicogenomics Database (CTD) curators for CID relation annotation. To ensure high annotation quality and productivity, detailed annotation guidelines and automatic annotation tools were provided. The resulting BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Each entity annotation includes both the mention text spans and normalized concept identifiers, using MeSH as the controlled vocabulary. To ensure accuracy, the entities were first captured independently by two annotators followed by a consensus annotation: The average inter-annotator agreement (IAA) scores were 87.49% and 96.05% for the disease and chemicals, respectively, in the test set according to the Jaccard similarity coefficient. Our corpus was successfully used for the BioCreative V challenge tasks and should serve as a valuable resource for the text-mining research community. Database URL: http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/",TRUE,year/date
R375,Arts and Humanities,R51006,Are the FAIR Data Principles fair?,S537871,R135913,has publication year,L379012,2017,"This practice paper describes an ongoing research project to test the effectiveness and relevance of the FAIR Data Principles. Simultaneously, it will analyse how easy it is for data archives to adhere to the principles. The research took place from November 2016 to January 2017, and will be underpinned with feedback from the repositories. The FAIR Data Principles feature 15 facets corresponding to the four letters of FAIR - Findable, Accessible, Interoperable, Reusable. These principles have already gained traction within the research world. The European Commission has recently expanded its demand for research to produce open data. The relevant guidelines are explicitly written in the context of the FAIR Data Principles. Given an increasing number of researchers will have exposure to the guidelines, understanding their viability and suggesting where there may be room for modification and adjustment is of vital importance. This practice paper is connected to a dataset (Dunning et al., 2017) containing the original overview of the sample group statistics and graphs, in an Excel spreadsheet. Over the course of two months, the web-interfaces, help-pages and metadata-records of over 40 data repositories have been examined, to score the individual data repository against the FAIR principles and facets. The traffic-light rating system enables colour-coding according to compliance and vagueness. The statistical analysis provides overall, categorised, on the principles focussing, and on the facet focussing results. The analysis includes the statistical and descriptive evaluation, followed by elaborations on Elements of the FAIR Data Principles, the subject specific or repository specific differences, and subsequently what repositories can do to improve their information architecture.",TRUE,year/date
R169,Climate,R48367,Linking sea level rise and socioeconomic indicators under the Shared Socioeconomic Pathways,S694605,R175313,has end of period,L467072,2050,"In order to assess future sea level rise and its societal impacts, we need to study climate change pathways combined with different scenarios of socioeconomic development. Here, we present Sea Level Rise (SLR) projections for the Shared Socioeconomic Pathway (SSP) storylines and different year-2100 radiative Forcing Targets (FTs). Future SLR is estimated with a comprehensive SLR emulator that accounts for Antarctic rapid discharge from hydrofracturing and ice cliff instability. Across all baseline scenario realizations (no dedicated climate mitigation), we find 2100 median SLR relative to 1986-2005 of 89 cm (likely range: 57 to 130 cm) for SSP1, 105 cm (73 to 150 cm) for SSP2, 105 cm (75 to 147 cm) for SSP3, 93 cm (63 to 133 cm) for SSP4, and 132 cm (95 to 189 cm) for SSP5. The 2100 sea level responses for combined SSP-FT scenarios are dominated by the mitigation targets and yield median estimates of 52 cm (34 to 75 cm) for FT 2.6 Wm-2, 62 cm (40 to 96 cm) for FT 3.4 Wm-2, 75 cm (47 to 113 cm) for FT 4.5 Wm-2, and 91 cm (61 to 132 cm) for FT 6.0 Wm-2. Average 2081-2100 annual SLR rates are 5 mm yr-1 and 19 mm yr-1 for FT 2.6 Wm-2 and the baseline scenarios, respectively. Our model setup allows linking scenario-specific emission and socioeconomic indicators to projected SLR. We find that 2100 median SSP SLR projections could be limited to around 50 cm if 2050 cumulative CO2 emissions since pre-industrial stay below 850 GtC, with a global coal phase-out nearly completed by that time. For SSP mitigation scenarios, a 2050 carbon price of 100 US$2005 tCO2-1 would correspond to a median 2100 SLR of around 65 cm. Our results confirm that rapid and early emission reductions are essential for limiting 2100 SLR.",TRUE,year/date
R322,Computational Linguistics,R163865,Part-of-Speech Annotation of Biology Research Abstracts,S654298,R163867,number of papers,L444492,2000,"A part-of-speech (POS) tagged corpus was built on research abstracts in biomedical domain with the Penn Treebank scheme. As consistent annotation was difficult without domain-specific knowledge we made use of the existing term annotation of the GENIA corpus. A list of frequent terms annotated in the GENIA corpus was compiled and the POS of each constituent of those terms were determined with assistance from domain specialists. The POS of the terms in the list are pre-assigned, then a tagger assigns POS to remaining words preserving the pre-assigned POS, whose results are corrected by human annotators. We also modified the PTB scheme slightly. An inter-annotator agreement tested on 50 new abstracts was 98.5%. A POS tagger trained with the annotated abstracts was tested against a gold-standard set made from the interannotator agreement. The untrained tagger had the accuracy of 83.0%. Trained with 2000 annotated abstracts the accuracy rose to 98.2%. The 2000 annotated abstracts are publicly available.",TRUE,year/date
R231,Computer and Systems Architecture,R175447,Motion synthesis and editing in low-dimensional spaces,S695167,R175449,paper: publication_year,L467321,2006,"Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has added increased realism in character animation. In order to make the synthesis and analysis of motion data tractable, we present a low‐dimensional motion space in which high‐dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: The user can sketch and edit a curve on low‐dimensional motion space, directly manipulate the character's pose in three‐dimensional object space, or specify key poses to create in‐between motions. Copyright © 2006 John Wiley & Sons, Ltd.",TRUE,year/date
R417,Cultural History,R139736,Public History and Contested Heritage: Archival Memories of the Bombing of Italy,S557910,R139743,has start of period,R139752,1939,"This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.",TRUE,year/date
R417,Cultural History,R139736,Public History and Contested Heritage: Archival Memories of the Bombing of Italy,S557903,R139743,has end of period,R139746,1945,"This article presents a case study of a collaborative public history project between participants in two countries, the United Kingdom and Italy. Its subject matter is the bombing war in Europe, 1939-1945, which is remembered and commemorated in very different ways in these two countries: the sensitivities involved thus constitute not only a case of public history conducted at the national level but also one involving contested heritage. An account of the ways in which public history has developed in the UK and Italy is presented. This is followed by an explanation of how the bombing war has been remembered in each country. In the UK, veterans of RAF Bomber Command have long felt a sense of neglect, largely because the deliberate targeting of civilians has not fitted comfortably into the dominant victor narrative. In Italy, recollections of being bombed have remained profoundly dissonant within the received liberation discourse. The International Bomber Command Centre Digital Archive (or Archive) is then described as a case study that employs a public history approach, focusing on various aspects of its inclusive ethos, intended to preserve multiple perspectives. The Italian component of the project is highlighted, problematising the digitisation of contested heritage within the broader context of twentieth-century history. Reflections on the use of digital archiving practices and working in partnership are offered, as well as a brief account of user analytics of the Archive through its first eighteen months online.",TRUE,year/date
R417,Cultural History,R139800,A systematic review of literature on contested heritage,S558019,R139803,Has endpoint,R139806,2020,"ABSTRACT Contested heritage has increasingly been studied by scholars over the last two decades in multiple disciplines, however, there is still limited knowledge about what contested heritage is and how it is realized in society. Therefore, the purpose of this paper is to produce a systematic literature review on this topic to provide a holistic understanding of contested heritage, and delineate its current state, trends and gaps. Methodologically, four electronic databases were searched, and 102 journal articles published before 2020 were extracted. A content analysis of each article was then conducted to identify key themes and variables for classification. Findings show that while its research often lacks theoretical underpinnings, contested heritage is marked by its diversity and complexity as it becomes a global issue for both tourism and urbanization. By presenting a holistic understanding of contested heritage, this review offers an extensive investigation of the topic area to help move literature pertaining contested heritage forward.",TRUE,year/date
R234,Digital Communications and Networking,R108539,Case study: using MOOCs for conventional college coursework,S494550,R108541,MOOC Period,L358343,2013,"In Spring 2013 San José State University (SJSU) launched SJSU Plus: three college courses required for most students to graduate, which used massive open online course provider Udacity’s platform, attracting over 15,000 students. Retention and success (pass/fail) and online support were tested using an augmented online learning environment (AOLE) on a subset of 213 students; about one-half matriculated. SJSU faculty created the course content, collaborating with Udacity to develop video instruction, quizzes, and interactive elements. Course log-ins and progression data were combined with surveys and focus groups, with students, faculty, support staff, coordinators, and program leaders as subjects. Logit models used contingency table-tested potential success predictors on all students and five subgroups. Student effort was the strongest success indicator, suggesting criticality of early and consistent student engagement. No statistically significant relationships with student characteristics were found. AOLE support effectiveness was compromised with staff time consumed by the least prepared students.",TRUE,year/date
R142,Earth Sciences,R144024,Raman spectroscopy of the borosilicate mineral ferroaxinite,S576454,R144026,BO stretching vibrations,L403637,1025,"Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the ν4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the ν2 bending modes of the (SiO4)2-.",TRUE,year/date
R142,Earth Sciences,R144024,Raman spectroscopy of the borosilicate mineral ferroaxinite,S576455,R144026,BO stretching vibrations,L403638,1056,"Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the ν4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the ν2 bending modes of the (SiO4)2-.",TRUE,year/date
R142,Earth Sciences,R144024,Raman spectroscopy of the borosilicate mineral ferroaxinite,S576456,R144026,BO stretching vibrations,L403639,1082,"Raman spectroscopy, complemented by infrared spectroscopy has been used to characterise the ferroaxinite minerals of theoretical formula Ca2Fe2+Al2BSi4O15(OH), a ferrous aluminium borosilicate. The Raman spectra are complex but are subdivided into sections based upon the vibrating units. The Raman spectra are interpreted in terms of the addition of borate and silicate spectra. Three characteristic bands of ferroaxinite are observed at 1082, 1056 and 1025 cm-1 and are attributed to BO4 stretching vibrations. Bands at 1003, 991, 980 and 963 cm-1 are assigned to SiO4 stretching vibrations. Bands are found in these positions for each of the ferroaxinites studied. No Raman bands were found above 1100 cm-1 showing that ferroaxinites contain only tetrahedral boron. The hydroxyl stretching region of ferroaxinites is characterised by a single Raman band between 3368 and 3376 cm-1, the position of which is sample dependent. Bands for ferroaxinite at 678, 643, 618, 609, 588, 572, 546 cm-1 may be attributed to the ν4 bending modes and the three bands at 484, 444 and 428 cm-1 may be attributed to the ν2 bending modes of the (SiO4)2-.",TRUE,year/date
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R142471,DNA barcoding of Northern Nearctic Muscidae (Diptera) reveals high correspondence between morphological and molecular species limits,S624779,R155793,No. of samples (sequences),L429934,1114,"Abstract Background Various methods have been proposed to assign unknown specimens to known species using their DNA barcodes, while others have focused on using genetic divergence thresholds to estimate “species” diversity for a taxon, without a well-developed taxonomy and/or an extensive reference library of DNA barcodes. The major goals of the present work were to: a) conduct the largest species-level barcoding study of the Muscidae to date and characterize the range of genetic divergence values in the northern Nearctic fauna; b) evaluate the correspondence between morphospecies and barcode groupings defined using both clustering-based and threshold-based approaches; and c) use the reference library produced to address taxonomic issues. Results Our data set included 1114 individuals and their COI sequences (951 from Churchill, Manitoba), representing 160 morphologically-determined species from 25 genera, covering 89% of the known fauna of Churchill and 23% of the Nearctic fauna. Following an iterative process through which all specimens belonging to taxa with anomalous divergence values and/or monophyly issues were re-examined, identity was modified for 9 taxa, including the reinstatement of Phaonia luteva (Walker) stat. nov. as a species distinct from Phaonia errans (Meigen). In the post-reassessment data set, no distinct gap was found between maximum pairwise intraspecific distances (range 0.00-3.01%) and minimum interspecific distances (range: 0.77-11.33%). Nevertheless, using a clustering-based approach, all individuals within 98% of species grouped with their conspecifics with high (>95%) bootstrap support; in contrast, a maximum species discrimination rate of 90% was obtained at the optimal threshold of 1.2%. DNA barcoding enabled the determination of females from 5 ambiguous species pairs and confirmed that 16 morphospecies were genetically distinct from named taxa. There were morphological differences among all distinct genetic clusters; thus, no cases of cryptic species were detected. Conclusions Our findings reveal the great utility of building a well-populated, species-level reference barcode database against which to compare unknowns. When such a library is unavailable, it is still possible to obtain a fairly accurate (within ~10%) rapid assessment of species richness based upon a barcode divergence threshold alone, but this approach is most accurate when the threshold is tuned to a particular taxon.",TRUE,year/date
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R135750,Characterization and comparison of poorly known moth communities through DNA barcoding in two Afrotropical environments in Gabon,S537020,R135752,higher number estimated species,L378517,1385,"Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.",TRUE,year/date
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R135750,Characterization and comparison of poorly known moth communities through DNA barcoding in two Afrotropical environments in Gabon,S537026,R135752,No. of estimated species,L378519,1385,"Biodiversity research in tropical ecosystems-popularized as the most biodiverse habitats on Earth-often neglects invertebrates, yet invertebrates represent the bulk of local species richness. Insect communities in particular remain strongly impeded by both Linnaean and Wallacean shortfalls, and identifying species often remains a formidable challenge inhibiting the use of these organisms as indicators for ecological and conservation studies. Here we use DNA barcoding as an alternative to the traditional taxonomic approach for characterizing and comparing the diversity of moth communities in two different ecosystems in Gabon. Though sampling remains very incomplete, as evidenced by the high proportion (59%) of species represented by singletons, our results reveal an outstanding diversity. With about 3500 specimens sequenced and representing 1385 BINs (Barcode Index Numbers, used as a proxy to species) in 23 families, the diversity of moths in the two sites sampled is higher than the current number of species listed for the entire country, highlighting the huge gap in biodiversity knowledge for this country. Both seasonal and spatial turnovers are strikingly high (18.3% of BINs shared between seasons, and 13.3% between sites) and draw attention to the need to account for these when running regional surveys. Our results also highlight the richness and singularity of savannah environments and emphasize the status of Central African ecosystems as hotspots of biodiversity.",TRUE,year/date
R136127,"Ecology and Biodiversity of Animals and Ecosystems, Organismic Interactions",R145304,Analyzing Mosquito (Diptera: Culicidae) Diversity in Pakistan by DNA Barcoding,S624628,R155763,No. of samples (sequences),L429857,1684,"Background Although they are important disease vectors mosquito biodiversity in Pakistan is poorly known. Recent epidemics of dengue fever have revealed the need for more detailed understanding of the diversity and distributions of mosquito species in this region. DNA barcoding improves the accuracy of mosquito inventories because morphological differences between many species are subtle, leading to misidentifications. Methodology/Principal Findings Sequence variation in the barcode region of the mitochondrial COI gene was used to identify mosquito species, reveal genetic diversity, and map the distribution of the dengue-vector species in Pakistan. Analysis of 1684 mosquitoes from 491 sites in Punjab and Khyber Pakhtunkhwa during 2010–2013 revealed 32 species with the assemblage dominated by Culex quinquefasciatus (61% of the collection). The genus Aedes (Stegomyia) comprised 15% of the specimens, and was represented by six taxa with the two dengue vector species, Ae. albopictus and Ae. aegypti, dominant and broadly distributed. Anopheles made up another 6% of the catch with An. subpictus dominating. Barcode sequence divergence in conspecific specimens ranged from 0–2.4%, while congeneric species showed from 2.3–17.8% divergence. A global haplotype analysis of disease-vectors showed the presence of multiple haplotypes, although a single haplotype of each dengue-vector species was dominant in most countries. Geographic distribution of Ae. aegypti and Ae. albopictus showed the later species was dominant and found in both rural and urban environments. Conclusions As the first DNA-based analysis of mosquitoes in Pakistan, this study has begun the construction of a barcode reference library for the mosquitoes of this region. Levels of genetic diversity varied among species. Because of its capacity to differentiate species, even those with subtle morphological differences, DNA barcoding aids accurate tracking of vector populations.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57223,Distributions of exotic plants in eastern Asia and North America,S195180,R57224,Number of species,L122411,1567,"Although some plant traits have been linked to invasion success, the possible effects of regional factors, such as diversity, habitat suitability, and human activity are not well understood. Each of these mechanisms predicts a different pattern of distribution at the regional scale. Thus, where climate and soils are similar, predictions based on regional hypotheses for invasion success can be tested by comparisons of distributions in the source and receiving regions. Here, we analyse the native and alien geographic ranges of all 1567 plant species that have been introduced between eastern Asia and North America or have been introduced to both regions from elsewhere. The results reveal correlations between the spread of exotics and both the native species richness and transportation networks of recipient regions. This suggests that both species interactions and human-aided dispersal influence exotic distributions, although further work on the relative importance of these processes is needed.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57035,Marketing time predicts naturalization of horticultural plants,S192840,R57036,Number of species,L120660,1903,"Horticulture is an important source of naturalized plants, but our knowledge about naturalization frequencies and potential patterns of naturalization in horticultural plants is limited. We analyzed a unique set of data derived from the detailed sales catalogs (1887-1930) of the most important early Florida, USA, plant nursery (Royal Palm Nursery) to detect naturalization patterns of these horticultural plants in the state. Of the 1903 nonnative species sold by the nursery, 15% naturalized. The probability of plants becoming naturalized increases significantly with the number of years the plants were marketed. Plants that became invasive and naturalized were sold for an average of 19.6 and 14.8 years, respectively, compared to 6.8 years for non-naturalized plants, and the naturalization of plants sold for 30 years or more is 70%. Unexpectedly, plants that were sold earlier were less likely to naturalize than those sold later. The nursery's inexperience, which caused them to grow and market many plants unsuited to Florida during their early period, may account for this pattern. Plants with pantropical distributions and those native to both Africa and Asia were more likely to naturalize (42%), than were plants native to other smaller regions, suggesting that plants with large native ranges were more likely to naturalize. Naturalization percentages also differed according to plant life form, with the most naturalization occurring in aquatic herbs (36.8%) and vines (30.8%). Plants belonging to the families Araceae, Apocynaceae, Convolvulaceae, Moraceae, Oleaceae, and Verbenaceae had higher than expected naturalization. Information theoretic model selection indicated that the number of years a plant was sold, alone or together with the first year a plant was sold, was the strongest predictor of naturalization. Because continued importation and marketing of nonnative horticultural plants will lead to additional plant naturalization and invasion, a comprehensive approach to address this problem, including research to identify and select noninvasive forms and types of horticultural plants is urgently needed.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57010,"Alien flora of Europe: species diversity, temporal trends, geographical patterns and research needs",S192560,R57012,Number of species,L120428,1969,"The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004–2008), funded by the 6th Framework Programme of the European Union and aimed at “creating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments”. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964–1980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprizing the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R54571,Exotic and native vegetation establishment following channelization of a western Iberian river,S171659,R54573,Study date,L105399,2001,"Channelization is often a major cause of human impacts on river systems. It affects both hydrogeomorphic features and habitat characteristics and potentially impacts riverine flora and fauna. Human-disturbed fluvial ecosystems also appear to be particularly vulnerable to exotic plant establishment. Following a 12-year recovery period, the distribution, composition and cover of both exotic and native plant species were studied along a Portuguese lowland river segment, which had been subjected to resectioning, straightening and two-stage bank reinforcement, and were compared with those of a nearby, less impacted segment. The species distribution was also related to environmental data. Species richness and floristic composition in the channelized river segment were found to be similar to those at the more ‘natural’ river sites. Floral differences were primarily consistent with the dominance of cover by certain species. However, there were significant differences in exotic and native species richness and cover between the ‘natural’ corridor and the channelized segment, which was more susceptible to invasion by exotic perennial taxa, such as Eryngium pandanifolium, Paspalum paspalodes, Tradescantia fluminensis and Acacia dealbata. Factorial and canonical correspondence analyses revealed considerable patchiness in the distribution of species assemblages. The latter were associated with small differences in substrate composition and their own relative position across the banks and along the river segments in question. Data was also subjected to an unweighted pair-group arithmetic average clustering, and the Indicator Value methodology was applied to selected cluster noda in order to obtain significant indicator species. Copyright © 2001 John Wiley & Sons, Ltd.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57137,Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density,S194184,R57138,Study date,L121587,2003,"Brown, R. L. and Fridley, J. D. 2003. Control of plant species diversity and community invasibility by species immigration: seed richness versus seed density. – Oikos 102: 15–24. Immigration rates of species into communities are widely understood to influence community diversity, which in turn is widely expected to influence the susceptibility of ecosystems to species invasion. For a given community, however, immigration processes may impact diversity by means of two separable components: the number of species represented in seed inputs and the density of seed per species. The independent effects of these components on plant species diversity and consequent rates of invasion are poorly understood. We constructed experimental plant communities through repeated seed additions to independently measure the effects of seed richness and seed density on the trajectory of species diversity during the development of annual plant communities. Because we sowed species not found in the immediate study area, we were able to assess the invasibility of the resulting communities by recording the rate of establishment of species from adjacent vegetation. Early in community development when species only weakly interacted, seed richness had a strong effect on community diversity whereas seed density had little effect. After the plants became established, the effect of seed richness on measured diversity strongly depended on seed density, and disappeared at the highest level of seed density. The ability of surrounding vegetation to invade the experimental communities was decreased by seed density but not by seed richness, primarily because the individual effects of a few sown species could explain the observed invasion rates. These results suggest that seed density is just as important as seed richness in the control of species diversity, and perhaps a more important determinant of community invasibility than seed richness in dynamic plant assemblages.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R56557,The portability of foodweb dynamics: reassembling an Australian eucalypt-psyllid-bird association within California,S187219,R56558,Study date,L116199,2004,"Aims. To evaluate the role of native predators (birds) within an Australian foodweb (lerp psyllids and eucalyptus trees) reassembled in California. Location. Eucalyptus groves within Santa Cruz, California. Methods. We compared bird diversity and abundance between a eucalyptus grove infested with lerp psyllids and a grove that was uninfested, using point counts. We documented shifts in the foraging behaviour of birds between the groves using structured behavioural observations. Additionally, we judged the effect of bird foraging on lerp psyllid abundance using exclosure experiments. Results. We found a greater richness and abundance of Californian birds within a psyllid infested eucalyptus grove compared to a matched non-infested grove, and that Californian birds modify their foraging behaviour within the infested grove in order to concentrate on ingesting psyllids. This suggests that Californian birds could provide indirect top-down benefits to eucalyptus trees similar to those observed in Australia. However, using bird exclosure experiments, we found no evidence of top-down control of lerp psyllids by Californian birds. Main conclusions. We suggest that physiological and foraging differences between Californian and Australian pysllid-eating birds account for the failure to observe top-down control of psyllid populations in California. The increasing rate of non-indigenous species invasions has produced local biotas that are almost entirely composed of non-indigenous species. This example illustrates the complex nature of cosmopolitan native-exotic food webs, and the ecological insights obtainable through their study. © 2004 Blackwell Publishing Ltd.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R54699,Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest,S173192,R54701,Study date,L106676,2005,"Abstract Huisinga, K. D., D. C. Laughlin, P. Z. Fulé, J. D. Springer, and C. M. McGlone (Ecological Restoration Institute and School of Forestry, Northern Arizona University, Box 15017, Flagstaff, AZ 86011). Effects of an intense prescribed fire on understory vegetation in a mixed conifer forest. J. Torrey Bot. Soc. 132: 590–601. 2005.—Intense prescribed fire has been suggested as a possible method for forest restoration in mixed conifer forests. In 1993, a prescribed fire in a dense, never-harvested forest on the North Rim of Grand Canyon National Park escaped prescription and burned with greater intensity and severity than expected. We sampled this burned area and an adjacent unburned area to assess fire effects on understory species composition, diversity, and plant cover. The unburned area was sampled in 1998 and the burned area in 1999; 25% of the plots were resampled in 2001 to ensure that differences between sites were consistent and persistent, and not due to inter-annual climatic differences. Species composition differed significantly between unburned and burned sites; eight species were identified as indicators of the unburned site and thirteen as indicators of the burned site. Plant cover was nearly twice as great in the burned site than in the unburned site in the first years of measurement and was 4.6 times greater in the burned site in 2001. Average and total species richness was greater in the burned site, explained mostly by higher numbers of native annual and biennial forbs. Overstory canopy cover and duff depth were significantly lower in the burned site, and there were significant inverse relationships between these variables and plant species richness and plant cover. 
Greater than 95% of the species in the post-fire community were native and exotic plant cover never exceeded 1%, in contrast with other northern Arizona forests that were dominated by exotic species following high-severity fires. This difference is attributed to the minimal anthropogenic disturbance history (no logging, minimal grazing) of forests in the national park, and suggests that park managers may have more options than non-park managers to use intense fire as a tool for forest conservation and restoration.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57042,Comprehensive review of the records of the biota of the Indian Seas and introduction of non-indigenous species,S192937,R57045,Study date,L120739,2005,"1. Comparison of the pre-1960 faunal survey data for the Indian Seas with that for the post-1960 period showed that 205 non-indigenous taxa were introduced in the post-1960 period; shipping activity is considered a plausible major vector for many of these introductions. 2. Of the non-indigenous taxa, 21% were fish, followed by Polychaeta (<11%), Algae (10%), Crustacea (10%), Mollusca (10%), Ciliata (8%), Fungi (7%), Ascidians (6%) and minor invertebrates (17%). 3. An analysis of the data suggests a correspondence between the shipping routes between India and various regions. There were 75 species common to the Indian Seas and the coastal seas of China and Japan, 63 to the Indo-Malaysian region, 42 to the Mediterranean, 40 and 34 to western and eastern Atlantic respectively, and 41 to Australia and New Zealand. A further 33 species were common to the Caribbean region, 32 to the eastern Pacific, 14 and 24 to the west and east coasts of Africa respectively, 18 to the Baltic, 15 to the middle Arabian Gulf and Red Sea, and 10 to the Brazilian coast. 4. The Indo-Malaysian region can be identified as a centre of xenodiversity for biota from Southeast Asia, China, Japan, Philippines and Australian regions. 5. Of the introduced species, the bivalve Mytilopsis sallei and the serpulid Ficopomatus enigmaticus have become pests in the Indian Seas, consistent with the Williamson and Fitter ‘tens rule’. Included amongst the biota with economic impact are nine fouling and six wood-destroying organisms. 6. Novel occurrences of the human pathogenic vibrios, e.g. Vibrio parahaemolyticus, non-01 Vibrio cholerae, Vibrio vulnificus and Vibrio mimicus and the harmful algal bloom species Alexandrium spp. 
and Gymnodinium nagasakiense in the Indian coastal waters could be attributed to ballast water introductions. 7. Introductions of alien biota could pose a threat to the highly productive tropical coastal waters, estuaries and mariculture sites and could cause economic impacts and ecological surprises. 8. In addition to strict enforcement of a national quarantine policy on ballast water discharges, long-term multidisciplinary research on ballast water invaders is crucial to enhance our understanding of the biodiversity and functioning of the ecosystem. Copyright © 2005 John Wiley & Sons, Ltd.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57010,"Alien flora of Europe: species diversity, temporal trends, geographical patterns and research needs",S192593,R57015,Study date,L120455,2008,"The paper provides the first estimate of the composition and structure of alien plants occurring in the wild in the European continent, based on the results of the DAISIE project (2004–2008), funded by the 6th Framework Programme of the European Union and aimed at “creating an inventory of invasive species that threaten European terrestrial, freshwater and marine environments”. The plant section of the DAISIE database is based on national checklists from 48 European countries/regions and Israel; for many of them the data were compiled during the project and for some countries DAISIE collected the first comprehensive checklists of alien species, based on primary data (e.g., Cyprus, Greece, F. Y. R. O. Macedonia, Slovenia, Ukraine). In total, the database contains records of 5789 alien plant species in Europe (including those native to a part of Europe but alien to another part), of which 2843 are alien to Europe (of extra-European origin). The research focus was on naturalized species; there are in total 3749 naturalized aliens in Europe, of which 1780 are alien to Europe. This represents a marked increase compared to 1568 alien species reported by a previous analysis of data in Flora Europaea (1964–1980). Casual aliens were marginally considered and are represented by 1507 species with European origins and 872 species whose native range falls outside Europe. The highest diversity of alien species is concentrated in industrialized countries with a tradition of good botanical recording or intensive recent research. The highest number of all alien species, regardless of status, is reported from Belgium (1969), the United Kingdom (1779) and Czech Republic (1378). The United Kingdom (857), Germany (450), Belgium (447) and Italy (440) are countries with the most naturalized neophytes. 
The number of naturalized neophytes in European countries is determined mainly by the interaction of temperature and precipitation; it increases with increasing precipitation but only in climatically warm and moderately warm regions. Of the nowadays naturalized neophytes alien to Europe, 50% arrived after 1899, 25% after 1962 and 10% after 1989. At present, approximately 6.2 new species, that are capable of naturalization, are arriving each year. Most alien species have relatively restricted European distributions; half of all naturalized species occur in four or fewer countries/regions, whereas 70% of non-naturalized species occur in only one region. Alien species are drawn from 213 families, dominated by large global plant families which have a weedy tendency and have undergone major radiations in temperate regions (Asteraceae, Poaceae, Rosaceae, Fabaceae, Brassicaceae). There are 1567 genera, which have alien members in European countries, the commonest being globally-diverse genera comprising mainly urban and agricultural weeds (e.g., Amaranthus, Chenopodium and Solanum) or cultivated for ornamental purposes (Cotoneaster, the genus richest in alien species). Only a few large genera which have successfully invaded (e.g., Oenothera, Oxalis, Panicum, Helianthus) are predominantly of non-European origin. Conyza canadensis, Helianthus tuberosus and Robinia pseudoacacia are most widely distributed alien species. Of all naturalized aliens present in Europe, 64.1% occur in industrial habitats and 58.5% on arable land and in parks and gardens. Grasslands and woodlands are also highly invaded, with 37.4 and 31.5%, respectively, of all naturalized aliens in Europe present in these habitats. Mires, bogs and fens are least invaded; only approximately 10% of aliens in Europe occur there. Intentional introductions to Europe (62.8% of the total number of naturalized aliens) prevail over unintentional (37.2%). 
Ornamental and horticultural introductions escaped from cultivation account for the highest number of species, 52.2% of the total. Among unintentional introductions, contaminants of seed, mineral materials and other commodities are responsible for 1091 alien species introductions to Europe (76.6% of all species introduced unintentionally) and 363 species are assumed to have arrived as stowaways (directly associated with human transport but arriving independently of commodity). Most aliens in Europe have a native range in the same continent (28.6% of all donor region records are from another part of Europe where the plant is native); in terms of species numbers the contribution of Europe as a region of origin is 53.2%. Considering aliens to Europe separately, 45.8% of species have their native distribution in North and South America, 45.9% in Asia, 20.7% in Africa and 5.3% in Australasia. Based on species composition, European alien flora can be classified into five major groups: (1) north-western, comprising Scandinavia and the UK; (2) west-central, extending from Belgium and the Netherlands to Germany and Switzerland; (3) Baltic, including only the former Soviet Baltic states; (4) east-central, comprizing the remainder of central and eastern Europe; (5) southern, covering the entire Mediterranean region. The clustering patterns cut across some European bioclimatic zones; cultural factors such as regional trade links and traditional local preferences for crop, forestry and ornamental species are also important by influencing the introduced species pool. Finally, the paper evaluates a state of the art in the field of plant invasions in Europe, points to research gaps and outlines avenues of further research towards documenting alien plant invasions in Europe. The data are of varying quality and need to be further assessed with respect to the invasion status and residence time of the species included. 
This concerns especially the naturalized/casual status; so far, this information is available comprehensively for only 19 countries/regions of the 49 considered. Collating an integrated database on the alien flora of Europe can form a principal contribution to developing a European-wide management strategy of alien species.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R54996,"The demography of introduction pathways, propagule pressure and occurrences of non-native freshwater fish in England",S176142,R54997,Study date,L108886,2010,"1. Biological invasion theory predicts that the introduction and establishment of non-native species is positively correlated with propagule pressure. Releases of pet and aquarium fishes to inland waters has a long history; however, few studies have examined the demographic basis of their importation and incidence in the wild. 2. For the 1500 grid squares (10×10 km) that make up England, data on human demographics (population density, numbers of pet shops, garden centres and fish farms), the numbers of non-native freshwater fishes (from consented licences) imported in those grid squares (i.e. propagule pressure), and the reported incidences (in a national database) of non-native fishes in the wild were used to examine spatial relationships between the occurrence of non-native fishes and the demographic factors associated with propagule pressure, as well as to test whether the demographic factors are statistically reliable predictors of the incidence of non-native fishes, and as such surrogate estimators of propagule pressure. 3. Principal coordinates of neighbour matrices analyses, used to generate spatially explicit models, and confirmatory factor analysis revealed that spatial distributions of non-native species in England were significantly related to human population density, garden centre density and fish farm density. Human population density and the number of fish imports were identified as the best predictors of propagule pressure. 4. Human population density is an effective surrogate estimator of non-native fish propagule pressure and can be used to predict likely areas of non-native fish introductions. 
In conjunction with fish movements, where available, human population densities can be used to support biological invasion monitoring programmes across Europe (and perhaps globally) and to inform management decisions as regards the prioritization of areas for the control of non-native fish introductions. © Crown copyright 2010. Reproduced with the permission of her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R56732,Identification of alien predators that should not be removed for controlling invasive crayfish threatening endangered odonates,S189210,R56733,Study date,L117840,2011,"1. When multiple invasive species coexist in the same ecosystem and their diets change as they grow, determining whether to eradicate any particular invader is difficult because of complex predator–prey interactions. 2. A stable isotope food-web analysis was conducted to explore an appropriate management strategy for three potential alien predators (snakehead Channa argus, bullfrog Rana catesbeiana, red-eared slider turtle Trachemys scripta elegans) of invasive crayfish Procambarus clarkii that had severely reduced the densities of endangered odonates in a pond in Japan. 3. The stable isotope analysis demonstrated that medium- and small-sized snakeheads primarily depended on crayfish and stone moroko Pseudorasbora parva. Both adult and juvenile bullfrogs depended on terrestrial arthropods, and juveniles exhibited a moderate dependence on crayfish. The turtle showed little dependence on crayfish. 4. These results suggest that eradication of snakeheads risks the possibility of mesopredator release, while such risk appears to be low in other alien predators. Copyright © 2011 John Wiley & Sons, Ltd.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R54060,"Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations",S165609,R54061,Study date,L100487,2012,"Hanley ME (2012). Seedling defoliation, plant growth and flowering potential in native- and invasive-range Plantago lanceolata populations. Weed Research52, 252–259. Summary The plastic response of weeds to new environmental conditions, in particular the likely relaxation of herbivore pressure, is considered vital for successful colonisation and spread. However, while variation in plant anti-herbivore resistance between native- and introduced-range populations is well studied, few authors have considered herbivore tolerance, especially at the seedling stage. This study examines variation in seedling tolerance in native (European) and introduced (North American) Plantago lanceolata populations following cotyledon removal at 14 days old. Subsequent effects on plant growth were quantified at 35 days, along with effects on flowering potential at maturity. Cotyledon removal reduced early growth for all populations, with no variation between introduced- or native-range plants. Although more variable, the effects of cotyledon loss on flowering potential were also unrelated to range. The likelihood that generalist seedling herbivores are common throughout North America may explain why no difference in seedling tolerance was apparent. However, increased flowering potential in plants from North American P. lanceolata populations was observed. As increased flowering potential was not lost, even after severe cotyledon damage, the manifestation of phenotypic plasticity in weeds at maturity may nonetheless still be shaped by plasticity in the ability to tolerate herbivory during seedling establishment.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R54066,Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species,S165679,R54067,Study date,L100545,2012,"Luo J & Cardina J (2012). Germination patterns and implications for invasiveness in three Taraxacum (Asteraceae) species. Weed Research 52, 112–121. Summary The ability to germinate across different environments has been considered an important trait of invasive plant species that allows for establishment success in new habitats. Using two alien congener species of Asteraceae –Taraxacum officinale (invasive) and Taraxacum laevigatum laevigatum (non-invasive) – we tested the hypothesis that invasive species germinate better than non-invasives under various conditions. The germination patterns of Taraxacum brevicorniculatum, a contaminant found in seeds of the crop Taraxacum kok-saghyz, were also investigated to evaluate its invasive potential. In four experiments, we germinated seeds along gradients of alternating temperature, constant temperature (with or without light), water potential and following accelerated ageing. Neither higher nor lower germination per se explained invasion success for the Taraxacum species tested here. At alternating temperature, the invasive T. officinale had higher germination than or similar to the non-invasive T. laevigatum. Contrary to predictions, T. laevigatum exhibited higher germination than T. officinale in environments of darkness, low water potential or after the seeds were exposed to an ageing process. These results suggested a complicated role of germination in the success of T. officinale. Taraxacum brevicorniculatum showed the highest germination among the three species in all environments. The invasive potential of this species is thus unclear and will probably depend on its performance at other life stages along environmental gradients.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R54070,Phenotypic variation of an alien species in a new environment: the body size and diet of American mink over time and at local and continental scales,S165723,R54071,Study date,L100581,2012,"Introduced species must adapt their ecology, behaviour, and morphological traits to new conditions. The successful introduction and invasive potential of a species are related to its levels of phenotypic plasticity and genetic polymorphism. We analysed changes in the body mass and length of American mink (Neovison vison) since its introduction into the Warta Mouth National Park, western Poland, in relation to diet composition and colonization progress from 1996 to 2004. Mink body mass decreased significantly during the period of population establishment within the study area, with an average decrease of 13% from 1.36 to 1.18 kg in males and of 16% from 0.83 to 0.70 kg in females. Diet composition varied seasonally and between consecutive years. The main prey items were mammals and fish in the cold season and birds and fish in the warm season. During the study period the proportion of mammals preyed upon increased in the cold season and decreased in the warm season. The proportion of birds preyed upon decreased over the study period, whereas the proportion of fish increased. Following introduction, the strictly aquatic portion of mink diet (fish and frogs) increased over time, whereas the proportion of large prey (large birds, muskrats, and water voles) decreased. The average yearly proportion of large prey and average-sized prey in the mink diet was significantly correlated with the mean body masses of males and females. Biogeographical variation in the body mass and length of mink was best explained by the percentage of large prey in the mink diet in both sexes, and by latitude for females. Together these results demonstrate that American mink rapidly changed their body mass in relation to local conditions. 
This phenotypic variability may be underpinned by phenotypic plasticity and/or by adaptation of quantitative genetic variation. The potential to rapidly change phenotypic variation in this manner is an important factor determining the negative ecological impacts of invasive species. © 2012 The Linnean Society of London, Biological Journal of the Linnean Society, 2012, 105, 681–693.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R56990,Alien aquatic plant species in European countries,S192287,R56991,Study date,L120197,2012,"Hussner A (2012). Alien aquatic plant species in European countries. Weed Research52, 297–306. Summary Alien aquatic plant species cause serious ecological and economic impacts to European freshwater ecosystems. This study presents a comprehensive overview of all alien aquatic plants in Europe, their places of origin and their distribution within the 46 European countries. In total, 96 aquatic species from 30 families have been reported as aliens from at least one European country. Most alien aquatic plants are native to Northern America, followed by Asia and Southern America. Elodea canadensis is the most widespread alien aquatic plant in Europe, reported from 41 European countries. Azolla filiculoides ranks second (25), followed by Vallisneria spiralis (22) and Elodea nuttallii (20). The highest number of alien aquatic plant species has been found in Italy and France (34 species), followed by Germany (27), Belgium and Hungary (both 26) and the Netherlands (24). Even though the number of alien aquatic plants seems relatively small, the European and Mediterranean Plant Protection Organization (EPPO, http://www.eppo.org) has listed 18 of these species as invasive or potentially invasive within the EPPO region. As ornamental trade has been regarded as the major pathway for the introduction of alien aquatic plants, trading bans seem to be the most effective option to reduce the risk of further unintended entry of alien aquatic plants into Europe.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R53261,A phylogenetic approach towards understanding the drivers of plant invasiveness on Robben Island- South Africa,S162636,R53264,Study date,L98187,2013,"Invasive plant species are a considerable threat to ecosystems globally and on islands in particular where species diversity can be relatively low. In this study, we examined the phylogenetic basis of invasion success on Robben Island in South Africa. The flora of the island was sampled extensively and the phylogeny of the local community was reconstructed using the two core DNA barcode regions, rbcLa and matK. By analysing the phylogenetic patterns of native and invasive floras at two different scales, we found that invasive alien species are more distantly related to native species, a confirmation of Darwin's naturalization hypothesis. However, this pattern also holds even for randomly generated communities, therefore discounting the explanatory power of Darwin's naturalization hypothesis as the unique driver of invasion success on the island. These findings suggest that the drivers of invasion success on the island may be linked to species traits rather than their evolutionary history alone, or to the combination thereof. This result also has implications for the invasion management programmes currently being implemented to rehabilitate the native diversity on Robben Island. © 2013 The Linnean Society of London, Botanical Journal of the Linnean Society, 2013, 172, 142–152.",TRUE,year/date
R24,Ecology and Evolutionary Biology,R57900,"The herbivorous arthropods associated with the invasive alien plant, Arundo donax, and the native analogous plant, Phragmites australis, in the Free State Province, South Africa",S202123,R57901,Study date,L127689,2014,"The Enemy Release Hypothesis (ERH) predicts that when plant species are introduced outside their native range there is a release from natural enemies resulting in the plants becoming problematic invasive alien species (Lake & Leishman 2004; Puliafico et al. 2008). The release from natural enemies may benefit alien plants more than simply reducing herbivory because, according to the Evolution of Increased Competitive Ability (EICA) hypothesis, without pressure from herbivores more resources that were previously allocated to defence can be allocated to reproduction (Blossey & Notzold 1995). Alien invasive plants are therefore expected to have simpler herbivore communities with fewer specialist herbivores (Frenzel & Brandl 2003; Heleno et al. 2008; Heger & Jeschke 2014).",TRUE,year/date
R194,Engineering,R141880,Integration of AlN piezoelectric thin films on ultralow fatigue TiNiCu shape memory alloys,S569215,R141883,Film thickness (nm),L399489,2000,"",TRUE,year/date
R194,Engineering,R139969,A Reliable Liquid-Based CMOS MEMS Micro Thermal Convective Accelerometer With Enhanced Sensitivity and Limit of Detection,S558878,R139971,Year,L392786,2021,"In this paper, a liquid-based micro thermal convective accelerometer (MTCA) is optimized by the Rayleigh number (Ra) based compact model and fabricated using the $0.35\mu $ m CMOS MEMS technology. To achieve water-proof performance, the conformal Parylene C coating was adopted as the isolation layer with the accelerated life-testing results of a 9-year-lifetime for liquid-based MTCA. Then, the device performance was characterized considering sensitivity, response time, and noise. Both the theoretical and experimental results demonstrated that fluid with a larger Ra number can provide better performance for the MTCA. More significantly, Ra based model showed its advantage to make a more accurate prediction than the simple linear model to select suitable fluid to enhance the sensitivity and balance the linear range of the device. Accordingly, an alcohol-based MTCA was achieved with a two-order-of magnitude increase in sensitivity (43.8 mV/g) and one-order-of-magnitude decrease in the limit of detection (LOD) ( $61.9~\mu \text{g}$ ) compared with the air-based MTCA. [2021-0092]",TRUE,year/date
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704505,R182136,has beginning,L475321,2010,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). 
Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,year/date
R93,Human and Clinical Nutrition,R182376,Relationship between agricultural biodiversity and dietary diversity of children aged 6-36 months in rural areas of Northern Ghana,S705481,R182380,has beginning,L475844,2013,"ABSTRACT In this study, we investigated the relationship between agricultural biodiversity and dietary diversity of children and whether factors such as economic access may affect this relationship. This paper is based on data collected in a baseline cross-sectional survey in November 2013.The study population comprising 1200 mother-child pairs was selected using a two-stage cluster sampling. Dietary diversity was defined as the number of food groups consumed 24 h prior to the assessment. The number of crop and livestock species produced on a farm was used as the measure of production diversity. Hierarchical regression analysis was used to identify predictors and test for interactions. Whereas the average production diversity score was 4.7 ± 1.6, only 42.4% of households consumed at least four food groups out of seven over the preceding 24-h recall period. Agricultural biodiversity (i.e. variety of animals kept and food groups produced) associated positively with dietary diversity of children aged 6–36 months but the relationship was moderated by household socioeconomic status. The interaction term was also statistically significant [β = −0.08 (95% CI: −0.05, −0.01, p = 0.001)]. Spearman correlation (rho) analysis showed that agricultural biodiversity was positively associated with individual dietary diversity of the child more among children of low socioeconomic status in rural households compared to children of high socioeconomic status (r = 0.93, p < 0.001 versus r = 0.08, p = 0.007). Socioeconomic status of the household also partially mediated the link between agricultural biodiversity and dietary diversity of a child’s diet. 
The effect of increased agricultural biodiversity on dietary diversity was significantly higher in households of lower socioeconomic status. Therefore, improvement of agricultural biodiversity could be one of the best approaches for ensuring diverse diets especially for households of lower socioeconomic status in rural areas of Northern Ghana.",TRUE,year/date
R93,Human and Clinical Nutrition,R182134,On-Farm Crop Species Richness Is Associated with Household Diet Diversity and Quality in Subsistence- and Market-Oriented Farming Households in Malawi,S704506,R182136,Has end,L475322,2013,"BACKGROUND On-farm crop species richness (CSR) may be important for maintaining the diversity and quality of diets of smallholder farming households. OBJECTIVES The objectives of this study were to 1) determine the association of CSR with the diversity and quality of household diets in Malawi and 2) assess hypothesized mechanisms for this association via both subsistence- and market-oriented pathways. METHODS Longitudinal data were assessed from nationally representative household surveys in Malawi between 2010 and 2013 (n = 3000 households). A household diet diversity score (DDS) and daily intake per adult equivalent of energy, protein, iron, vitamin A, and zinc were calculated from 7-d household consumption data. CSR was calculated from plot-level data on all crops cultivated during the 2009-2010 and 2012-2013 agricultural seasons in Malawi. Adjusted generalized estimating equations were used to assess the longitudinal relation of CSR with household diet quality and diversity. RESULTS CSR was positively associated with DDS (β: 0.08; 95% CI: 0.06, 0.12; P < 0.001), as well as daily intake per adult equivalent of energy (kilocalories) (β: 41.6; 95% CI: 20.9, 62.2; P < 0.001), protein (grams) (β: 1.78; 95% CI: 0.80, 2.75; P < 0.001), iron (milligrams) (β: 0.30; 95% CI: 0.16, 0.44; P < 0.001), vitamin A (micrograms of retinol activity equivalent) (β: 25.8; 95% CI: 12.7, 38.9; P < 0.001), and zinc (milligrams) (β: 0.26; 95% CI: 0.13, 0.38; P < 0.001). Neither proportion of harvest sold nor distance to nearest population center modified the relation between CSR and household diet diversity or quality (P ≥ 0.05). Households with greater CSR were more commercially oriented (least-squares mean proportion of harvest sold ± SE, highest tertile of CSR: 17.1 ± 0.52; lowest tertile of CSR: 8.92 ± 1.09) (P < 0.05). CONCLUSION Promoting on-farm CSR may be a beneficial strategy for simultaneously supporting enhanced diet quality and diversity while also creating opportunities for smallholder farmers to engage with markets in subsistence agricultural contexts.",TRUE,year/date
R93,Human and Clinical Nutrition,R182376,Relationship between agricultural biodiversity and dietary diversity of children aged 6-36 months in rural areas of Northern Ghana,S705482,R182380,Has end,L475845,2013,"ABSTRACT In this study, we investigated the relationship between agricultural biodiversity and dietary diversity of children and whether factors such as economic access may affect this relationship. This paper is based on data collected in a baseline cross-sectional survey in November 2013.The study population comprising 1200 mother-child pairs was selected using a two-stage cluster sampling. Dietary diversity was defined as the number of food groups consumed 24 h prior to the assessment. The number of crop and livestock species produced on a farm was used as the measure of production diversity. Hierarchical regression analysis was used to identify predictors and test for interactions. Whereas the average production diversity score was 4.7 ± 1.6, only 42.4% of households consumed at least four food groups out of seven over the preceding 24-h recall period. Agricultural biodiversity (i.e. variety of animals kept and food groups produced) associated positively with dietary diversity of children aged 6–36 months but the relationship was moderated by household socioeconomic status. The interaction term was also statistically significant [β = −0.08 (95% CI: −0.05, −0.01, p = 0.001)]. Spearman correlation (rho) analysis showed that agricultural biodiversity was positively associated with individual dietary diversity of the child more among children of low socioeconomic status in rural households compared to children of high socioeconomic status (r = 0.93, p < 0.001 versus r = 0.08, p = 0.007). Socioeconomic status of the household also partially mediated the link between agricultural biodiversity and dietary diversity of a child’s diet. The effect of increased agricultural biodiversity on dietary diversity was significantly higher in households of lower socioeconomic status. Therefore, improvement of agricultural biodiversity could be one of the best approaches for ensuring diverse diets especially for households of lower socioeconomic status in rural areas of Northern Ghana.",TRUE,year/date
R93,Human and Clinical Nutrition,R182148,"Farm production, market access and dietary diversity in Malawi",S704561,R182150,has beginning,L475355,2014,"Abstract Objective The association between farm production diversity and dietary diversity in rural smallholder households was recently analysed. Most existing studies build on household-level dietary diversity indicators calculated from 7d food consumption recalls. Herein, this association is revisited with individual-level 24 h recall data. The robustness of the results is tested by comparing household- and individual-level estimates. The role of other factors that may influence dietary diversity, such as market access and agricultural technology, is also analysed. Design A survey of smallholder farm households was carried out in Malawi in 2014. Dietary diversity scores are calculated from 24 h recall data. Production diversity scores are calculated from farm production data covering a period of 12 months. Individual- and household-level regression models are developed and estimated. Setting Data were collected in sixteen districts of central and southern Malawi. Subjects Smallholder farm households (n 408), young children (n 519) and mothers (n 408). Results Farm production diversity is positively associated with dietary diversity. However, the estimated effects are small. Access to markets for buying food and selling farm produce and use of chemical fertilizers are shown to be more important for dietary diversity than diverse farm production. Results with household- and individual-level dietary data are very similar. Conclusions Further increasing production diversity may not be the most effective strategy to improve diets in smallholder farm households. Improving access to markets, productivity-enhancing inputs and technologies seems to be more promising.",TRUE,year/date
R93,Human and Clinical Nutrition,R182396,The influence of crop production and socioeconomic factors on seasonal household dietary diversity in Burkina Faso,S705528,R182397,has beginning,L475871,2014,"Households in low-income settings are vulnerable to seasonal changes in dietary diversity because of fluctuations in food availability and access. We assessed seasonal differences in household dietary diversity in Burkina Faso, and determined the extent to which household socioeconomic status and crop production diversity modify changes in dietary diversity across seasons, using data from the nationally representative 2014 Burkina Faso Continuous Multisectoral Survey (EMC). A household dietary diversity score based on nine food groups was created from household food consumption data collected during four rounds of the 2014 EMC. Plot-level crop production data, and data on household assets and education were used to create variables on crop diversity and household socioeconomic status, respectively. Analyses included data for 10,790 households for which food consumption data were available for at least one round. Accounting for repeated measurements and controlling for the complex survey design and confounding covariates using a weighted multi-level model, household dietary diversity was significantly higher during both lean seasons periods, and higher still during the harvest season as compared to the post-harvest season (mean: post-harvest: 4.76 (SE 0.04); beginning of lean: 5.13 (SE 0.05); end of lean: 5.21 (SE 0.05); harvest: 5.72 (SE 0.04)), but was not different between the beginning and the end of lean season. Seasonal differences in household dietary diversity were greater among households with higher food expenditures, greater crop production, and greater monetary value of crops sale (P<0.05). Seasonal changes in household dietary diversity in Burkina Faso may reflect nutritional differences among agricultural households, and may be modified both by households’ socioeconomic status and agricultural characteristics.",TRUE,year/date
R93,Human and Clinical Nutrition,R182148,"Farm production, market access and dietary diversity in Malawi",S704562,R182150,Has end,L475356,2014,"Abstract Objective The association between farm production diversity and dietary diversity in rural smallholder households was recently analysed. Most existing studies build on household-level dietary diversity indicators calculated from 7d food consumption recalls. Herein, this association is revisited with individual-level 24 h recall data. The robustness of the results is tested by comparing household- and individual-level estimates. The role of other factors that may influence dietary diversity, such as market access and agricultural technology, is also analysed. Design A survey of smallholder farm households was carried out in Malawi in 2014. Dietary diversity scores are calculated from 24 h recall data. Production diversity scores are calculated from farm production data covering a period of 12 months. Individual- and household-level regression models are developed and estimated. Setting Data were collected in sixteen districts of central and southern Malawi. Subjects Smallholder farm households (n 408), young children (n 519) and mothers (n 408). Results Farm production diversity is positively associated with dietary diversity. However, the estimated effects are small. Access to markets for buying food and selling farm produce and use of chemical fertilizers are shown to be more important for dietary diversity than diverse farm production. Results with household- and individual-level dietary data are very similar. Conclusions Further increasing production diversity may not be the most effective strategy to improve diets in smallholder farm households. Improving access to markets, productivity-enhancing inputs and technologies seems to be more promising.",TRUE,year/date
R93,Human and Clinical Nutrition,R182396,The influence of crop production and socioeconomic factors on seasonal household dietary diversity in Burkina Faso,S705529,R182397,Has end,L475872,2014,"Households in low-income settings are vulnerable to seasonal changes in dietary diversity because of fluctuations in food availability and access. We assessed seasonal differences in household dietary diversity in Burkina Faso, and determined the extent to which household socioeconomic status and crop production diversity modify changes in dietary diversity across seasons, using data from the nationally representative 2014 Burkina Faso Continuous Multisectoral Survey (EMC). A household dietary diversity score based on nine food groups was created from household food consumption data collected during four rounds of the 2014 EMC. Plot-level crop production data, and data on household assets and education were used to create variables on crop diversity and household socioeconomic status, respectively. Analyses included data for 10,790 households for which food consumption data were available for at least one round. Accounting for repeated measurements and controlling for the complex survey design and confounding covariates using a weighted multi-level model, household dietary diversity was significantly higher during both lean seasons periods, and higher still during the harvest season as compared to the post-harvest season (mean: post-harvest: 4.76 (SE 0.04); beginning of lean: 5.13 (SE 0.05); end of lean: 5.21 (SE 0.05); harvest: 5.72 (SE 0.04)), but was not different between the beginning and the end of lean season. Seasonal differences in household dietary diversity were greater among households with higher food expenditures, greater crop production, and greater monetary value of crops sale (P<0.05). Seasonal changes in household dietary diversity in Burkina Faso may reflect nutritional differences among agricultural households, and may be modified both by households’ socioeconomic status and agricultural characteristics.",TRUE,year/date
R93,Human and Clinical Nutrition,R184009,"Market Access, Production Diversity, and Diet Diversity: Evidence From India",S707049,R184011,has beginning,L478034,2017,"Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households is based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using a Poisson regression specifications and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women’s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households’ market integration (for purchase of non-staples) and increasing women’s awareness about nutrition. These are more impactful than raising production diversity. ",TRUE,year/date
R93,Human and Clinical Nutrition,R184009,"Market Access, Production Diversity, and Diet Diversity: Evidence From India",S707050,R184011,Has end,L478035,2017,"Background: Recent literature, largely from Africa, shows mixed effects of own-production on diet diversity. However, the role of own-production, relative to markets, in influencing food consumption becomes more pronounced as market integration increases. Objective: This paper investigates the relative importance of two factors - production diversity and household market integration - for the intake of a nutritious diet by women and households in rural India. Methods: Data analysis is based on primary data from an extensive agriculture-nutrition survey of 3600 Indian households that was collected in 2017. Dietary diversity scores are constructed for women and households is based on 24-hour and 7-day recall periods. Household market integration is measured as monthly household expenditure on key non-staple food groups. We measure production diversity in two ways - field-level and on-farm production diversity - in order to account for the cereal centric rice-wheat cropping system found in our study locations. The analysis is based on Ordinary Least Squares regressions where we control for a variety of village, household, and individual level covariates that affect food consumption, and village fixed effects. Robustness checks are done by way of using a Poisson regression specifications and 7-day recall period. Results: Conventional measures of field-level production diversity, like the number of crops or food groups grown, have no significant association with diet diversity. In contrast, it is on-farm production diversity (the field-level cultivation of pulses and on-farm livestock management, and kitchen gardens in the longer run) that is significantly associated with improved dietary diversity scores, thus suggesting the importance of non-staples in improving both individual and household dietary diversity. Furthermore, market purchases of non-staples like pulses and dairy products are associated with a significantly higher dietary diversity. Other significant determinants of dietary diversity include women’s literacy and awareness of nutrition. These results mostly remain robust to changes in the recall period of the diet diversity measure and the nature of the empirical specification. Conclusions: This study contributes to the scarce empirical evidence related to diets in India. Additionally, our results indicate some key intervention areas - promoting livestock rearing, strengthening households’ market integration (for purchase of non-staples) and increasing women’s awareness about nutrition. These are more impactful than raising production diversity. ",TRUE,year/date
R351,Industrial and Organizational Psychology,R76567,Individual differences and changes in subjective wellbeing during the early stages of the COVID-19 pandemic.,S352356,R76571,Compared time before the COVID-19 pandemic,L250785,December 2019,"The COVID-19 pandemic has considerably impacted many people's lives. This study examined changes in subjective wellbeing between December 2019 and May 2020 and how stress appraisals and coping strategies relate to individual differences and changes in subjective wellbeing during the early stages of the pandemic. Data were collected at 4 time points from 979 individuals in Germany. Results showed that, on average, life satisfaction, positive affect, and negative affect did not change significantly between December 2019 and March 2020 but decreased between March and May 2020. Across the latter timespan, individual differences in life satisfaction were positively related to controllability appraisals, active coping, and positive reframing, and negatively related to threat and centrality appraisals and planning. Positive affect was positively related to challenge and controllable-by-self appraisals, active coping, using emotional support, and religion, and negatively related to threat appraisal and humor. Negative affect was positively related to threat and centrality appraisals, denial, substance use, and self-blame, and negatively related to controllability appraisals and emotional support. Contrary to expectations, the effects of stress appraisals and coping strategies on changes in subjective wellbeing were small and mostly nonsignificant. These findings imply that the COVID-19 pandemic represents not only a major medical and economic crisis, but also has a psychological dimension, as it can be associated with declines in key facets of people's subjective wellbeing. Psychological practitioners should address potential declines in subjective wellbeing with their clients and attempt to enhance clients' general capability to use functional stress appraisals and effective coping strategies. (PsycInfo Database Record (c) 2020 APA, all rights reserved).",TRUE,year/date
R278,Information Science,R76423,Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English,S351834,R76425,Time period,L250539,from the 1810s to the 2000s,"The Corpus of Historical American English (COHA) contains 400 million words in more than 100,000 texts which date from the 1810s to the 2000s. The corpus contains texts from fiction, popular magazines, newspapers and non-fiction books, and is balanced by genre from decade to decade. It has been carefully lemmatised and tagged for part-of-speech, and uses the same architecture as the Corpus of Contemporary American English (COCA), BYU-BNC, the TIME Corpus and other corpora. COHA allows for a wide range of research on changes in lexis, morphology, syntax, semantics, and American culture and society (as viewed through language change), in ways that are probably not possible with any text archive (e.g., Google Books) or any other corpus of historical American English.",TRUE,year/date
R137681,"Information Systems, Process and Knowledge Management",R140080,Interdisciplinary Online Hackathons as an Approach to Combat the COVID-19 Pandemic: Case Study,S559114,R140092,has date,R140103,April 2020,"Background The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. Objective This case study describes the steps and methods employed in the conduction of a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. Methods This 4-day hackathon was conducted in April 2020, based on six COVID-19–related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess: (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. Results A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback provided by their mentors, and creating a functional prototype. Conclusions This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower costs compared to in-person events. Results on preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.",TRUE,year/date
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S662968,R166456,Number of documents,L448346,1367,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,year/date
R137681,"Information Systems, Process and Knowledge Management",R164003,SoMeSci- A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles,S655079,R164005,number of papers,L445041,1367,"Knowledge about software used in scientific investigations is important for several reasons, for instance, to enable an understanding of provenance and methods involved in data handling. However, software is usually not formally cited, but rather mentioned informally within the scholarly description of the investigation, raising the need for automatic information extraction and disambiguation. Given the lack of reliable ground truth data, we present SoMeSci-Software Mentions in Science-a gold standard knowledge graph of software mentions in scientific articles. It contains high quality annotations (IRR: K=.82) of 3756 software mentions in 1367 PubMed Central articles. Besides the plain mention of the software, we also provide relation labels for additional information, such as the version, the developer, a URL or citations. Moreover, we distinguish between different types, such as application, plugin or programming environment, as well as different types of mentions, such as usage or creation. To the best of our knowledge, SoMeSci is the most comprehensive corpus about software mentions in scientific articles, providing training samples for Named Entity Recognition, Relation Extraction, Entity Disambiguation, and Entity Linking. Finally, we sketch potential use cases and provide baseline results.",TRUE,year/date
R359,Medicine and Health,R76537," Psychische Krise durch Covid-19? Sorgen sinken, Einsamkeit steigt, Lebenszufriedenheit bleibt stabil",S352347,R76538,Compared time before the COVID-19 pandemic,L250776,2019,"This study compares the level of self-reported mental health and wellbeing in Germany at the beginning of the corona crisis (April 2020) with that of previous years. Satisfaction with health rises markedly across all population groups, while worries about health decline markedly across all groups. This suggests that current assessments are strongly shaped by the threat scenario of the pandemic. Subjective loneliness rises very sharply across all groups considered; the increase is somewhat larger among younger people and women. Depression and anxiety symptoms also increase compared with 2019 but are comparable to the level in 2016. Overall wellbeing hardly changes, although small gender differences emerge: while women report somewhat lower wellbeing on average, wellbeing among men has risen slightly. General life satisfaction does not yet change significantly in April 2020 compared with previous years. However, socioeconomic differences by education and income converge: people with low education and people with low income report a slight increase in their life satisfaction, while people with high education and people with high income report a slight decrease. Overall, in the first phase of the corona pandemic, socioeconomic differences do not yet play a major modifying role for mental health. Existing social inequalities in health-related indicators largely persist; some even narrow.",TRUE,year/date
R279,Nanoscience and Nanotechnology,R135569,A Highly Sensitive and Flexible Capacitive Pressure Sensor Based on a Porous Three-Dimensional PDMS/Microsphere Composite,S536276,R135573,Loading-unloading cycles,L378249,1000,"In recent times, polymer-based flexible pressure sensors have been attracting a lot of attention because of their various applications. A highly sensitive and flexible sensor is suggested, capable of being attached to the human body, based on a three-dimensional dielectric elastomeric structure of polydimethylsiloxane (PDMS) and microsphere composite. This sensor has maximal porosity due to macropores created by sacrificial layer grains and micropores generated by microspheres pre-mixed with PDMS, allowing it to operate at a wider pressure range (~150 kPa) while maintaining a sensitivity (of 0.124 kPa−1 in a range of 0~15 kPa) better than in previous studies. The maximized pores can cause deformation in the structure, allowing for the detection of small changes in pressure. In addition to exhibiting a fast rise time (~167 ms) and fall time (~117 ms), as well as excellent reproducibility, the fabricated pressure sensor exhibits reliability in its response to repeated mechanical stimuli (2.5 kPa, 1000 cycles). As an application, we develop a wearable device for monitoring repeated tiny motions, such as the pulse on the human neck and swallowing at the Adam’s apple. This sensory device is also used to detect movements in the index finger and to monitor an insole system in real-time.",TRUE,year/date
R279,Nanoscience and Nanotechnology,R161508,Fabrication of a SnO2 Nanowire Gas Sensor and Sensor Performance for Hydrogen,S644979,R161510,Measuring concentration (ppm),L440645,1000,SnO2 nanowire gas sensors have been fabricated on Cd−Au comb-shaped interdigitating electrodes using thermal evaporation of the mixed powders of SnO2 and active carbon. The self-assembly grown sensors have excellent performance in sensor response to hydrogen concentration in the range of 10 to 1000 ppm. This high response is attributed to the large portion of undercoordinated atoms on the surface of the SnO2 nanowires. The influence of the Debye length of the nanowires and the gap between electrodes in the gas sensor response is examined and discussed.,TRUE,year/date
R58,Neuroscience and Neurobiology,R75482,Prevalence and Incidence of Epilepsy in Italy Based on a Nationwide Database,S346058,R75484,Year of study,R75319,2011,"Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. i 2014 S. Karger AG, Basel",TRUE,year/date
R58,Neuroscience and Neurobiology,R75482,Prevalence and Incidence of Epilepsy in Italy Based on a Nationwide Database,S346051,R75484,has publication year,R75609,2014,"Objectives: To estimate the prevalence and incidence of epilepsy in Italy using a national database of general practitioners (GPs). Methods: The Health Search CSD Longitudinal Patient Database (HSD) has been established in 1998 by the Italian College of GPs. Participants were 700 GPs, representing a population of 912,458. For each patient, information on age and sex, EEG, CT scan, and MRI was included. Prevalent cases with a diagnosis of ‘epilepsy' (ICD9CM: 345*) were selected in the 2011 population. Incident cases of epilepsy were identified in 2011 by excluding patients diagnosed for epilepsy and convulsions and those with EEG, CT scan, MRI prescribed for epilepsy and/or convulsions in the previous years. Crude and standardized (Italian population) prevalence and incidence were calculated. Results: Crude prevalence of epilepsy was 7.9 per 1,000 (men 8.1; women 7.7). The highest prevalence was in patients <25 years and ≥75 years. The incidence of epilepsy was 33.5 per 100,000 (women 35.3; men 31.5). The highest incidence was in women <25 years and in men 75 years or older. Conclusions: Prevalence and incidence of epilepsy in this study were similar to those of other industrialized countries. HSD appears as a reliable data source for the surveillance of epilepsy in Italy. i 2014 S. Karger AG, Basel",TRUE,year/date
R172,Oceanography,R147159,Evidence of high N<sub>2</sub> fixation rates in the temperate northeast Atlantic,S589653,R147161,Depth integrated N2 fixation rate (upper limit),L410443,1533,"Abstract. Diazotrophic activity and primary production (PP) were investigated along two transects (Belgica BG2014/14 and GEOVIDE cruises) off the western Iberian Margin and the Bay of Biscay in May 2014. Substantial N2 fixation activity was observed at 8 of the 10 stations sampled, ranging overall from 81 to 384 µmol N m−2 d−1 (0.7 to 8.2 nmol N L−1 d−1), with two sites close to the Iberian Margin situated between 38.8 and 40.7° N yielding rates reaching up to 1355 and 1533 µmol N m−2 d−1. Primary production was relatively lower along the Iberian Margin, with rates ranging from 33 to 59 mmol C m−2 d−1, while it increased towards the northwest away from the peninsula, reaching as high as 135 mmol C m−2 d−1. In agreement with the area-averaged Chl a satellite data contemporaneous with our study period, our results revealed that post-bloom conditions prevailed at most sites, while at the northwesternmost station the bloom was still ongoing. When converted to carbon uptake using Redfield stoichiometry, N2 fixation could support 1 % to 3 % of daily PP in the euphotic layer at most sites, except at the two most active sites where this contribution to daily PP could reach up to 25 %. At the two sites where N2 fixation activity was the highest, the prymnesiophyte–symbiont Candidatus Atelocyanobacterium thalassa (UCYN-A) dominated the nifH sequence pool, while the remaining recovered sequences belonged to non-cyanobacterial phylotypes. At all the other sites, however, the recovered nifH sequences were exclusively assigned phylogenetically to non-cyanobacterial phylotypes. 
The intense N2 fixation activities recorded at the time of our study were likely promoted by the availability of phytoplankton-derived organic matter produced during the spring bloom, as evidenced by the significant surface particulate organic carbon concentrations. Also, the presence of excess phosphorus signature in surface waters seemed to contribute to sustaining N2 fixation, particularly at the sites with extreme activities. These results provide a mechanistic understanding of the unexpectedly high N2 fixation in productive waters of the temperate North Atlantic and highlight the importance of N2 fixation for future assessment of the global N inventory.",TRUE,year/date
R172,Oceanography,R160146,Variabilities in the fluxes and annual emissions of nitrous oxide from the Arabian Sea,S638075,R160173,Sampling year,L437086,1994,"Extensive measurements of nitrous oxide (N2O) have been made during April–May 1994 (intermonsoon), February–March 1995 (northeast monsoon), July–August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind‐driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean‐to‐atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm−2 s−1 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56–0.76 Tg N2O per year which constitutes 13–17% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.",TRUE,year/date
R172,Oceanography,R109569,An extensive bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central Arabian Sea,S500175,R109586,Sampling year,L361887,1995,"We encountered an extensive surface bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. Carbon fixation by the bloom represented about one-quarter of water column primary productivity while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 × 106 km2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr−1. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. 
Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence.",TRUE,year/date
R172,Oceanography,R138474,An extensive bloom of the N₂-fixing cyanobacterium Trichodesmium erythraeum in the central Arabian Sea,S549626,R138475,Sampling year,L386702,1995,"We encountered an extensive surface bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. Carbon fixation by the bloom represented about one-quarter of water column primary productivity while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 × 106 km2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr−1. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. 
Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence.",TRUE,year/date
R172,Oceanography,R160144,Nitrous oxide emissions from the Arabian Sea,S638055,R160172,Sampling year,L437068,1995,"Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65°E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99–103% during the intermonsoon and 103–230% during the SW monsoon. Computed annual emissions of 0.8–1.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.",TRUE,year/date
R172,Oceanography,R160146,Variabilities in the fluxes and annual emissions of nitrous oxide from the Arabian Sea,S638076,R160173,Sampling year,L437087,1995,"Extensive measurements of nitrous oxide (N2O) have been made during April–May 1994 (intermonsoon), February–March 1995 (northeast monsoon), July–August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind‐driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean‐to‐atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm−2 s−1 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56–0.76 Tg N2O per year which constitutes 13–17% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.",TRUE,year/date
R172,Oceanography,R160152,A revised nitrogen budget for the Arabian Sea,S637936,R160164,Sampling year,L436970,1995,"Despite its importance for the global oceanic nitrogen (N) cycle, considerable uncertainties exist about the N fluxes of the Arabian Sea. On the basis of our recent measurements during the German Arabian Sea Process Study as part of the Joint Global Ocean Flux Study (JGOFS) in 1995 and 1997, we present estimates of various N sources and sinks such as atmospheric dry and wet depositions of N aerosols, pelagic denitrification, nitrous oxide (N2O) emissions, and advective N input from the south. Additionally, we estimated the N burial in the deep sea and the sedimentary shelf denitrification. On the basis of our measurements and literature data, the N budget for the Arabian Sea was reassessed. It is dominated by the N loss due to denitrification, which is balanced by the advective input of N from the south. The role of N fixation in the Arabian Sea is still difficult to assess owing to the small database available; however, there are hints that it might be more important than previously thought. Atmospheric N depositions are important on a regional scale during the intermonsoon in the central Arabian Sea; however, they play only a minor role for the overall N cycling. Emissions of N2O and ammonia, deep‐sea N burial, and N inputs by rivers and marginal seas (i.e., Persian Gulf and Red Sea) are of minor importance. We found that the magnitude of the sedimentary denitrification at the shelf might be ∼17% of the total denitrification in the Arabian Sea, indicating that the shelf sediments might be of considerably greater importance for the N cycling in the Arabian Sea than previously thought. Sedimentary and pelagic denitrification together demand ∼6% of the estimated particulate organic nitrogen export flux from the photic zone. 
The main northward transport of N into the Arabian Sea occurs in the intermediate layers, indicating that the N cycle of the Arabian Sea might be sensitive to variations of the intermediate water circulation of the Indian Ocean.",TRUE,year/date
R172,Oceanography,R160155,Nitrous oxide cycling in the Arabian Sea,S637950,R160165,Sampling year,L436982,1995,"Depth profiles of dissolved nitrous oxide (N2O) were measured in the central and western Arabian Sea during four cruises in May and July–August 1995 and May–July 1997 as part of the German contribution to the Arabian Sea Process Study of the Joint Global Ocean Flux Study. The vertical distribution of N2O in the water column on a transect along 65°E showed a characteristic double-peak structure, indicating production of N2O associated with steep oxygen gradients at the top and bottom of the oxygen minimum zone. We propose a general scheme consisting of four ocean compartments to explain the N2O cycling as a result of nitrification and denitrification processes in the water column of the Arabian Sea. We observed a seasonal N2O accumulation at 600–800 m near the shelf break in the western Arabian Sea. We propose that, in the western Arabian Sea, N2O might also be formed during bacterial oxidation of organic matter by the reduction of IO3 − to I−, indicating that the biogeochemical cycling of N2O in the Arabian Sea during the SW monsoon might be more complex than previously thought. A compilation of sources and sinks of N2O in the Arabian Sea suggested that the N2O budget is reasonably balanced.",TRUE,year/date
R172,Oceanography,R160735,Strong CO2emissions from the Arabian Sea during south-west monsoon,S641229,R160736,Sampling year,L438900,1995,"The partial pressure of CO2 (pCO2) was measured during the 1995 South‐West Monsoon in the Arabian Sea. The Arabian Sea was characterized throughout by a moderate supersaturation of 12–30 µatm. The stable atmospheric pCO2 level was around 345 µatm. An extreme supersaturation was found in areas of coastal upwelling off the Omani coast with pCO2 peak values in surface waters of 750 µatm. Such two‐fold saturation (218%) is rarely found elsewhere in open ocean environments. We also encountered cold upwelled water 300 nm off the Omani coast in the region of Ekman pumping, which was also characterized by a strongly elevated seawater pCO2 of up to 525 µatm. Due to the strong monsoonal wind forcing the Arabian Sea as a whole and the areas of upwelling in particular represent a significant source of atmospheric CO2 with flux densities from around 2 mmol m−2 d−1 in the open ocean to 119 mmol m−2 d−1 in coastal upwelling. Local air masses passing the area of coastal upwelling showed increasing CO2 concentrations, which are consistent with such strong emissions.",TRUE,year/date
R172,Oceanography,R160146,Variabilities in the fluxes and annual emissions of nitrous oxide from the Arabian Sea,S638077,R160173,Sampling year,L437088,1996,"Extensive measurements of nitrous oxide (N2O) have been made during April–May 1994 (intermonsoon), February–March 1995 (northeast monsoon), July–August 1995 and August 1996 (southwest monsoon) in the Arabian Sea. Low N2O supersaturations in the surface waters are observed during intermonsoon compared to those in northeast and southwest monsoons. Spatial distributions of supersaturations manifest the effects of larger mixing during winter cooling and wind‐driven upwelling during monsoon period off the Indian west coast. A net positive flux is observable during all the seasons, with no discernible differences from the open ocean to coastal regions. The average ocean‐to‐atmosphere fluxes of N2O are estimated, using wind speed dependent gas transfer velocity, to be of the order of 0.26, 0.003, and 0.51, and 0.78 pg (pico grams) cm−2 s−1 during northeast monsoon, intermonsoon, and southwest monsoon in 1995 and 1996, respectively. The lower range of annual emission of N2O is estimated to be 0.56–0.76 Tg N2O per year which constitutes 13–17% of the net global oceanic source. However, N2O emission from the Arabian Sea can be as high as 1.0 Tg N2O per year using different gas transfer models.",TRUE,year/date
R172,Oceanography,R160152,A revised nitrogen budget for the Arabian Sea,S637937,R160164,Sampling year,L436971,1997,"Despite its importance for the global oceanic nitrogen (N) cycle, considerable uncertainties exist about the N fluxes of the Arabian Sea. On the basis of our recent measurements during the German Arabian Sea Process Study as part of the Joint Global Ocean Flux Study (JGOFS) in 1995 and 1997, we present estimates of various N sources and sinks such as atmospheric dry and wet depositions of N aerosols, pelagic denitrification, nitrous oxide (N2O) emissions, and advective N input from the south. Additionally, we estimated the N burial in the deep sea and the sedimentary shelf denitrification. On the basis of our measurements and literature data, the N budget for the Arabian Sea was reassessed. It is dominated by the N loss due to denitrification, which is balanced by the advective input of N from the south. The role of N fixation in the Arabian Sea is still difficult to assess owing to the small database available; however, there are hints that it might be more important than previously thought. Atmospheric N depositions are important on a regional scale during the intermonsoon in the central Arabian Sea; however, they play only a minor role for the overall N cycling. Emissions of N2O and ammonia, deep‐sea N burial, and N inputs by rivers and marginal seas (i.e., Persian Gulf and Red Sea) are of minor importance. We found that the magnitude of the sedimentary denitrification at the shelf might be ∼17% of the total denitrification in the Arabian Sea, indicating that the shelf sediments might be of considerably greater importance for the N cycling in the Arabian Sea than previously thought. Sedimentary and pelagic denitrification together demand ∼6% of the estimated particulate organic nitrogen export flux from the photic zone. 
The main northward transport of N into the Arabian Sea occurs in the intermediate layers, indicating that the N cycle of the Arabian Sea might be sensitive to variations of the intermediate water circulation of the Indian Ocean.",TRUE,year/date
R172,Oceanography,R160155,Nitrous oxide cycling in the Arabian Sea,S637951,R160165,Sampling year,L436983,1997,"Depth profiles of dissolved nitrous oxide (N2O) were measured in the central and western Arabian Sea during four cruises in May and July–August 1995 and May–July 1997 as part of the German contribution to the Arabian Sea Process Study of the Joint Global Ocean Flux Study. The vertical distribution of N2O in the water column on a transect along 65°E showed a characteristic double-peak structure, indicating production of N2O associated with steep oxygen gradients at the top and bottom of the oxygen minimum zone. We propose a general scheme consisting of four ocean compartments to explain the N2O cycling as a result of nitrification and denitrification processes in the water column of the Arabian Sea. We observed a seasonal N2O accumulation at 600–800 m near the shelf break in the western Arabian Sea. We propose that, in the western Arabian Sea, N2O might also be formed during bacterial oxidation of organic matter by the reduction of IO3 − to I−, indicating that the biogeochemical cycling of N2O in the Arabian Sea during the SW monsoon might be more complex than previously thought. A compilation of sources and sinks of N2O in the Arabian Sea suggested that the N2O budget is reasonably balanced.",TRUE,year/date
R172,Oceanography,R138479,Enhanced chlorophyll a and primary production in the northern Arabian Sea during the spring intermonsoon due to green Noctiluca scintillans bloom,S549683,R138481,Sampling year,L386751,2000,"Abstract The surface waters of the northeastern Arabian Sea sustained relatively high chlorophyll a (average 0.81±0.80 mg m–3) and primary production (average 29.5±23.6 mgC m–3 d–1) during the early spring intermonsoon 2000. This was caused primarily by a thick algal bloom spread over a vast area between 17–21°N and 66–70°E. Satellite images showed exceptionally high concentration of chlorophyll a in the bloom area, representing the annually occurring ‘spring blooms' during February–March. The causative organism of the bloom was the dinoflagellate, Noctiluca scintillans (Dinophyceae: Noctilucidea), symbiotically associated with an autotrophic prasinophyte Pedinomonas noctilucae. The symbiosis between N. scintillans and P. noctilucae is most likely responsible for their explosive growth (average 3 million cells l–1) over an extensive area, making the northeastern Arabian Sea highly productive (average 607±338 mgC m–2 d–1) even during an oligotrophic period such as spring intermonsoon.",TRUE,year/date
R172,Oceanography,R138350,Intense blooms of Trichodesmium erythraeum (Cyanophyta) in the open waters along east coast of India,S548529,R138351,Sampling year,L385803,2001,"Two blooms of Trichodesmium erythraeum were observed during April 2001, in the open waters of Bay of Bengal and this is the first report from this region. The locations of the bloom were off Karaikkal (10°58'N, 81°50'E) and off south of Calcutta (19° 44'N, 89° 04'), both along east coast of India. Nutrient (nitrate, phosphate, silicate) concentrations in the upper 30 m of the water column showed very low values. High integrated primary production (Bloom 1: 2160 mgC m−2 d−1, Bloom 2: 1740 mgC m−2 d−1) was obtained in these regions, which indicated the enhancement of primary production in the earlier stages of the bloom. Very low NO3-N concentrations, brownish yellow bloom colour, undisturbed patches and high primary production strongly suggested that the blooms were in the growth phase. Low mesozooplankton biomass was found in both locations and was dominated by copepods followed by chaetognaths.",TRUE,year/date
R172,Oceanography,R141337,Nitrogen Uptake in the Northeastern Arabian Sea during Winter Cooling,S565250,R141339,Sampling year,L396654,2003,"The uptake of dissolved inorganic nitrogen by phytoplankton is an important aspect of the nitrogen cycle of oceans. Here, we present nitrate (NO3−) and ammonium (NH4+) uptake rates in the northeastern Arabian Sea using the 15N tracer technique. In this relatively underexplored region, productivity is high during winter due to supply of nutrients by convective mixing caused by the cooling of the surface by the northeast monsoon winds. Studies done during different months (January and late February-early March) of the northeast monsoon 2003 revealed a fivefold increase in the average euphotic zone integrated NO3− uptake from January (2.3 mmolN m−2 d−1) to late February-early March (12.7 mmolN m−2 d−1). The f-ratio during January appeared to be affected by the winter cooling effect and increased by more than 50% from the southernmost station to the northern open ocean stations, indicating hydrographic and meteorological control. Estimates of residence time suggested that NO3− entrained in the water column during January contributed to the development of blooms during late February-early March.",TRUE,year/date
R172,Oceanography,R141340,Quantification of new production during a winter Noctiluca scintillans bloom in the Arabian Sea,S565283,R141341,Sampling year,L396683,2004,"We present new data on the nitrate (new production), ammonium, urea uptake rates and f‐ratios for the eastern Arabian Sea (10° to 22°N) during the late winter (northeast) monsoon, 2004, including regions of green Noctiluca scintillans bloom. A comparison of N‐uptake rates of the Noctiluca dominated northern zone to the southern non‐bloom zone indicates the presence of two biogeochemical regimes during the late winter monsoon: highly productive north and less productive south. The conservative estimates of photic zone‐integrated total N‐uptake and f‐ratio are high in the north (∼19 mmolN m−2 d−1 and 0.82, respectively) during the bloom and low (∼5.5 mmolN m−2 d−1 and 0.38, respectively) in the south. The present and earlier data imply persistence of high N‐uptake and f‐ratio during blooms year after year. This quantification of the enhanced seasonal sequestration of carbon is an important input to global biogeochemical models.",TRUE,year/date
R172,Oceanography,R147149,Evidence for efficient regenerated production and dinitrogen fixation in nitrogen-deficient waters of the South Pacific Ocean: impact on new and export production estimates,S589542,R147151,Sampling year,L410352,2004,"Abstract. One of the major objectives of the BIOSOPE cruise, carried out on the R/V Atalante from October-November 2004 in the South Pacific Ocean, was to establish productivity rates along a zonal section traversing the oligotrophic South Pacific Gyre (SPG). These results were then compared to measurements obtained from the nutrient – replete waters in the Chilean upwelling and around the Marquesas Islands. A dual 13C/15N isotope technique was used to estimate the carbon fixation rates, inorganic nitrogen uptake (including dinitrogen fixation), ammonium (NH4) and nitrate (NO3) regeneration and release of dissolved organic nitrogen (DON). The SPG exhibited the lowest primary production rates (0.15 g C m−2 d−1), while rates were 7 to 20 times higher around the Marquesas Islands and in the Chilean upwelling, respectively. In the very low productive area of the SPG, most of the primary production was sustained by active regeneration processes that fuelled up to 95% of the biological nitrogen demand. Nitrification was active in the surface layer and often balanced the biological demand for nitrate, especially in the SPG. The percentage of nitrogen released as DON represented a large proportion of the inorganic nitrogen uptake (13–15% in average), reaching 26–41% in the SPG, where DON production played a major role in nitrogen cycling. Dinitrogen fixation was detectable over the whole study area; even in the Chilean upwelling, where rates as high as 3 nmoles l−1 d−1 were measured. In these nutrient-replete waters new production was very high (0.69±0.49 g C m−2 d−1) and essentially sustained by nitrate levels. 
In the SPG, dinitrogen fixation, although occurring at much lower daily rates (≈1–2 nmoles l−1 d−1), sustained up to 100% of the new production (0.008±0.007 g C m−2 d−1) which was two orders of magnitude lower than that measured in the upwelling. The annual N2-fixation of the South Pacific is estimated to 21×1012g, of which 1.34×1012g is for the SPG only. Even if our ""snapshot"" estimates of N2-fixation rates were lower than that expected from a recent ocean circulation model, these data confirm that the N-deficiency South Pacific Ocean would provide an ideal ecological niche for the proliferation of N2-fixers which are not yet identified.",TRUE,year/date
R172,Oceanography,R147144,Nitrogen Fixation in Denitrified Marine Waters,S589477,R147145,Sampling year,L410295,2005,"Nitrogen fixation is an essential process that biologically transforms atmospheric dinitrogen gas to ammonia, therefore compensating for nitrogen losses occurring via denitrification and anammox. Currently, inputs and losses of nitrogen to the ocean resulting from these processes are thought to be spatially separated: nitrogen fixation takes place primarily in open ocean environments (mainly through diazotrophic cyanobacteria), whereas nitrogen losses occur in oxygen-depleted intermediate waters and sediments (mostly via denitrifying and anammox bacteria). Here we report on rates of nitrogen fixation obtained during two oceanographic cruises in 2005 and 2007 in the eastern tropical South Pacific (ETSP), a region characterized by the presence of coastal upwelling and a major permanent oxygen minimum zone (OMZ). Our results show significant rates of nitrogen fixation in the water column; however, integrated rates from the surface down to 120 m varied by ∼30 fold between cruises (7.5±4.6 versus 190±82.3 µmol m−2 d−1). Moreover, rates were measured down to 400 m depth in 2007, indicating that the contribution to the integrated rates of the subsurface oxygen-deficient layer was ∼5 times higher (574±294 µmol m−2 d−1) than the oxic euphotic layer (48±68 µmol m−2 d−1). Concurrent molecular measurements detected the dinitrogenase reductase gene nifH in surface and subsurface waters. Phylogenetic analysis of the nifH sequences showed the presence of a diverse diazotrophic community at the time of the highest measured nitrogen fixation rates. Our results thus demonstrate the occurrence of nitrogen fixation in nutrient-rich coastal upwelling systems and, importantly, within the underlying OMZ. They also suggest that nitrogen fixation is a widespread process that can sporadically provide a supplementary source of fixed nitrogen in these regions.",TRUE,year/date
R172,Oceanography,R138370,NITROGEN SOURCES FOR NEW PRODUCTION IN THE NE INDIAN OCEAN,S548718,R138372,Sampling year,L385967,2007,"Productivity measurements were carried out during spring 2007 in the northeastern (NE) Indian Ocean, where light availability is controlled by clouds and surface productivity by nutrient and light availability. New productivity is found to be higher than regenerated productivity at most locations, consistent with the earlier findings from the region. A comparison of the present results with the earlier findings reveals that the region contributes significantly in the sequestration of CO2 from the atmosphere, particularly during spring. Diatom-dominated plankton community is more efficient than those dominated by other organisms in the uptake of CO2 and its export to the deep. Earlier studies on plankton composition suggest that higher new productivity at most locations could also be due to the dominance of diatoms in the region.",TRUE,year/date
R172,Oceanography,R141357,Latitudinal distribution of <i>Trichodesmium</i> spp. and N<sub>2</sub> fixation in the Atlantic Ocean,S565494,R141359,Sampling year,L396848,2007,"Abstract. We have determined the latitudinal distribution of Trichodesmium spp. abundance and community N2 fixation in the Atlantic Ocean along a meridional transect from ca. 30° N to 30° S in November–December 2007 and April–May 2008. The observations from both cruises were highly consistent in terms of absolute magnitude and latitudinal distribution, showing a strong association between Trichodesmium abundance and community N2 fixation. The highest Trichodesmium abundances (mean = 220 trichomes L−1) and community N2 fixation rates (mean = 60 μmol m−2 d−1) occurred in the Equatorial region between 5° S–15° N. In the South Atlantic gyre, Trichodesmium abundance was very low (ca. 1 trichome L−1) but N2 fixation was always measurable, averaging 3 and 10 μmol m−2 d−1 in 2007 and 2008, respectively. We suggest that N2 fixation in the South Atlantic was sustained by other, presumably unicellular, diazotrophs. Comparing these distributions with the geographical pattern in atmospheric dust deposition points to iron supply as the main factor determining the large scale latitudinal variability of Trichodesmium spp. abundance and N2 fixation in the Atlantic Ocean. We observed a marked South to North decrease in surface phosphate concentration, which argues against a role for phosphorus availability in controlling the large scale distribution of N2 fixation. Scaling up from all our measurements (42 stations) results in conservative estimates for total N2 fixation of ∼6 TgN yr−1 in the North Atlantic (0–40° N) and ~1.2 TgN yr−1 in the South Atlantic (0–40° S).",TRUE,year/date
R172,Oceanography,R147144,Nitrogen Fixation in Denitrified Marine Waters,S589478,R147145,Sampling year,L410296,2007,"Nitrogen fixation is an essential process that biologically transforms atmospheric dinitrogen gas to ammonia, therefore compensating for nitrogen losses occurring via denitrification and anammox. Currently, inputs and losses of nitrogen to the ocean resulting from these processes are thought to be spatially separated: nitrogen fixation takes place primarily in open ocean environments (mainly through diazotrophic cyanobacteria), whereas nitrogen losses occur in oxygen-depleted intermediate waters and sediments (mostly via denitrifying and anammox bacteria). Here we report on rates of nitrogen fixation obtained during two oceanographic cruises in 2005 and 2007 in the eastern tropical South Pacific (ETSP), a region characterized by the presence of coastal upwelling and a major permanent oxygen minimum zone (OMZ). Our results show significant rates of nitrogen fixation in the water column; however, integrated rates from the surface down to 120 m varied by ∼30 fold between cruises (7.5±4.6 versus 190±82.3 µmol m−2 d−1). Moreover, rates were measured down to 400 m depth in 2007, indicating that the contribution to the integrated rates of the subsurface oxygen-deficient layer was ∼5 times higher (574±294 µmol m−2 d−1) than the oxic euphotic layer (48±68 µmol m−2 d−1). Concurrent molecular measurements detected the dinitrogenase reductase gene nifH in surface and subsurface waters. Phylogenetic analysis of the nifH sequences showed the presence of a diverse diazotrophic community at the time of the highest measured nitrogen fixation rates. Our results thus demonstrate the occurrence of nitrogen fixation in nutrient-rich coastal upwelling systems and, importantly, within the underlying OMZ. They also suggest that nitrogen fixation is a widespread process that can sporadically provide a supplementary source of fixed nitrogen in these regions.",TRUE,year/date
R172,Oceanography,R141357,Latitudinal distribution of <i>Trichodesmium</i> spp. and N<sub>2</sub> fixation in the Atlantic Ocean,S565495,R141359,Sampling year,L396849,2008,"Abstract. We have determined the latitudinal distribution of Trichodesmium spp. abundance and community N2 fixation in the Atlantic Ocean along a meridional transect from ca. 30° N to 30° S in November–December 2007 and April–May 2008. The observations from both cruises were highly consistent in terms of absolute magnitude and latitudinal distribution, showing a strong association between Trichodesmium abundance and community N2 fixation. The highest Trichodesmium abundances (mean = 220 trichomes L−1) and community N2 fixation rates (mean = 60 μmol m−2 d−1) occurred in the Equatorial region between 5° S–15° N. In the South Atlantic gyre, Trichodesmium abundance was very low (ca. 1 trichome L−1) but N2 fixation was always measurable, averaging 3 and 10 μmol m−2 d−1 in 2007 and 2008, respectively. We suggest that N2 fixation in the South Atlantic was sustained by other, presumably unicellular, diazotrophs. Comparing these distributions with the geographical pattern in atmospheric dust deposition points to iron supply as the main factor determining the large scale latitudinal variability of Trichodesmium spp. abundance and N2 fixation in the Atlantic Ocean. We observed a marked South to North decrease in surface phosphate concentration, which argues against a role for phosphorus availability in controlling the large scale distribution of N2 fixation. Scaling up from all our measurements (42 stations) results in conservative estimates for total N2 fixation of ∼6 TgN yr−1 in the North Atlantic (0–40° N) and ~1.2 TgN yr−1 in the South Atlantic (0–40° S).",TRUE,year/date
R172,Oceanography,R160755,Chemolithoautotrophic production mediating the cycling of the greenhouse gases N<sub>2</sub>O and CH<sub>4</sub> in an upwelling ecosystem,S641443,R160756,Sampling year,L439070,2008,"Abstract. Coastal upwelling ecosystems with marked oxyclines (redoxclines) present high availability of electron donors that favour chemoautotrophy, leading in turn to high N2O and CH4 cycling associated with aerobic NH4+ oxidation (AAO) and CH4 oxidation (AMO). This is the case of the highly productive coastal upwelling area off Central Chile (36° S), where we evaluated the importance of total chemolithoautotrophic vs. photoautotrophic production, the specific contributions of AAO and AMO to chemosynthesis and their role in gas cycling. Chemoautotrophy (involving bacteria and archaea) was studied at a time-series station during monthly (2002–2009) and seasonal cruises (January 2008, September 2008, January 2009) and was assessed in terms of dark carbon assimilation (CA), N2O and CH4 cycling, and the natural C isotopic ratio of particulate organic carbon (δ13POC). Total integrated dark CA fluctuated between 19.4 and 2924 mg C m−2 d−1. It was higher during active upwelling and represented on average 27% of the integrated photoautotrophic production (from 135 to 7626 mg C m−2 d−1). At the oxycline, δ13POC averaged -22.209‰; this was significantly lighter compared to the surface (-19.674‰) and bottom layers (-20.716‰). This pattern, along with low NH4+ content and high accumulations of N2O, NO2- and NO3- within the oxycline, indicates that chemolithoautotrophs and specifically AA oxidisers were active. Dark CA was reduced by 27 to 48% after addition of a specific AAO inhibitor (ATU) and by 24 to 76% with GC7, a specific archaea inhibitor, indicating that AAO and maybe AMO microbes (most of them archaea) were performing dark CA through oxidation of NH4+ and CH4. 
AAO produced N2O at rates from 8.88 to 43 nM d−1 and a fraction of it was effluxed into the atmosphere (up to 42.85 μmol m−2 d−1). AMO, on the other hand, consumed CH4 at rates between 0.41 and 26.8 nM d−1, therefore preventing its efflux to the atmosphere (up to 18.69 μmol m−2 d−1). These findings show that chemically driven chemoautotrophy (with NH4+ and CH4 acting as electron donors) could be more important than previously thought in upwelling ecosystems and open new questions concerning its future relevance.",TRUE,year/date
R172,Oceanography,R109392,First direct measurements of N2 fixation during a Trichodesmium bloom in the eastern Arabian Sea,S500193,R109588,Sampling year,L361903,2009,"We report the first direct estimates of N2 fixation rates measured during spring 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from ∼0.1 to 34 mmol N m−2 d−1 after correcting for the isotopic under‐equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low δ15N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget.",TRUE,year/date
R172,Oceanography,R138486,First direct measurements of N2 fixation during a Trichodesmium bloom in the eastern Arabian Sea: N2 FIXATION IN THE ARABIAN SEA,S549756,R138488,Sampling year,L386808,2009,"We report the first direct estimates of N2 fixation rates measured during spring 2009 using the 15N2 gas tracer technique in the eastern Arabian Sea, which is well known for significant loss of nitrogen due to intense denitrification. Carbon uptake rates are also concurrently estimated using the 13C tracer technique. The N2 fixation rates vary from ∼0.1 to 34 mmol N m−2 d−1 after correcting for the isotopic under‐equilibrium with dissolved air in the samples. These higher N2 fixation rates are consistent with higher chlorophyll a and low δ15N of natural particulate organic nitrogen. Our estimates of N2 fixation are a useful step toward reducing the uncertainty in the nitrogen budget.",TRUE,year/date
R172,Oceanography,R160755,Chemolithoautotrophic production mediating the cycling of the greenhouse gases N<sub>2</sub>O and CH<sub>4</sub> in an upwelling ecosystem,S641444,R160756,Sampling year,L439071,2009,"Abstract. Coastal upwelling ecosystems with marked oxyclines (redoxclines) present high availability of electron donors that favour chemoautotrophy, leading in turn to high N2O and CH4 cycling associated with aerobic NH4+ oxidation (AAO) and CH4 oxidation (AMO). This is the case of the highly productive coastal upwelling area off Central Chile (36° S), where we evaluated the importance of total chemolithoautotrophic vs. photoautotrophic production, the specific contributions of AAO and AMO to chemosynthesis and their role in gas cycling. Chemoautotrophy (involving bacteria and archaea) was studied at a time-series station during monthly (2002–2009) and seasonal cruises (January 2008, September 2008, January 2009) and was assessed in terms of dark carbon assimilation (CA), N2O and CH4 cycling, and the natural C isotopic ratio of particulate organic carbon (δ13POC). Total integrated dark CA fluctuated between 19.4 and 2924 mg C m−2 d−1. It was higher during active upwelling and represented on average 27% of the integrated photoautotrophic production (from 135 to 7626 mg C m−2 d−1). At the oxycline, δ13POC averaged -22.209‰; this was significantly lighter compared to the surface (-19.674‰) and bottom layers (-20.716‰). This pattern, along with low NH4+ content and high accumulations of N2O, NO2- and NO3- within the oxycline, indicates that chemolithoautotrophs and specifically AA oxidisers were active. Dark CA was reduced by 27 to 48% after addition of a specific AAO inhibitor (ATU) and by 24 to 76% with GC7, a specific archaea inhibitor, indicating that AAO and maybe AMO microbes (most of them archaea) were performing dark CA through oxidation of NH4+ and CH4. 
AAO produced N2O at rates from 8.88 to 43 nM d−1 and a fraction of it was effluxed into the atmosphere (up to 42.85 μmol m−2 d−1). AMO, on the other hand, consumed CH4 at rates between 0.41 and 26.8 nM d−1, therefore preventing its efflux to the atmosphere (up to 18.69 μmol m−2 d−1). These findings show that chemically driven chemoautotrophy (with NH4+ and CH4 acting as electron donors) could be more important than previously thought in upwelling ecosystems and open new questions concerning its future relevance.",TRUE,year/date
R172,Oceanography,R109573,N2 Fixation in the Eastern Arabian Sea: Probable Role of Heterotrophic Diazotrophs,S549801,R138494,Sampling year,L386847,2010,"Biogeochemical implications of the global imbalance between the rates of marine dinitrogen (N2) fixation and denitrification have spurred us to understand the former process in the Arabian Sea, which contributes considerably to the global nitrogen budget. Heterotrophic bacteria have gained recent appreciation for their major role in the marine N budget by fixing a significant amount of N2. Accordingly, we hypothesize a probable role of heterotrophic diazotrophs, based on 15N2 enriched isotope labelling dark incubations that yielded rates comparable to those of the light incubations in the eastern Arabian Sea during spring 2010. Maximum areal rates (8 mmol N m-2 d-1) were the highest ever observed anywhere in the world's oceans. Our results suggest that the eastern Arabian Sea gains ~92% of its new nitrogen through N2 fixation. Our results are consistent with the observations made in the same region in the preceding year, i.e., during the spring of 2009.",TRUE,year/date
R172,Oceanography,R147138,Evidence of active dinitrogen fixation in surface waters of the eastern tropical South Pacific during El Niño and La Niña events and evaluation of its potential nutrient controls: N2 FIXATION IN THE ETSP,S589422,R147140,Sampling year,L410248,2010,"Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Niño (February 2010) and La Niña (March–April 2011) conditions, and from Low‐Nutrient, Low‐Chlorophyll (20°S) to High‐Nutrient, Low‐Chlorophyll (HNLC) (10°S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L−1 d−1, with higher rates measured during El Niño conditions compared to La Niña. High N2 fixation rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 µmol L−1, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water‐column integrated N2 fixation rates ranged from 4 to 53 µmol N m−2 d−1 at northern stations, and from 0 to 148 µmol m−2 d−1 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and, surprisingly, to concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro‐environments favorable for heterotrophic N2 fixation.",TRUE,year/date
R172,Oceanography,R155535,Aphotic N2 Fixation in the Eastern Tropical South Pacific Ocean,S623083,R155536,Sampling year,L428979,2010,"We examined rates of N2 fixation from the surface to 2000 m depth in the Eastern Tropical South Pacific (ETSP) during El Niño (2010) and La Niña (2011). Replicated vertical profiles performed under oxygen-free conditions show that N2 fixation takes place both in euphotic and aphotic waters, with rates reaching 155 to 509 µmol N m−2 d−1 in 2010 and 24±14 to 118±87 µmol N m−2 d−1 in 2011. In the aphotic layers, volumetric N2 fixation rates were relatively low (<1.00 nmol N L−1 d−1), but when integrated over the whole aphotic layer, they accounted for 87–90% of total rates (euphotic+aphotic) for the two cruises. Phylogenetic studies performed in microcosm experiments confirm the presence of diazotrophs in the deep waters of the Oxygen Minimum Zone (OMZ), which were comprised of non-cyanobacterial diazotrophs affiliated with nifH clusters 1K (predominantly comprised of α-proteobacteria), 1G (predominantly comprised of γ-proteobacteria), and 3 (sulfate reducing genera of the δ-proteobacteria and Clostridium spp., Vibrio spp.). Organic and inorganic nutrient addition bioassays revealed that amino acids significantly stimulated N2 fixation in the core of the OMZ at all stations tested, as did simple carbohydrates at stations located nearest the coast of Peru/Chile. The episodic supply of these substrates from upper layers is hypothesized to explain the observed variability of N2 fixation in the ETSP.",TRUE,year/date
R172,Oceanography,R147138,Evidence of active dinitrogen fixation in surface waters of the eastern tropical South Pacific during El Niño and La Niña events and evaluation of its potential nutrient controls: N2 FIXATION IN THE ETSP,S589423,R147140,Sampling year,L410249,2011,"Biological N2 fixation rates were quantified in the Eastern Tropical South Pacific (ETSP) during both El Niño (February 2010) and La Niña (March–April 2011) conditions, and from Low‐Nutrient, Low‐Chlorophyll (20°S) to High‐Nutrient, Low‐Chlorophyll (HNLC) (10°S) conditions. N2 fixation was detected at all stations with rates ranging from 0.01 to 0.88 nmol N L−1 d−1, with higher rates measured during El Niño conditions compared to La Niña. High N2 fixation rates were reported at northern stations (HNLC conditions) at the oxycline and in the oxygen minimum zone (OMZ), despite nitrate concentrations up to 30 µmol L−1, indicating that inputs of new N can occur in parallel with N loss processes in OMZs. Water‐column integrated N2 fixation rates ranged from 4 to 53 µmol N m−2 d−1 at northern stations, and from 0 to 148 µmol m−2 d−1 at southern stations, which are of the same order of magnitude as N2 fixation rates measured in the oligotrophic ocean. N2 fixation rates responded significantly to Fe and organic carbon additions in the surface HNLC waters, and, surprisingly, to concomitant Fe and N additions in surface waters at the edge of the subtropical gyre. Recent studies have highlighted the predominance of heterotrophic diazotrophs in this area, and we hypothesize that N2 fixation could be directly limited by inorganic nutrient availability, or indirectly through the stimulation of primary production and the subsequent excretion of dissolved organic matter and/or the formation of micro‐environments favorable for heterotrophic N2 fixation.",TRUE,year/date
R172,Oceanography,R155535,Aphotic N2 Fixation in the Eastern Tropical South Pacific Ocean,S623084,R155536,Sampling year,L428980,2011,"We examined rates of N2 fixation from the surface to 2000 m depth in the Eastern Tropical South Pacific (ETSP) during El Niño (2010) and La Niña (2011). Replicated vertical profiles performed under oxygen-free conditions show that N2 fixation takes place both in euphotic and aphotic waters, with rates reaching 155 to 509 µmol N m−2 d−1 in 2010 and 24±14 to 118±87 µmol N m−2 d−1 in 2011. In the aphotic layers, volumetric N2 fixation rates were relatively low (<1.00 nmol N L−1 d−1), but when integrated over the whole aphotic layer, they accounted for 87–90% of total rates (euphotic+aphotic) for the two cruises. Phylogenetic studies performed in microcosm experiments confirm the presence of diazotrophs in the deep waters of the Oxygen Minimum Zone (OMZ), which were comprised of non-cyanobacterial diazotrophs affiliated with nifH clusters 1K (predominantly comprised of α-proteobacteria), 1G (predominantly comprised of γ-proteobacteria), and 3 (sulfate reducing genera of the δ-proteobacteria and Clostridium spp., Vibrio spp.). Organic and inorganic nutrient addition bioassays revealed that amino acids significantly stimulated N2 fixation in the core of the OMZ at all stations tested, as did simple carbohydrates at stations located nearest the coast of Peru/Chile. The episodic supply of these substrates from upper layers is hypothesized to explain the observed variability of N2 fixation in the ETSP.",TRUE,year/date
R172,Oceanography,R147156,"Biological N2 Fixation in the Upwelling Region off NW Iberia: Magnitude, Relevance, and Players",S589928,R147186,Sampling year,L410673,2014,"The classical paradigm about marine N2 fixation establishes that this process is mainly constrained to nitrogen-poor tropical and subtropical regions, and sustained by the colonial cyanobacterium Trichodesmium spp. and diatom-diazotroph symbiosis. However, the application of molecular techniques has revealed a high phylogenetic diversity and a wide distribution of marine diazotrophs, which extends the range of ocean environments where biological N2 fixation may be relevant. Between February 2014 and December 2015, we carried out 10 one-day samplings in the upwelling system off NW Iberia in order to: 1) investigate the seasonal variability in the magnitude of N2 fixation, 2) determine its biogeochemical role as a mechanism of new nitrogen supply, and 3) quantify the main diazotrophs in the region under contrasting hydrographic regimes. Our results indicate that the magnitude of N2 fixation in this region was relatively low (0.001±0.002 – 0.095±0.024 µmol N m-3 d-1), comparable to the lower end of rates described for the subtropical NE Atlantic. Maximum rates were observed at the surface during both upwelling and relaxation conditions. The comparison with nitrate diffusive fluxes revealed the minor role of N2 fixation (<2%) as a mechanism of new nitrogen supply, despite the N2 fixation activity detected in the region. Quantitative PCR targeting the nifH gene revealed the highest abundances of two sublineages of Candidatus Atelocyanobacterium thalassa or UCYN-A (UCYN-A1 and UCYN-A2) mainly in surface waters during upwelling and relaxation conditions, and of Gammaproteobacteria γ-24774A11 in deep waters during downwelling. Maximum abundances for the three groups were up to 6.7 × 102, 1.5 × 103 and 2.4 × 104 nifH copies L-1, respectively. 
Our findings demonstrate measurable N2 fixation activity and the presence of diazotrophs throughout the year in a nitrogen-rich temperate region.",TRUE,year/date
R172,Oceanography,R147159,Evidence of high N<sub>2</sub> fixation rates in the temperate northeast Atlantic,S589946,R147187,Sampling year,L410689,2014,"Abstract. Diazotrophic activity and primary production (PP) were investigated along two transects (Belgica BG2014/14 and GEOVIDE cruises) off the western Iberian Margin and the Bay of Biscay in May 2014. Substantial N2 fixation activity was observed at 8 of the 10 stations sampled, ranging overall from 81 to 384 µmol N m−2 d−1 (0.7 to 8.2 nmol N L−1 d−1), with two sites close to the Iberian Margin situated between 38.8 and 40.7° N yielding rates reaching up to 1355 and 1533 µmol N m−2 d−1. Primary production was relatively lower along the Iberian Margin, with rates ranging from 33 to 59 mmol C m−2 d−1, while it increased towards the northwest away from the peninsula, reaching as high as 135 mmol C m−2 d−1. In agreement with the area-averaged Chl a satellite data contemporaneous with our study period, our results revealed that post-bloom conditions prevailed at most sites, while at the northwesternmost station the bloom was still ongoing. When converted to carbon uptake using Redfield stoichiometry, N2 fixation could support 1 % to 3 % of daily PP in the euphotic layer at most sites, except at the two most active sites where this contribution to daily PP could reach up to 25 %. At the two sites where N2 fixation activity was the highest, the prymnesiophyte–symbiont Candidatus Atelocyanobacterium thalassa (UCYN-A) dominated the nifH sequence pool, while the remaining recovered sequences belonged to non-cyanobacterial phylotypes. At all the other sites, however, the recovered nifH sequences were exclusively assigned phylogenetically to non-cyanobacterial phylotypes. 
The intense N2 fixation activities recorded at the time of our study were likely promoted by the availability of phytoplankton-derived organic matter produced during the spring bloom, as evidenced by the significant surface particulate organic carbon concentrations. Also, the presence of an excess phosphorus signature in surface waters seemed to contribute to sustaining N2 fixation, particularly at the sites with extreme activities. These results provide a mechanistic understanding of the unexpectedly high N2 fixation in productive waters of the temperate North Atlantic and highlight the importance of N2 fixation for future assessment of the global N inventory.",TRUE,year/date
R172,Oceanography,R147156,"Biological N2 Fixation in the Upwelling Region off NW Iberia: Magnitude, Relevance, and Players",S589929,R147186,Sampling year,L410674,2015,"The classical paradigm about marine N2 fixation establishes that this process is mainly constrained to nitrogen-poor tropical and subtropical regions, and sustained by the colonial cyanobacterium Trichodesmium spp. and diatom-diazotroph symbiosis. However, the application of molecular techniques has revealed a high phylogenetic diversity and a wide distribution of marine diazotrophs, which extends the range of ocean environments where biological N2 fixation may be relevant. Between February 2014 and December 2015, we carried out 10 one-day samplings in the upwelling system off NW Iberia in order to: 1) investigate the seasonal variability in the magnitude of N2 fixation, 2) determine its biogeochemical role as a mechanism of new nitrogen supply, and 3) quantify the main diazotrophs in the region under contrasting hydrographic regimes. Our results indicate that the magnitude of N2 fixation in this region was relatively low (0.001±0.002 – 0.095±0.024 µmol N m-3 d-1), comparable to the lower end of rates described for the subtropical NE Atlantic. Maximum rates were observed at the surface during both upwelling and relaxation conditions. The comparison with nitrate diffusive fluxes revealed the minor role of N2 fixation (<2%) as a mechanism of new nitrogen supply, despite the N2 fixation activity detected in the region. Quantitative PCR targeting the nifH gene revealed the highest abundances of two sublineages of Candidatus Atelocyanobacterium thalassa or UCYN-A (UCYN-A1 and UCYN-A2) mainly in surface waters during upwelling and relaxation conditions, and of Gammaproteobacteria γ-24774A11 in deep waters during downwelling. Maximum abundances for the three groups were up to 6.7 × 102, 1.5 × 103 and 2.4 × 104 nifH copies L-1, respectively. 
Our findings demonstrate measurable N2 fixation activity and the presence of diazotrophs throughout the year in a nitrogen-rich temperate region.",TRUE,year/date
R172,Oceanography,R155530,Nitrogen budgets following a Lagrangian strategy in the Western Tropical South Pacific Ocean: the prominent role of N<sub>2</sub> fixation (OUTPACE cruise),S623012,R155531,Sampling year,L428916,2015,"Abstract. We performed N budgets at three stations in the western tropical South Pacific (WTSP) Ocean during austral summer conditions (Feb.–Mar. 2015) and quantified all major N fluxes both entering the system (N2 fixation, nitrate eddy diffusion, atmospheric deposition) and leaving the system (PN export). Thanks to a Lagrangian strategy, we sampled the same water mass for the entire duration of each long-duration (5-day) station, allowing us to consider only vertical exchanges. Two stations located at the western end of the transect (Melanesian archipelago (MA) waters, LD A and LD B) were oligotrophic and characterized by a deep chlorophyll maximum (DCM) located at 51 ± 18 m and 81 ± 9 m at LD A and LD B, respectively. Station LD C was characterized by a DCM located at 132 ± 7 m, representative of the ultra-oligotrophic waters of the South Pacific gyre (SPG water). N2 fixation rates were extremely high at both LD A (593 ± 51 µmol N m−2 d−1) and LD B (706 ± 302 µmol N m−2 d−1), and the diazotroph community was dominated by Trichodesmium. N2 fixation rates were lower (59 ± 16 µmol N m−2 d−1) at LD C and the diazotroph community was dominated by unicellular N2-fixing cyanobacteria (UCYN). At all stations, N2 fixation was the major source of new N (> 90 %) before atmospheric deposition and upward nitrate fluxes induced by turbulence. N2 fixation contributed circa 8–12 % of primary production in the MA region and 3 % in the SPG water and sustained nearly all new primary production at all stations. The e-ratio (e-ratio = PC export/PP) was maximum at LD A (9.7 %) and was higher than the e-ratio in most studied oligotrophic regions (~ 1 %), indicating a high efficiency of the WTSP in exporting carbon relative to primary production. 
The direct export of diazotrophs assessed by qPCR of the nifH gene in sediment traps represented up to 30.6 % of the PC export at LD A, while their contribution was 5 and
",TRUE,year/date
R172,Oceanography,R155573,Dynamic responses of picophytoplankton to physicochemical variation in the eastern Indian Ocean,S623476,R155575,Sampling year,L429307,2015,"Abstract Picophytoplankton were investigated during spring 2015 and 2016 extending from near‐shore coastal waters to oligotrophic open waters in the eastern Indian Ocean (EIO). They were typically composed of Prochlorococcus (Pro), Synechococcus (Syn), and picoeukaryotes (PEuks). Pro dominated most regions of the entire EIO and were approximately 1–2 orders of magnitude more abundant than Syn and PEuks. Under the influence of physicochemical conditions induced by annual variations of circulations and water masses, no coherent abundance and horizontal distributions of picophytoplankton were observed between spring 2015 and 2016. Although previous studies reported that the limiting effects of nutrients and heavy metals around coastal waters or upwelling zones could constrain Pro growth, Pro abundance showed a strong positive correlation with nutrients, indicating that an increase in nutrient availability, particularly in the oligotrophic EIO, could appreciably elevate their abundance. The exceptional appearance of picophytoplankton with high abundance along the equator appeared to be associated with the advection processes supported by the Wyrtki jets. For vertical patterns of picophytoplankton, a simple conceptual model was built based upon physicochemical parameters. However, Pro and PEuks simultaneously formed a subsurface maximum, while Syn was generally restricted to the upper waters, significantly correlating with the combined effects of temperature, light, and nutrient availability. The average chlorophyll a concentrations (Chl a) of picophytoplankton accounted for more than 49.6% and 44.9% of the total Chl a during the two years, respectively, suggesting that picophytoplankton contributed a significant proportion of the phytoplankton community in the whole EIO.",TRUE,year/date
R172,Oceanography,R155508,N2 Fixation and New Insights Into Nitrification From the Ice-Edge to the Equator in the South Pacific Ocean,S622799,R155510,Sampling year,L428739,2016,"Nitrogen (N) is an essential element for life and controls the magnitude of primary productivity in the ocean. In order to describe the microorganisms that catalyze N transformations in surface waters in the South Pacific Ocean, we collected high-resolution biotic and abiotic data along a 7000 km transect, from the Antarctic ice edge to the equator. The transect, conducted between late austral autumn and early winter 2016, covered major oceanographic features such as the polar front (PF), the subtropical front (STF) and the Pacific equatorial divergence (PED). We measured N2 fixation and nitrification rates and quantified the relative abundances of diazotrophs and nitrifiers in a region where few to no rate measurements are available. Even though N2 fixation rates are usually below detection limits in cold environments, we were able to measure this N pathway at 7/10 stations in the cold and nutrient rich waters near the PF. This result highlights that N2 fixation rates continue to be measured outside the well-known subtropical regions. The majority of the mid to high N2 fixation rates (>∼20 nmol L–1 d–1), however, still occurred in the expected tropical and subtropical regions. High-throughput sequence analyses of the dinitrogenase reductase gene (nifH) revealed that the nifH Cluster I dominated the diazotroph diversity throughout the transect. nifH gene richness did not show a latitudinal trend, nor was it significantly correlated with N2 fixation rates. Nitrification rates above the mixed layer in the Southern Ocean ranged between 56 and 1440 nmol L–1 d–1. Our data showed a decoupling between carbon and N assimilation (NO3– and NH4+ assimilation rates) in winter in the South Pacific Ocean. 
Phytoplankton community structure showed clear changes across the PF, the STF and the PED, defining clear biomes. Overall, these findings provide a better understanding of the ecosystem functionality in the South Pacific Ocean across key oceanographic biomes.",TRUE,year/date
R172,Oceanography,R155573,Dynamic responses of picophytoplankton to physicochemical variation in the eastern Indian Ocean,S623477,R155575,Sampling year,L429308,2016,"Abstract Picophytoplankton were investigated during spring 2015 and 2016 extending from near‐shore coastal waters to oligotrophic open waters in the eastern Indian Ocean (EIO). They were typically composed of Prochlorococcus (Pro), Synechococcus (Syn), and picoeukaryotes (PEuks). Pro dominated most regions of the entire EIO and were approximately 1–2 orders of magnitude more abundant than Syn and PEuks. Under the influence of physicochemical conditions induced by annual variations of circulations and water masses, no coherent abundance and horizontal distributions of picophytoplankton were observed between spring 2015 and 2016. Although previous studies reported that the limiting effects of nutrients and heavy metals around coastal waters or upwelling zones could constrain Pro growth, Pro abundance showed a strong positive correlation with nutrients, indicating that an increase in nutrient availability, particularly in the oligotrophic EIO, could appreciably elevate their abundance. The exceptional appearance of picophytoplankton with high abundance along the equator appeared to be associated with the advection processes supported by the Wyrtki jets. For vertical patterns of picophytoplankton, a simple conceptual model was built based upon physicochemical parameters. However, Pro and PEuks simultaneously formed a subsurface maximum, while Syn was generally restricted to the upper waters, significantly correlating with the combined effects of temperature, light, and nutrient availability. The average chlorophyll a concentrations (Chl a) of picophytoplankton accounted for more than 49.6% and 44.9% of the total Chl a during the two years, respectively, suggesting that picophytoplankton contributed a significant proportion of the phytoplankton community in the whole EIO.",TRUE,year/date
R172,Oceanography,R108803,Dinitrogen fixation rates in the Bay of Bengal during summer monsoon,S549265,R138414,Sampling year,L386428,2018,"Abstract Biological dinitrogen (N2) fixation exerts an important control on oceanic primary production by providing a bioavailable form of nitrogen (such as ammonium) to photosynthetic microorganisms. N2 fixation is dominant in nutrient-poor and warm surface waters. The Bay of Bengal is one such region where no measurements of phototrophic N2 fixation rates exist. The surface water of the Bay of Bengal is generally nitrate-poor and warm due to prevailing stratification and thus could favour N2 fixation. We commenced the first N2 fixation study in the photic zone of the Bay of Bengal using 15N2 gas tracer incubation experiments during summer monsoon 2018. We collected seawater samples from four depths (covering the mixed layer depth of up to 75 m) at eight stations. N2 fixation rates varied from 4 to 75 μmol N m−2 d−1. The contribution of N2 fixation to primary production was negligible (<1%). However, the upper bound of observed N2 fixation rates is higher than the rates measured in other oceanic regimes, such as the Eastern Tropical South Pacific, the Tropical Northwest Atlantic, and the Equatorial and Southern Indian Ocean.",TRUE,year/date
R172,Oceanography,R138474,An extensive bloom of the N₂-fixing cyanobacterium Trichodesmium erythraeum in the central Arabian Sea,S549627,R138475,Sampling period,L386703,May,"We encountered an extensive surface bloom of the N2-fixing cyanobacterium Trichodesmium erythraeum in the central basin of the Arabian Sea during the spring intermonsoon of 1995. The bloom, which occurred during a period of calm winds and relatively high atmospheric iron content, was metabolically active. Carbon fixation by the bloom represented about one-quarter of water column primary productivity while input by N2 fixation could account for a major fraction of the estimated 'new' N demand of primary production. Isotopic measurements of the N in surface suspended material confirmed a direct contribution of N2 fixation to the organic nitrogen pools of the upper water column. Retrospective analysis of NOAA-12 AVHRR imagery indicated that blooms covered up to 2 × 10^6 km^2, or 20% of the Arabian Sea surface, during the period from 22 to 27 May 1995. In addition to their biogeochemical impact, surface blooms of this extent may have secondary effects on sea surface albedo and light penetration as well as heat and gas exchange across the air-sea interface. A preliminary extrapolation based on our observed, non-bloom rates of N2 fixation from our limited sampling in the spring intermonsoon, including a conservative estimate of the input by blooms, suggests N2 fixation may account for an input of about 1 Tg N yr-1. This is substantial, but relatively minor compared to current estimates of the removal of N through denitrification in the basin. However, N2 fixation may also occur in the central basin through the mild winter monsoon, be considerably greater during the fall intermonsoon than we observed during the spring intermonsoon, and may also occur at higher levels in the chronically oligotrophic southern basin. Ongoing satellite observations will help to determine more accurately the distribution and density of Trichodesmium in this and other tropical oceanic basins, as well as resolving the actual frequency and duration of bloom occurrence.",TRUE,year/date
R172,Oceanography,R160144,Nitrous oxide emissions from the Arabian Sea,S638056,R160172,Sampling period,L437069,May,"Dissolved and atmospheric nitrous oxide (N2O) were measured on the legs 3 and 5 of the R/V Meteor cruise 32 in the Arabian Sea. A cruise track along 65°E was followed during both the intermonsoon (May 1995) and the southwest (SW) monsoon (July/August 1995) periods. During the second leg the coastal and open ocean upwelling regions off the Arabian Peninsula were also investigated. Mean N2O saturations for the oceanic regions of the Arabian Sea were in the range of 99–103% during the intermonsoon and 103–230% during the SW monsoon. Computed annual emissions of 0.8–1.5 Tg N2O for the Arabian Sea are considerably higher than previous estimates, indicating that the role of upwelling regions, such as the Arabian Sea, may be more important than previously assumed in global budgets of oceanic N2O emissions.",TRUE,year/date
R138056,Planetary Sciences,R147340,The distribution and purity of anorthosite across the Orientale basin: New perspectives from Moon Mineralogy Mapper data: CRYSTALLINE ANORTHOSITE ACROSS ORIENTALE,S590760,R147342,Plagioclase (nm),L411249,1250,"The Orientale basin is a multiring impact structure on the western limb of the Moon that provides a clear view of the primary lunar crust exposed during basin formation. Previously, near‐infrared reflectance spectra suggested that Orientale's Inner Rook Ring (IRR) is very poor in mafic minerals and may represent anorthosite excavated from the Moon's upper crust. However, detailed assessment of the mineralogy of these anorthosites was prohibited because the available spectroscopic data sets did not identify the diagnostic plagioclase absorption feature near 1250 nm. Recently, however, this absorption has been identified in several spectroscopic data sets, including the Moon Mineralogy Mapper (M3), enabling the unique identification of a plagioclase‐dominated lithology at Orientale for the first time. Here we present the first in‐depth characterization of the Orientale anorthosites based on direct measurement of their plagioclase component. In addition, detailed geologic context of the exposures is discussed based on analysis of Lunar Reconnaissance Orbiter Narrow Angle Camera images for selected anorthosite identifications. The results confirm that anorthosite is overwhelmingly concentrated in the IRR. Comparison with nonlinear spectral mixing models suggests that the anorthosite is exceedingly pure, containing >95 vol % plagioclase in most areas and commonly ~99–100 vol %. These new data place important constraints on magma ocean crystallization scenarios, which must produce a zone of highly pure anorthosite spanning the entire lateral extent of the 430 km diameter IRR.",TRUE,year/date
R11,Science,R32537,The Impact of Schooling Surplus on Earnings: Some Additional Findings»,S110698,R32538,Data collec- tion,L66346,1980,"This paper examines the impact of overeducation (or surplus schooling) on earnings. Overeducated workers are defined as those with educational attainments substantially above the mean for their specific occupations. Two models are estimated using data from the 1980 census. Though our models, data, and measure of overeducation are different from those used by Rumberger (1987), our results are similar. Our results show that overeducated workers often earn less than their adequately educated and undereducated counterparts.",TRUE,year/date
R11,Science,R32489,"The incidence of, and returns to overeducation in the UK",S110427,R32490,Data collec- tion,L66193,1991,"The 1991 wave of the British Household Panel Survey is used to examine the extent of, and the returns to overeducation in the UK. About 11% of the workers are overeducated, while another 9% are undereducated for their job. The results show that the allocation of female workers is more efficient than the allocation of males. The probability of being overeducated decreases with work experience, but increases with tenure. Overeducated workers earn less, while undereducated workers earn more than correctly allocated workers. Both the hypothesis that productivity is fully embodied and the hypothesis that productivity is completely job determined are rejected by the data. It is found that there are substantial wage gains obtainable from a more efficient allocation of skills over jobs.",TRUE,year/date
R11,Science,R29156,Process orientation through enterprise resource planning (ERP): a review of critical issues,S96634,R29157,Year,L59174,2001,"The significant development in global information technologies and the ever-intensifying competitive market climate have both pushed many companies to transform their businesses. Enterprise resource planning (ERP) is seen as one of the most recently emerging process-orientation tools that can enable such a transformation. Its development has presented both researchers and practitioners with new challenges and opportunities. This paper provides a comprehensive review of the state of research in the ERP field relating to process management, organizational change and knowledge management. It surveys current practices, research and development, and suggests several directions for future investigation. Copyright © 2001 John Wiley & Sons, Ltd.",TRUE,year/date
R11,Science,R27018,Robust ship scheduling with multiple time windows,S86873,R27019,Year,L53794,2002,"We present a ship scheduling problem concerned with the pickup and delivery of bulk cargoes within given time windows. As the ports are closed for service at night and during weekends, the wide time windows can be regarded as multiple time windows. Another issue is that the loading/discharging times of cargoes may take several days. This means that a ship will stay idle much of the time in port, and the total time at port will depend on the ship's arrival time. Ship scheduling is associated with uncertainty due to bad weather at sea and unpredictable service times in ports. Our objective is to make robust schedules that are less likely to result in ships staying idle in ports during the weekend, and impose penalty costs for arrivals at risky times (i.e., close to weekends). A set partitioning approach is proposed to solve the problem. The columns correspond to feasible ship schedules that are found a priori. They are generated taking the uncertainty and multiple time windows into account. The computational results show that we can increase the robustness of the schedules at the sacrifice of increased transportation costs. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 611–625, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10033",TRUE,year/date
R11,Science,R30747,Decomposition algorithms for the design of a nonsimultaneous capacitated evacuation tree network,S104178,R31105,Year,L62295,2009,"In this article, we examine the design of an evacuation tree, in which evacuation is subject to capacity restrictions on arcs. The cost of evacuating people in the network is determined by the sum of penalties incurred on arcs on which they travel, where penalties are determined according to a nondecreasing function of time. Given a discrete set of disaster scenarios affecting network population, arc capacities, transit times, and penalty functions, we seek to establish an optimal a priori evacuation tree that minimizes the expected evacuation penalty. The solution strategy is based on Benders decomposition, in which the master problem is a mixed‐integer program and each subproblem is a time‐expanded network flow problem. We provide efficient methods for obtaining primal and dual subproblem solutions, and analyze techniques for improving the strength of the master problem formulation, thus reducing the number of master problem solutions required for the algorithm's convergence. We provide computational results to compare the efficiency of our methods on a set of randomly generated test instances. © 2008 Wiley Periodicals, Inc. NETWORKS, 2009",TRUE,year/date
R11,Science,R151192,"The Role of Social Media during Queensland Floods: An Empirical Investigation on the Existence of Multiple Communities of Practice (MCoPs)",S606030,R151193,paper: publication_year,L419026,2011,"The notion of communities getting together during a disaster to help each other is common. However, how does this communal activity happen within the online world? Here we examine this issue using the Communities of Practice (CoP) approach. We extend CoP to multiple CoP (MCoPs) and examine the role of social media applications in disaster management, extending work done by Ahmed (2011). Secondary data in the form of newspaper reports during 2010 to 2011 were analysed to understand how social media, particularly Facebook and Twitter, facilitated the process of communication among various communities during the Queensland floods in 2010. The results of media-content analysis along with the findings of relevant literature were used to extend our existing understanding on various communities of practice involved in disaster management, their communication tasks and the role of Twitter and Facebook as common conducive platforms of communication during disaster management alongside traditional communication channels.",TRUE,year/date
R11,Science,R33827,Assessment of copy number variation using the Illumina Infinium 1M SNP-array: A comparison of methodological approaches in the Spanish Bladder Cancer/EPICURO study,S117330,R33828,Year,L70901,2011,"High‐throughput single nucleotide polymorphism (SNP)‐array technologies allow to investigate copy number variants (CNVs) in genome‐wide scans and specific calling algorithms have been developed to determine CNV location and copy number. We report the results of a reliability analysis comparing data from 96 pairs of samples processed with CNVpartition, PennCNV, and QuantiSNP for Infinium Illumina Human 1Million probe chip data. We also performed a validity assessment with multiplex ligation‐dependent probe amplification (MLPA) as a reference standard. The number of CNVs per individual varied according to the calling algorithm. Higher numbers of CNVs were detected in saliva than in blood DNA samples regardless of the algorithm used. All algorithms presented low agreement with mean Kappa Index (KI) <66. PennCNV was the most reliable algorithm (KIw=98.96) when assessing the number of copies. The agreement observed in detecting CNV was higher in blood than in saliva samples. When comparing to MLPA, all algorithms identified poorly known copy aberrations (sensitivity = 0.19–0.28). In contrast, specificity was very high (0.97–0.99). Once a CNV was detected, the number of copies was truly assessed (sensitivity >0.62). Our results indicate that the current calling algorithms should be improved for high performance CNV analysis in genome‐wide scans. Further refinement is required to assess CNVs as risk factors in complex diseases.Hum Mutat 32:1–10, 2011. © 2011 Wiley‐Liss, Inc.",TRUE,year/date
R11,Science,R151256,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011",S626534,R156077,Year,L431232,2011,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,year/date
R11,Science,R151258,"Role of Social Media in Social Change: An Analysis of Collective Sense Making During the 2011 Egypt Revolution",S626547,R156078,Year,L431244,2011,"This study explores the role of social media in social change by analyzing Twitter data collected during the 2011 Egypt Revolution. Particular attention is paid to the notion of collective sense making, which is considered a critical aspect for the emergence of collective action for social change. We suggest that collective sense making through social media can be conceptualized as human-machine collaborative information processing that involves an interplay of signs, Twitter grammar, humans, and social technologies. We focus on the occurrences of hashtags among a high volume of tweets to study the collective sense-making phenomena of milling and keynoting. A quantitative Markov switching analysis is performed to understand how the hashtag frequencies vary over time, suggesting structural changes that depict the two phenomena. We further explore different hashtags through a qualitative content analysis and find that, although many hashtags were used as symbolic anchors to funnel online users' attention to the Egypt Revolution, other hashtags were used as part of tweet sentences to share changing situational information. We suggest that hashtags functioned as a means to collect information and maintain situational awareness during the unstable political situation of the Egypt Revolution.",TRUE,year/date
R11,Science,R153575,"ICT-Enabled Community Empowerment in Crisis Response: Social Media in Thailand Flooding 2011.",S616721,R153885,Year,L425300,2011,"In this paper, we examine the emerging use of ICT in social phenomena such as natural disasters. Researchers have acknowledged that a community possesses the capacity to manage the challenges in crisis response on its own. However, extant IS studies focus predominantly on IS use from the crisis response agency’s perspective, which undermines communities’ role. By adopting an empowerment perspective, we focus on understanding how social media empowers communities during crisis response. As such, we present a qualitative case study of the 2011 Thailand flooding. Using an interpretive approach, we show how social media can empower the community from three dimensions of empowerment process (structural, psychological, and resource empowerment) to achieve collective participation, shared identification, and collaborative control in the community. We make two contributions: 1) we explore an emerging social consequence of ICT by illustrating the roles of social media in empowering communities when responding to crises, and 2) we address the literature gap in empowerment by elucidating the actualization process of empowerment that social media as a mediating structure enables.",TRUE,year/date
R11,Science,R31881,Second generation BtL type biofuels – a production cost analysis,S108213,R31965,Year data,L64952,2011,"The objective of this paper is to address the issue of the production cost of second generation biofuels via the thermo-chemical route. The last decade has seen a large number of technical–economic studies of second generation biofuels. As there is a large variation in the announced production costs of second generation biofuels in the literature, this paper clarifies some of the reasons for these variations and helps obtain a clearer picture. This paper presents simulations for two pathways and comparative production pathways previously published in the literature in the years between 2000 and 2011. It also includes a critical comparison and analysis of previously published studies. This paper does not include studies where the production is boosted with a hydrogen injection to improve the carbon yield. The only optimisation included is the recycle of tail gas. It is shown that the fuel can be produced on a large scale at prices of around 1.0–1.4 € per l. Large uncertainties remain however with regard to the precision of the economic predictions, the technology choices, the investment cost estimation and even the financial models to calculate the production costs. The benefit of a tail gas recycle is also examined; its benefit largely depends on the selling price of the produced electricity.",TRUE,year/date
R11,Science,R151222,"Lessons learned from the use of social media in combating a crisis: A case study of 2011 Thailand flooding disaster",S606216,R151223,paper: publication_year,L419167,2012,"Social media have played integral roles in many crises around the world. Thailand faced severe floods between July 2011 and January 2012, when more than 13.6 million people were affected. This 7-month disaster provides a great opportunity to understand the use of social media for managing a crisis project before, during, and after its occurrence. However, current literature lacks a theoretical framework on investigating the relationship between social media and crisis management from the project management perspective. The paper adopts a social media-based crisis management framework and the structuration theory in investigating and analyzing social media. The results suggest that social media should be utilized to meet different information needs in order to achieve the success of managing a future crisis project.",TRUE,year/date
R11,Science,R32987,New insights into the prognostic impact of the karyotype in MDS and correlation with subtypes: evidence from a core dataset of 2124 patients,S114816,R33069,Number of patients studied,L69427,2072,"We have generated a large, unique database that includes morphologic, clinical, cytogenetic, and follow-up data from 2124 patients with myelodysplastic syndromes (MDSs) at 4 institutions in Austria and 4 in Germany. Cytogenetic analyses were successfully performed in 2072 (97.6%) patients, revealing clonal abnormalities in 1084 (52.3%) patients. Numeric and structural chromosomal abnormalities were documented for each patient and subdivided further according to the number of additional abnormalities. Thus, 684 different cytogenetic categories were identified. The impact of the karyotype on the natural course of the disease was studied in 1286 patients treated with supportive care only. Median survival was 53.4 months for patients with normal karyotypes (n = 612) and 8.7 months for those with complex anomalies (n = 166). A total of 13 rare abnormalities were identified with good (+1/+1q, t(1q), t(7q), del(9q), del(12p), chromosome 15 anomalies, t(17q), monosomy 21, trisomy 21, and -X), intermediate (del(11q), chromosome 19 anomalies), or poor (t(5q)) prognostic impact, respectively. The prognostic relevance of additional abnormalities varied considerably depending on the chromosomes affected. For all World Health Organization (WHO) and French-American-British (FAB) classification system subtypes, the karyotype provided additional prognostic information. Our analyses offer new insights into the prognostic significance of rare chromosomal abnormalities and specific karyotypic combinations in MDS.",TRUE,year/date
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694382,R175286,has date ,L466934,fall 2019,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,year/date
R57,Virology,R175292,Dynamics of Antibodies to Ebolaviruses in an Eidolon helvum Bat Colony in,S694475,R175294,has date ,L467019,November 2019,"The ecology of ebolaviruses is still poorly understood and the role of bats in outbreaks needs to be further clarified. Straw-colored fruit bats (Eidolon helvum) are the most common fruit bats in Africa and antibodies to ebolaviruses have been documented in this species. Between December 2018 and November 2019, samples were collected at approximately monthly intervals in roosting and feeding sites from 820 bats from an Eidolon helvum colony. Dried blood spots (DBS) were tested for antibodies to Zaire, Sudan, and Bundibugyo ebolaviruses. The proportion of samples reactive with GP antigens increased significantly with age from 0–9/220 (0–4.1%) in juveniles to 26–158/225 (11.6–70.2%) in immature adults and 10–225/372 (2.7–60.5%) in adult bats. Antibody responses were lower in lactating females. Viral RNA was not detected in 456 swab samples collected from 152 juvenile and 214 immature adult bats. Overall, our study shows that antibody levels increase in young bats suggesting that seroconversion to Ebola or related viruses occurs in older juvenile and immature adult bats. Multiple year monitoring would be needed to confirm this trend. Knowledge of the periods of the year with the highest risk of Ebolavirus circulation can guide the implementation of strategies to mitigate spill-over events.",TRUE,year/date
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694380,R175286,has date ,L466932,2019,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,year/date
R57,Virology,R175284,Porcine Circoviruses and Herpesviruses Are Prevalent in an Austrian Game,S694379,R175286,has date ,L466931,2020,"During the annual hunt in a privately owned Austrian game population in fall 2019 and 2020, 64 red deer (Cervus elaphus), 5 fallow deer (Dama dama), 6 mouflon (Ovis gmelini musimon), and 95 wild boars (Sus scrofa) were shot and sampled for PCR testing. Pools of spleen, lung, and tonsillar swabs were screened for specific nucleic acids of porcine circoviruses. Wild ruminants were additionally tested for herpesviruses and pestiviruses, and wild boars were screened for pseudorabies virus (PrV) and porcine lymphotropic herpesviruses (PLHV-1-3). PCV2 was detectable in 5% (3 of 64) of red deer and 75% (71 of 95) of wild boar samples. In addition, 24 wild boar samples (25%) but none of the ruminants tested positive for PCV3 specific nucleic acids. Herpesviruses were detected in 15 (20%) ruminant samples. Sequence analyses showed the closest relationships to fallow deer herpesvirus and elk gammaherpesvirus. In wild boars, PLHV-1 was detectable in 10 (11%), PLHV-2 in 44 (46%), and PLHV-3 in 66 (69%) of animals, including 36 double and 3 triple infections. No pestiviruses were detectable in any ruminant samples, and all wild boar samples were negative in PrV-PCR. Our data demonstrate a high prevalence of PCV2 and PLHVs in an Austrian game population, confirm the presence of PCV3 in Austrian wild boars, and indicate a low risk of spillover of notifiable animal diseases into the domestic animal population.",TRUE,year/date